Re: travel guide for the next IETF...

2013-01-05 Thread Yoav Nir

On Jan 5, 2013, at 6:51 AM, John Levine jo...@taugh.com wrote:

 So if you don't attend IEEE, quit your whining:  at least you won't have 
 to eat the same hotel food for 2 weeks in a row...
 
 You don't have to eat there.  Check out the reviews of this restaurant
 across the street:
 
 https://plus.google.com/118141773512616354020/about

I wonder if you risk a jaywalking ticket for crossing that street without a car.

Re: Hello ::Please I need to know LEACH protocol standard???

2013-01-05 Thread Abdussalam Baryun
Hi Mahmoud,

LEACH is not a protocol that the IETF has worked on so far; I am not
sure whether it is a standard elsewhere yet!

AB
-

Hello everybody,

I am a Master's degree researcher working on the LEACH routing
protocol for wireless sensor networks, and I need to know which
standard LEACH, its family, or even its Layer 3 role belongs to.

Thank you


Re: travel guide for the next IETF...

2013-01-05 Thread Ole Jacobsen

On Sat, 5 Jan 2013, Yoav Nir wrote:

 
 I wonder if you risk a jaywalking ticket for crossing that street without a 
 car.

You did read the reviews, right? Sounds like you would risk more than 
a ticket...

https://plus.google.com/118141773512616354020/about


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
Hi Hector,

I like your method, which I believe is the reason why RFC 2119 is a
great help for implementors. As I mentioned before, if a protocol
specification is long and complicated, its language will no doubt make
it more difficult for readers or writers. Therefore, it would be nice
if the IETF surveyed this matter as you and Scott suggested.

Thanking you
Abdussalam Baryun

+++
Date: Fri, 04 Jan 2013 22:24:50 -0500
From: Hector Santos hsantos at isdg.net
To: Scott Brim swb at internet2.edu
Subject: Re: I'm struggling with 2119 language again

We have implemented numerous protocols since the 80s. I have a
specific method of approaching a new protocol implementation which
allows for the fastest implementation, proof-of-concept testing, and
above all minimum cost. Why bother with the costly complexities of
implementing SHOULDs and MAYs, if the minimum is not something you
want in the end anyway?

A good data point is that for IP/legal reasons, we do not use other
people's code if we can help it, and in the early days open source was
not as widespread or even acceptable at the corporate level. In other
words, it was all done in-house, purchased, or nothing. I also believe
using other people's code has a high cost as well, since you don't have
an in-house expert understanding the inner workings of the externally
developed software.
o Step 1 for Protocol Implementation:


Look for all the MUST protocol features. This includes the explicit
ones; also be watchful of semantics where a feature is obviously
required or things will break, in case it fell through the cracks.

An important consideration for a MUST is that operators are not given
the opportunity to disable these protocol required features. So from a
coding standpoint, this is one area you don't have to worry about
designing configuration tools, the UI, nor including operation
guidelines and documentation for these inherent protocol required
features.

This is the minimum coding framework to allow for all interop testing
with other software and systems.

The better RFC spec is one that documents a checklist, a
minimum-requirements summary table, etc. A good example is RFC 1123
for the various internet host requirements. I considered RFC 1123 the
bible!

Technical writing tip: please stay away from verbosity, especially
around subjective concepts, and please stop writing as if everyone is
stupid. I always viewed the IETF RFC format as a blend of two steps
of the SE process - functional and technical specifications.
Functional specs tell us what we want and technical specs
tell us how we do it.  So unless a specific functional-requirements
RFC was written, maybe some verbosity is needed, but it should
be minimized.


Generally, depending on the protocol, we can release code using just
the MUST requirements - the bottom-line framework for client/server
communications. Only when this is completely successful can your
implementation move on to extending the protocol with additional
SHOULD and MAY features and their optional
complexities.
o Step 2


Look for the SHOULDs. These are the candies of the protocol. If a
SHOULD is really simple to implement, it can be lumped in with step 1.

I know many believe a SHOULD is really a MUST with an alternative
method, perhaps - a different flavor of MUST to be done nonetheless.

However, I believe these folks play down an important consideration
for implementing SHOULD-based protocol features:
   Developers need to offer these as options to deployment operators.


In other words, if the operator cannot turn it off, then a SHOULD was
incorrectly used where a MUST was meant: required, with no operator
option to disable it.
o Step 3


Look for the MAYs. They are very similar to SHOULDs: a good way to
think of a SHOULD is as a default-enabled (ON out of the box) option,
and a MAY as a default-disabled (OFF out of the box) option.
Summary:

  MUST   - required, no operator option to disable. Of course,
           it's possible to have a hidden, undocumented switch
           for questionable stuff.

  SHOULD - good idea, recommended. If implemented, enable it
           out of the box.

  MAY    - similar to SHOULD, but does not have to be enabled
           out of the box.


In both cases, SHOULD and MAY, the operator can turn these protocol
features off and on. For a MUST, the operator cannot turn the feature
off. The SHOULD/MAY features are documented for operators and
support.
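
As an illustration of this summary, a minimal sketch (Python; all
feature names are hypothetical) of how the three levels might map onto
an implementation's configuration layer:

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        level: str    # "SHOULD" or "MAY"; MUSTs never appear here
        enabled: bool

    # MUST features are unconditional: no operator switch is generated,
    # no UI, no operations documentation for turning them off.
    MUST_FEATURES = ("message-framing", "mandatory-error-codes")

    def optional(name: str, level: str) -> Feature:
        # SHOULD ships ON out of the box; MAY ships OFF out of the box.
        return Feature(name, level, enabled=(level == "SHOULD"))

    config = [optional("retry-tuning", "SHOULD"),
              optional("vendor-extension-x", "MAY")]

    def set_feature(config, name, value):
        # Operators may toggle SHOULD/MAY features only.
        for f in config:
            if f.name == name:
                f.enabled = value
                return
        raise KeyError(name + " is not operator-configurable")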

One last thing: I believe in a concept I call CoComp - Cooperative
Competition - where all competing implementors, including the
protocol technology leader, share a common framework for a minimum
protocol generic to all parties and the internet community. It is the
least required to solve the problem or provide a communication avenue.
All else - the SHOULDs, the MAYs - is added value for competing
implementors. It is generally what differentiates the various
implementors' software.

I personally believe it is doable to write a new 

Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
IMO, too many specs seriously overuse/misuse 2119 language, to the
detriment of readability, common sense, and reserving the terms to
bring attention to those cases where it really is important to
highlight an important point that may not be obvious to a casual
reader/implementor.

Also, to highlight an important point that may not be obvious to
implementors who are non-English speakers/writers.

AB


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Mikael Abrahamsson


As an operator, I purchase equipment and need to write RFQs. I would like 
to be able to ask more than "does the product implement RFC whatever"; I 
want to also ask "Please document all instances where you did not follow 
all MUSTs and SHOULDs, and why."


Otherwise I think there needs to be a better definition of what it means to 
"implement" or "support" an RFC when it comes to completeness, and what this 
means as far as following SHOULDs and MAYs.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Mikael Abrahamsson

On Sat, 5 Jan 2013, Mikael Abrahamsson wrote:

Otherwise I think there needs to be a better definition of what it means 
to "implement" or "support" an RFC when it comes to completeness, and what 
this means as far as following SHOULDs and MAYs.


Also, what it means to follow things in it that are not RFC 2119 language.

--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
(was Re: I'm struggling with 2119 language again)

 Where you want to use MUST is where an implementation might be tempted
 to take a short cut -- to the detriment of the Internet -- but could
 do so without actually breaking interoperability. A good example is
 with retransmissions and exponential backoff. You can implement those
 incorrectly (or not at all), and still get interoperability. I.e.,
 two machines can talk to each other. Maybe you don't get good
 interoperability and maybe not great performance under some
 conditions, but you can still build an interoperable implementation.
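
As a concrete illustration of the retransmission example in the quoted
text, here is a minimal sketch of exponential backoff (Python; the
parameters are illustrative). An implementation that skips the backoff
still interoperates in the narrow sense, which is exactly why a spec
has to say MUST here:

    import random
    import time

    def send_with_backoff(send, max_tries=5, base=1.0, cap=60.0):
        # send() returns True on acknowledgement. A naive loop that
        # retries immediately would still "interoperate", but would
        # hammer the network; hence the MUST-level backoff.
        for attempt in range(max_tries):
            if send():
                return True
            # Double the delay each attempt, cap it, and add jitter so
            # many clients do not retry in lockstep.
            delay = min(cap, base * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
        return False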

 IMO, too many specs seriously overuse/misuse 2119 language, to the
 detriment of readability, common sense, and reserving the terms to
 bring attention to those cases where it really is important to
 highlight an important point that may not be obvious to a casual
 reader/implementor.

Sadly true.

We can fix that by discussing it further or, as Scott mentioned, via the survey [*]

 two machines can talk to each other. Maybe you don't get good
 interoperability and maybe not great performance under some
 conditions, but you can still build an interoperable implementation.

As machine reads and writes may depend on conditions, I don't think
it is true that you can still build an interoperable implementation
while ignoring the use/documentation of requirement-key language
(i.e. all common keys of all languages).

AB


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
We can fix that by discussing it further or, as Scott mentioned, by
making a survey within the IETF [*]

[*] http://www.ietf.org/mail-archive/web/ietf/current/msg76582.html

AB

On 1/5/13, Abdussalam Baryun abdussalambar...@gmail.com wrote:
 (was Re: I'm struggling with 2119 language again)

 Where you want to use MUST is where an implementation might be tempted
 to take a short cut -- to the detriment of the Internet -- but could
 do so without actually breaking interoperability. A good example is
 with retransmissions and exponential backoff. You can implement those
 incorrectly (or not at all), and still get interoperability. I.e.,
 two machines can talk to each other. Maybe you don't get good
 interoperability and maybe not great performance under some
 conditions, but you can still build an interoperable implementation.

 IMO, too many specs seriously overuse/misuse 2119 language, to the
 detriment of readability, common sense, and reserving the terms to
 bring attention to those cases where it really is important to
 highlight an important point that may not be obvious to a casual
 reader/implementor.

Sadly true.

 We can fix that by discussing it further or, as Scott mentioned, via the survey
 [*]

 two machines can talk to each other. Maybe you don't get good
 interoperability and maybe not great performance under some
 conditions, but you can still build an interoperable implementation.

 As machine reads and writes may depend on conditions, I don't think
 it is true that you can still build an interoperable implementation
 while ignoring the use/documentation of requirement-key language
 (i.e. all common keys of all languages).

 AB



Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
I totally agree with you,

AB

+++
As an operator, I purchase equipment and need to write RFQs. I would
like to be able to ask more than "does the product implement RFC
whatever"; I want to also ask "Please document all instances where
you did not follow all MUSTs and SHOULDs, and why."

Otherwise I think there needs to be a better definition of what it means
to "implement" or "support" an RFC when it comes to completeness, and
what this means as far as following SHOULDs and MAYs.
--
Mikael Abrahamsson    email: swmike at swm.pp.se


Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Abdussalam Baryun
Hi Mikael

Also, what it means to follow things in it that are not RFC 2119 language.

It will mean: you should understand me/English/IETF/procedure even if
I don't explain, and you need to understand English well even
if you are a great implementor or a great speaker of programming languages.

AB
===

On 1/5/13, Abdussalam Baryun abdussalambar...@gmail.com wrote:
 I totally agree with you,

 AB

 +++
 As an operator, I purchase equipment and need to write RFQs. I would
 like to be able to ask more than "does the product implement RFC
 whatever"; I want to also ask "Please document all instances where
 you did not follow all MUSTs and SHOULDs, and why."

 Otherwise I think there needs to be a better definition of what it means
 to "implement" or "support" an RFC when it comes to completeness, and
 what this means as far as following SHOULDs and MAYs.
 --
 Mikael Abrahamsson    email: swmike at swm.pp.se



Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Mikael Abrahamsson

On Sat, 5 Jan 2013, Abdussalam Baryun wrote:


Hi Mikael


Also, what it means to follow things in it that are not RFC 2119 language.


It will mean: you should understand me/English/IETF/procedure even if
I don't explain, and you need to understand English well even
if you are a great implementor or a great speaker of programming languages.


The problem here is that I want them to pay back some of the money (or 
take back the equipment entirely and give back all the money) for breach of 
contract, when I discover that they haven't correctly (as in intention and 
interop) implemented the RFC they said they were compliant in 
supporting.


IANAL, but it feels that it should be easier to do this if there are MUSTs 
and SHOULDs in there and I asked them to document all deviations from these.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: WCIT outcome?

2013-01-05 Thread Eliot Lear
Hi,

At its core, the value of the IETF is technical.  We must always make
the best technical standards we can possibly make, adhering to the
values of rough consensus and running code.  Everything else is
secondary or nobody (government or otherwise) will want to implement
what we develop.  It's easy to lose sight of this in this conversation. 
It's an advantage we have over organizations who vote by country, and we
will always have it so long as such votes are allowed and where the
majority of expertise is found in a minority of countries, or where the
voice of expertise is silenced through representation.  Because of
this approach, what happened at WCIT and at WTSA is likely to harm
developing countries more than anyone else, and that is truly unfortunate.

And so what do we need to do?

 1. Keep developing the best technical standards we can develop, based
on rough consensus and running code.
 2. Do not get overly consumed by palace intrigue in other
organizations.  It detracts from (1) above.
 3. While we cannot control others, we can and should occasionally
remind them when they're going to do something that when implemented
as specified would harm those who deploy the technology.
 4. Invite and encourage all to participate in our activities so that
the best ideas flourish and all ideas are tested.

The other thing we need to understand is that the IETF doesn't live
without friends or in a vacuum.  The RIRs, NOGs, other standards bodies,
and ISOC all are working at many different levels, as are vendors.  If
WCIT shows anything, it is that these organizations are being listened
to, at least by many in the developed world.  Why?  Because over 2.5
billion people are connected, thanks to the collaboration of these and
other organizations.  That's moral authority that should not be
underestimated.  Nor should it be taken for granted.  See (1) above. 
And we also shouldn't try to boil the ocean by ourselves or it will
surely impact (1) above.

Can we do a better job on outreach to governments?  Yes.  I'd even
venture to say that the IETF should be held – from time to time – in a
developing country, so that people can more clearly see who we are and
what we do.  But not too often, lest it interfere with (1) above.  If we
keep building the best stuff, they will continue to come, even if there
are bumps along the road.

Eliot



Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread John C Klensin


--On Saturday, January 05, 2013 10:13 +0100 Mikael Abrahamsson
swm...@swm.pp.se wrote:

 The problem here is that I want them to pay back some of the
 money (or take back the equipment entirely and give back all the
 money) for breach of contract, when I discover that they
 haven't correctly (as in intention and interop) implemented
 the RFC they said they were compliant in supporting.
 
 IANAL, but it feels that it should be easier to do this if there
 are MUSTs and SHOULDs in there and I asked them to document all
 deviations from these.

Folks, there is a long-term tension in almost every standards
body (at least those who notice) between

-- a specification whose primary purpose is guidance for
implementers and, in particular, implementers who intend
to do the best job possible to make things work together.

-- a specification whose primary purpose is to set
boundaries for what constitutes a conforming
implementation so that, as Mikael suggests, those whose
implementations do not fall within those boundaries can
be held accountable legally and/or
financially.

The traditional position of the IETF is not only that the first
is much more important than the second, but that the test of a
successful specification is interoperable implementations.   If
a pair of implementations do not interoperate, our assumption is
that something is broken, be it implementation 1, implementation
2, or the spec.  Indeed, our old Draft Standard requirements
took it as at least a default that the problem lay in the spec,
in part because we weren't very worried about bad-faith
implementations.

Similarly, if there are a half-dozen implementations, all of
which are widely deployed and interoperate smoothly but disagree
with the spec, we usually look on the spec with suspicion and
fix it to conform to those implementations.   In an organization
following the second model, the spec is paramount and all six of
those implementations are wrong (and can be held accountable by
third parties for being wrong).

In the IETF, that tension shows up in many ways.  There are not
just arguments about the use of 2119 terms, but statements like
"the audience for IETF Standards and the RFC Series is
implementers."  Those statements are hotly debated because,
among other things, our documents -- like it or not -- are used
in procurements and, if there are problems, in precisely the way
Mikael describes.   Some of them work better in those contexts
than others.

Another tension is that, while interoperability is a good
criterion for some specs, it is less so for others.  "If you do
X, things won't interoperate" can be a valid and important
statement, but so can "If you do Y, things may interoperate
perfectly well in the 'they work together' sense, but there will
be one operational mess after another."  A very narrow reading
of 2119 would permit a MUST NOT for the first case, but not even
a SHOULD NOT for the second because the problem has nothing to
do with a narrow reading of interoperability.  This is the
reason why some of us have never particularly liked the
interoperability-linked language of 2119.  Its intent, I think,
is to shift the concern toward the first model above and away
from the second but the side-effect of the language chosen has
been to create arguments against strong requirements for
operational quality and minimum implementation quality --
requirements that are perfectly consistent with the first model
above (and possibly even helpful for the second).  

It is also why some of us have argued for years that invocation
of 2119 and its definitions should be optional.  But "optional"
has often morphed in practice -- with IESGs that were, IMO,
more interested in rules and their rigid interpretation than in
sensible and flexible understanding of particular situations and
needs -- into "optional only in theory" or "optional only if one
can prove to the satisfaction of the most rigid AD that it is
absolutely necessary."

And, again, that is further complicated by the observation that
IETF Standards are used for procurement and even for litigation
about product quality.   We either need to accept that fact and,
where necessary, adjust our specification style to match or we
run the risk of bodies springing up who will profile our
Standards, write procurement-quality conformance statements for
their profiles, and become, de facto, the real standards-setter
for the marketplace (and obviously do so without any element of
IETF consensus).  

Aside but really, IMO, at the core of these sorts of
discussions: Telling people what the IETF says they can't do is
pretty useless, at least until we organize the Protocol Police
to enforce our positions and, ideally, the Protocol Army and
Gallows to make the Protocol Police credible.  If people think
they need to do something, they will do it without allowing us
to comment.  And, if they think they need to 

Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Hector Santos
Keep in mind only a STD is a real standard. An RFC is still only a 
recommendation, a guideline.  What makes it a pseudo-standard is 
the number of implementations, how widespread it is, and foremost IMO, how 
embedded it is, such that a change would have a negative impact.  At that 
point, an RFC not having STD status probably doesn't matter any more.


At best, you might be able to sue for malpractice (failing to do what 
most experts in the field would be doing) in cases where there is provable 
harm caused by the neglect to implement a well-known practice. But 
maybe only getting your money back is realistic. :)


Mikael Abrahamsson wrote:

On Sat, 5 Jan 2013, Abdussalam Baryun wrote:


Hi Mikael


Also, what it means to follow things in it that are not RFC 2119 language.


It will mean: you should understand me/English/IETF/procedure even if
I don't explain, and you need to understand English well even
if you are a great implementor or a great speaker of programming languages.


The problem here is that I want them to pay back some of the money (or 
take back the equipment entirely and give back all the money) for breach of 
contract, when I discover that they haven't correctly (as in intention 
and interop) implemented the RFC they said they were compliant 
in supporting.


IANAL, but it feels that it should be easier to do this if there are MUSTs 
and SHOULDs in there and I asked them to document all deviations from these.




--
HLS




A proposal for a scientific approach to this question [was Re: I'm struggling with 2119 language again]

2013-01-05 Thread Marc Petit-Huguenin

I read the responses so far, and what can be said today is that there are two
philosophies, with supporters in both camps.  The goal of the IETF is to make
the Internet work better, and I do believe that RFC 2119 is one of the
fundamental tools for reaching this goal, but having two ways to use it does
not help.

One way to find out would be to measure which philosophy results in the best
implementations.  Let's say that we can associate with each Standards Track RFC
one of these two philosophies.  If we had statistics on implementations, then it
would be a simple matter of counting which one produces the fewest
interoperability problems, security issues, and congestion problems (are there
other criteria?).  But as far as I know, no such data is available -
maybe we should start collecting it, but that does not help with our current
problem.

Another way to look at it would be to run the following experiment:

1. Someone designs a new protocol, something simple but not obvious, writes it
in a formal language, and keeps it secret.

2. The same protocol is rewritten in RFC language but in two different
variants according to the two philosophies.  These also are kept secret.

3. The two variants are distributed randomly to a set of volunteer
implementers, who all implement the spec they received as best they can and
submit the result back, keeping their implementations secret.

4.  A test harness is written from the formal description, and all
implementations are run against each other, collecting stats related to the
criteria listed above (some criteria may be tricky to assess automatically;
we'll see).

5. Results are published, together with the protocol in formal form, the
specs, the results and the recommendation for one or the other philosophy.


That could be an interesting research project, and could even find some
funding from interested parties.
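
For step 4, a sketch of such a harness (Python; the Impl record and the
run_pair test callable are assumptions for illustration, not part of
the proposal):

    import itertools
    from collections import namedtuple

    # variant is "A" or "B", i.e. which philosophy's spec the
    # implementer received.
    Impl = namedtuple("Impl", ["name", "variant"])

    def interop_matrix(implementations, run_pair):
        # Run every pair of implementations against each other and
        # tally observed problems (interoperability, security,
        # congestion) per pair of spec variants.
        failures = {}
        for a, b in itertools.combinations(implementations, 2):
            key = tuple(sorted((a.variant, b.variant)))
            failures.setdefault(key, []).extend(run_pair(a, b))
        return failures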


On 01/03/2013 09:15 PM, Dean Willis wrote:
 
 I've always held to the idea that RFC 2119 language is for defining levels
 of compliance to requirements, and is best used very sparingly (as
 recommended in RFC 2119 itself). To me, RFC 2119 language doesn't make
 behavior normative -- rather, it describes the implications of doing
 something different than the defined behavior, from "will break the
 protocol if you change it" to "we have reason to think that there might be
 a reason we don't want to describe here that might influence you not to do
 this" to "here are some reasons that would cause you to do something
 different" and on to "doing something different might offend the
 sensibilities of the protocol author, but probably won't hurt anything
 else."
 
 But I'm ghost-editing a document right now whose gen-art review suggested 
 replacing the vast majority of "is", "does", and "are" prose with MUST. The 
 comments seem to indicate that protocol-defining text not using RFC 2119 
 language (specifically MUST) is not normative.
 
 This makes me cringe. But my co-editor likes it a lot. And I see smart
 people like Ole also echoing the thought that RFC 2119 language is what
 makes text normative.
 
 For example, the protocol under discussion uses TLS or DTLS for a plethora
 of security reasons. So, every time the draft discusses sending a response
 to a request, we would say "the node MUST send a response, and this
 response MUST be constructed by (insert some concatenation procedure here)
 and MUST be transmitted using TLS or DTLS."
 
 Or, a more specific example:
 
 For the text:
 
 "In order to originate a message to a given Node-ID or Resource-ID, a node 
 constructs an appropriate destination list."
 
 
 The Gen-ART comment here is: First sentence: "a node constructs" -> "a
 node MUST construct"
 
 
 We'll literally end up with hundreds of RFC 2119 invocations (mostly MUST)
 in a protocol specification.
 
 Is this a good or bad thing? My co-editor and I disagree -- he likes 
 formalization of the description language, and I like the English prose.
 But it raises process questions for the IETF as a whole:
 
 Are we deliberately evolving our language to use RFC 2119 terms as the 
 principal verbs of a formal specification language?
 
 Either way, I'd like to see some consensus. Because my head is throbbing
 and I want to know if it MUST hurt, SHOULD hurt, or just hurts. But I
 MUST proceed in accordance with consensus, because to do otherwise would
 undermine the clarity of our entire specification family.
 

-- 
Marc Petit-Huguenin
Email: m...@petit-huguenin.org
Blog: http://blog.marc.petit-huguenin.org
Profile: http://www.linkedin.com/in/petithug

Re: Making RFC2119 key language easier to Protocol Readers

2013-01-05 Thread Melinda Shore
On 1/4/13 11:39 PM, Mikael Abrahamsson wrote:
 As an operator, I purchase equipment and need to write RFQs. I would
 like to be able to ask more than "does the product implement RFC
 whatever"; I want to also ask "Please document all instances where you
 did not follow all MUSTs and SHOULDs, and why."
 
 Otherwise I think there needs to be a better definition of what it means
 to "implement" or "support" an RFC when it comes to completeness, and what
 this means as far as following SHOULDs and MAYs.

I think being clear about who our constituencies are and what they
need is probably key to coming to any sort of agreement on any of this.
We've often complained about the lack of operator participation, and
Mikael's comments may be an example of the consequences of that - that we
don't fully understand how our documents are being used.

That said, frankly, I've tended to assume that language in standards
documents is normative unless otherwise specified, and that highly
legalistic language is difficult to read.  On a third hand, it would not
be a small thing if non-native English speakers had an easier time with
our documents because every single normative thing in a document was
flagged through the use of 2119 language.

So, basically where that leaves me is: 1) language in standards-track
documents is already normative by default; 2) however, if inserting
2119 language in all standards-track documents will make documents more
useful to people who actually run networks and/or clearer to people
whose first language is not English, it's probably worth tightening up
our language.

Melinda



Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Robert Sayre
Mark,

The WG's reasoning, as stated in your message below, seems flawed.
Messages since your last communication on this matter have shown:

1) The ambiguity around arrays makes the patch format unsuitable for
common concurrent editing algorithms.
2) The ambiguity is likely to occur in the real world, for a couple of
different reasons.
3) It's not possible to tell whether a JSON Pointer document is
syntactically correct in isolation.

Additionally, you raised this point in your message below:

 the patch author already has to understand the semantics of the document 
 they're patching

That claim does not seem to be well-justified, and it could be
meaningless to the person implementing patch software (for example:
https://github.com/sayrer/json-sync).

This issue is a problem in practice, and it's a problem in theory as
well. JSON-Patch messages aren't sufficiently self-descriptive, so
they aren't appropriate for use in a RESTful system.

A response containing technical reasoning seems in order, since the
points raised by myself and others on this issue are unrelated to the
WG's previous thinking.

- Rob

On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
 Robert,

 This was discussed extensively in the Working Group.

 The root of the issue was that some people reflexively felt that this was 
 necessary, but upon reflection, we decided it wasn't; although it seems 
 natural to some, especially those coming from a static language background, 
 it didn't provide any utility.

 You might argue that someone who (for example) adds to /foo/1 in the 
 mistaken belief that it's an array, when in fact it's an object, will get 
 surprising results. That's true, but if we were to solve this problem, that 
 person would still need to understand the underlying semantics of foo to do 
 anything useful to it -- and I'm not hearing anyone complain about that (I 
 hope).

 Put another way -- do you really think that people PATCHing something as if 
 it's an array (when in fact it's an object) is a significant, real-world 
 problem, given that the patch author already has to understand the semantics 
 of the document they're patching? I don't, and the WG didn't either.

 Regards,


 On 17/12/2012, at 3:36 PM, Robert Sayre say...@gmail.com wrote:

 The cost of fixing it seems low, either by changing the path syntax of
 JSON pointer or changing the names of operations applied to arrays.
 Array-like objects are common enough in JavaScript to make this a
 worry. The other suggestions either assume a particular policy for
 concurrent edits or require more machinery (test operation etc).
 Wouldn't it be simpler to make the patch format more precise?

 - Rob

 On Sun, Dec 16, 2012 at 4:33 PM, Matthew Morley m...@mpcm.com wrote:
 I am usually lurking and struggling to keep up with these posts. But, I
 concur with James, this really is a non-issue in practice.

 The JSON Pointer expresses a path down a JSON object to a specific context.
 The Patch expresses a change within or to that context.
 Everything about both standards is about that end context.

 If you want to confirm the type of the context before applying a patch, this
 should probably be part of a test operation. I'm not sure if this is
 possible at this point (?), but that is where the logic should exist.



 On Sun, Dec 16, 2012 at 12:22 AM, James M Snell jasn...@gmail.com wrote:




 On Sat, Dec 15, 2012 at 8:36 PM, Robert Sayre say...@gmail.com wrote:

 On Fri, Dec 14, 2012 at 9:17 AM, Markus Lanthaler
 markus.lantha...@gmx.net wrote:

 Hmm.. I think that's quite problematic. Especially considering how JSON
 Pointer is used in JSON Patch.

 I agree--I provided the same feedback privately. It seems
 straightforwardly unsound.


 In practice it doesn't seem to be much of an issue.

 Specifically, if I GET an existing document and get an etag with the JSON,
 then make some changes and send a PATCH with If-Match, the fact that any
 given pointer could point to an array or object member doesn't really 
 matter
 much.

 For example:

 GET /the/doc HTTP/1.1

 HTTP/1.1 200 OK
 ETag: "my-document-etag"
 Content-Type: application/json

 {"1":"foo"}

 PATCH /the/doc HTTP/1.1
 If-Match: "my-document-etag"
 Content-Type: application/json-patch

 [{"op":"add","path":"/2","value":"bar"}]

 Generally speaking, someone should not be using PATCH to perform a partial
 modification if they don't already have some knowledge in advance what they
 are modifying. The only time the apparent ambiguity becomes an issue is 
 when
 a client is blindly sending a patch to an unknown endpoint... in which 
 case,
 you get whatever you end up with.

 - James



 - Rob



 --

 Markus Lanthaler

 @markuslanthaler







 From: James M Snell [mailto:jasn...@gmail.com]
 Sent: Friday, December 14, 2012 5:41 PM
 To: Markus Lanthaler
 Cc: IETF Discussion; IETF Apps Discuss
 Subject: Re: [apps-discuss] Last Call:
 draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread James M Snell
Robert,

I may have missed it, but can you provide a non-theoretical example of this
problem that you're suggesting exists in practice?

On Sat, Jan 5, 2013 at 4:19 PM, Robert Sayre say...@gmail.com wrote:

 Mark,

 The WG's reasoning, as stated in your message below, seems flawed.
 Messages since your last communication on this matter have shown:

 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.


Why? I'm still not seeing how it's unsuitable. Again, a non-theoretical
example would be helpful.



 2) The ambiguity is likely to occur in the real world, for a couple of
 different reasons.


Such as? What are the reasons?


 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.


There is no such thing as a "JSON Pointer document" and I have absolutely
no idea what "syntactically correct in isolation" means with regards to
this problem you're suggesting. If I see "/a/b/1", that is a syntactically
correct JSON Pointer... whether or not it points to anything specific
depends entirely on the specific JSON structure it is applied to. If I had
to guess, you're saying that it's not possible to tell if "/a/b/01" is a
valid JSON Pointer or not given nothing but the pointer? If so, who cares,
really? A JSON Pointer is not useful unless it's applied to an actual
JSON structure; it's only at that point that we really ought to care about
validity.

Still not seeing the problem.



 Additionally, you raised this point in your message below:
 
  the patch author already has to understand the semantics of the document
 they're patching

 That claim does not seem to be well-justified, and it could be
 meaningless to the person implementing patch software (for example:
 https://github.com/sayrer/json-sync).

 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.


What would be "sufficiently self-descriptive"? Again, a non-theoretical
example and a suggested alternative that we can compare would be helpful
for context.

It is possible that I missed a couple of posts on this over the holiday so
if you already provided an example, please do let me know and I'll go
hunting through the archives.

- James


 A response containing technical reasoning seems in order, since the
 points raised by myself and others on this issue are unrelated to the
 WG's previous thinking.




 - Rob

 On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
  Robert,
 
  This was discussed extensively in the Working Group.
 
  The root of the issue was that some people reflexively felt that this
 was necessary, but upon reflection, we decided it wasn't; although it seems
 natural to some, especially those coming from a static language
 background, it didn't provide any utility.
 
  You might argue that someone who (for example) adds to /foo/1 in the
 mistaken belief that it's an array, when in fact it's an object, will get
 surprising results. That's true, but if we were to solve this problem, that
 person would still need to understand the underlying semantics of foo to
 do anything useful to it -- and I'm not hearing anyone complain about that
 (I hope).
 
  Put another way -- do you really think that people PATCHing something as
 if it's an array (when in fact it's an object) is a significant, real-world
 problem, given that the patch author already has to understand the
 semantics of the document they're patching? I don't, and the WG didn't
 either.
 
  Regards,
 
 
  On 17/12/2012, at 3:36 PM, Robert Sayre say...@gmail.com wrote:
 
  The cost of fixing it seems low, either by changing the path syntax of
  JSON pointer or changing the names of operations applied to arrays.
  Array-like objects are common enough in JavaScript to make this a
  worry. The other suggestions either assume a particular policy for
  concurrent edits or require more machinery (test operation etc).
  Wouldn't it be simpler to make the patch format more precise?
 
  - Rob
 
  On Sun, Dec 16, 2012 at 4:33 PM, Matthew Morley m...@mpcm.com wrote:
  I am usually lurking and struggling to keep up with these posts. But, I
  concur with James, this really is a non-issue in practice.
 
  The JSON Pointer expresses a path down a JSON object to a specific
 context.
  The Patch expresses a change within or to that context.
  Everything about both standards is about that end context.
 
  If you want to confirm the type of the context before applying a
 patch, this
  should probably be part of a test operation. I'm not sure if this is
  possible at this point (?), but that is where the logic should exist.
 
 
 
  On Sun, Dec 16, 2012 at 12:22 AM, James M Snell jasn...@gmail.com
 wrote:
 
 
 
 
  On Sat, Dec 15, 2012 at 8:36 PM, Robert Sayre say...@gmail.com
 wrote:
 
  On Fri, Dec 14, 2012 at 9:17 AM, Markus Lanthaler
  

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Mark Nottingham
Robert,

I neither represent the WG (except insofar as I attempt to do so as document 
editor), nor do I judge consensus in the WG (the Chairs do, although at this 
late date, the IESG are making the decisions).

That said, if we were starting this from scratch, I *personally* could see 
adding some syntax to distinguish intent, as I don't see it causing any huge 
amount of harm (besides leading some to believe that JSON Pointer is a 
proto-schema language). 

However, at this point, doing so is really a judgement call; we have multiple 
implementations, and we shouldn't force them to change arbitrarily. As far as I 
can see, you haven't convinced anyone that this is a serious enough problem to 
do so (and I don't appear to be the only one to hold that opinion, by any 
means). Furthermore, it's not clear that the use cases you have in mind (since 
you have brought up JSON Sync) are in-scope for these specifications.

Soon, the IESG will make its determination, so you'd likely be much more 
productive by laying out your argument cogently to them, rather than focusing 
on me.

If we were to open up to further changes, I'd like to see us discuss things 
like allowing header modifications in the format, and specifying the target's 
intended media type (which was already discussed and rejected in the WG). 

However, I'm even more interested in getting this format published, in the 
knowledge that HTTP PATCH has been defined for some time, but is effectively 
useless (at least against JSON) without any defined, stable patch format. If we 
find serious deficiencies, nothing stops us, or you, or anyone else from 
defining a new patch format (as James is already doing).

Cheers,


On 06/01/2013, at 11:19 AM, Robert Sayre say...@gmail.com wrote:

 Mark,
 
 The WG's reasoning, as stated in your message below, seems flawed.
 Messages since your last communication on this matter have shown:
 
 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.
 2) The ambiguity is likely to occur in the real world, for a couple of
 different reasons.
 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.
 
 Additionally, you raised this point in your message below:
 
 the patch author already has to understand the semantics of the document 
 they're patching
 
 That claim does not seem to be well-justified, and it could be
 meaningless to the person implementing patch software (for example:
 https://github.com/sayrer/json-sync).
 
 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.
 
 A response containing technical reasoning seems in order, since the
 points raised by myself and others on this issue are unrelated to the
 WG's previous thinking.
 
 - Rob
 
 On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
 Robert,
 
 This was discussed extensively in the Working Group.
 
 The root of the issue was that some people reflexively felt that this was 
 necessary, but upon reflection, we decided it wasn't; although it seems 
 natural to some, especially those coming from a static language 
 background, it didn't provide any utility.
 
 You might argue that someone who (for example) adds to /foo/1 in the 
 mistaken belief that it's an array, when in fact it's an object, will get 
 surprising results. That's true, but if we were to solve this problem, that 
 person would still need to understand the underlying semantics of foo to 
 do anything useful to it -- and I'm not hearing anyone complain about that 
 (I hope).
 
 Put another way -- do you really think that people PATCHing something as if 
 it's an array (when in fact it's an object) is a significant, real-world 
 problem, given that the patch author already has to understand the semantics 
 of the document they're patching? I don't, and the WG didn't either.
 
 Regards,
 
 
 On 17/12/2012, at 3:36 PM, Robert Sayre say...@gmail.com wrote:
 
 The cost of fixing it seems low, either by changing the path syntax of
 JSON pointer or changing the names of operations applied to arrays.
 Array-like objects are common enough in JavaScript to make this a
 worry. The other suggestions either assume a particular policy for
 concurrent edits or require more machinery (test operation etc).
 Wouldn't it be simpler to make the patch format more precise?
 
 - Rob
 
 On Sun, Dec 16, 2012 at 4:33 PM, Matthew Morley m...@mpcm.com wrote:
 I am usually lurking and struggling to keep up with these posts. But, I
 concur with James, this really is a non-issue in practice.
 
 The JSON Pointer expresses a path down a JSON object to a specific context.
 The Patch expresses a change within or to that context.
 Everything about both standards is about that end context.
 
 If you want to confirm the type of the context before applying a patch, 
 this
 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Robert Sayre
On Sat, Jan 5, 2013 at 5:48 PM, Mark Nottingham m...@mnot.net wrote:

 However, at this point, doing so is really a judgement call; we have multiple 
 implementations, and we shouldn't
 force them to change arbitrarily.

The word "arbitrarily" seems inappropriate here. I raised at least
four technical issues and your message addresses none of them.

 As far as I can see, you haven't convinced anyone that this is a serious 
 enough problem to do so (and I don't
 appear to be the only one to hold that opinion, by any means).

Did you read this thread? Markus Lanthaler and Conal Tuohy raised
similar points.

 Furthermore, it's not clear that the use cases you have in mind (since you 
 have brought up JSON Sync)
 are in-scope for these specifications.

That assertion is both unsubstantiated and incorrect. json-sync has
identical primitive operations to JSON Patch (create/edit/remove vs
add/replace/remove). The JSON Patch document defines Copy and Move in
terms of the add/replace, so those are mostly syntactic sugar. The
only meaningful delta is the test operation, and I do plan to add
that to json-sync, since it's a good way to make application-specific
assertions.

 However, I'm even more interested in getting this format published,

Well, I guess someone has something they want to ship...

- Rob

 Cheers,


 On 06/01/2013, at 11:19 AM, Robert Sayre say...@gmail.com wrote:

 Mark,

 The WG's reasoning, as stated in your message below, seems flawed.
 Messages since your last communication on this matter have shown:

 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.
 2) The ambiguity is likely to occur in the real world, for a couple of
 different reasons.
 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.

 Additionally, you raised this point in your message below:

 the patch author already has to understand the semantics of the document 
 they're patching

 That claim does not seem to be well-justified, and it could be
 meaningless to the person implementing patch software (for example:
 https://github.com/sayrer/json-sync).

 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.

 A response containing technical reasoning seems in order, since the
 points raised by myself and others on this issue are unrelated to the
 WG's previous thinking.

 - Rob

 On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
 Robert,

 This was discussed extensively in the Working Group.

 The root of the issue was that some people reflexively felt that this was 
 necessary, but upon reflection, we decided it wasn't; although it seems 
 natural to some, especially those coming from a static language 
 background, it didn't provide any utility.

 You might argue that someone who (for example) adds to /foo/1 in the 
 mistaken belief that it's an array, when in fact it's an object, will get 
 surprising results. That's true, but if we were to solve this problem, that 
 person would still need to understand the underlying semantics of foo to 
 do anything useful to it -- and I'm not hearing anyone complain about that 
 (I hope).

 Put another way -- do you really think that people PATCHing something as if 
 it's an array (when in fact it's an object) is a significant, real-world 
 problem, given that the patch author already has to understand the 
 semantics of the document they're patching? I don't, and the WG didn't 
 either.

 Regards,


 On 17/12/2012, at 3:36 PM, Robert Sayre say...@gmail.com wrote:

 The cost of fixing it seems low, either by changing the path syntax of
 JSON pointer or changing the names of operations applied to arrays.
 Array-like objects are common enough in JavaScript to make this a
 worry. The other suggestions either assume a particular policy for
 concurrent edits or require more machinery (test operation etc).
 Wouldn't it be simpler to make the patch format more precise?

 - Rob

 On Sun, Dec 16, 2012 at 4:33 PM, Matthew Morley m...@mpcm.com wrote:
 I am usually lurking and struggling to keep up with these posts. But, I
 concur with James, this really is a non-issue in practice.

 The JSON Pointer expresses a path down a JSON object to a specific 
 context.
 The Patch expresses a change within or to that context.
 Everything about both standards is about that end context.

 If you want to confirm the type of the context before applying a patch, 
 this
 should probably be part of a test operation. I'm not sure if this is
 possible at this point (?), but that is where the logic should exist.



 On Sun, Dec 16, 2012 at 12:22 AM, James M Snell jasn...@gmail.com wrote:




 On Sat, Dec 15, 2012 at 8:36 PM, Robert Sayre say...@gmail.com wrote:

 On Fri, Dec 14, 2012 at 9:17 AM, Markus Lanthaler
 markus.lantha...@gmx.net wrote:

 Hmm.. I think 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Mark Nottingham
On 06/01/2013, at 1:29 PM, Robert Sayre say...@gmail.com wrote:

 On Sat, Jan 5, 2013 at 5:48 PM, Mark Nottingham m...@mnot.net wrote:
 
 However, at this point, doing so is really a judgement call; we have multiple 
 implementations, and we shouldn't
 force them to change arbitrarily.
 
 The word "arbitrarily" seems inappropriate here. I raised at least
 four technical issues and your message addresses none of them.

... and I explained why. 


 As far as I can see, you haven't convinced anyone that this is a serious 
 enough problem to do so (and I don't
 appear to be the only one to hold that opinion, by any means).
 
 Did you read this thread? Markus Lanthaler and Conal Tuohy raised
 similar points.

Yes. 


 Furthermore, it's not clear that the use cases you have in mind (since you 
 have brought up JSON Sync)
 are in-scope for these specifications.
 
 That assertion is both unsubstantiated and incorrect. json-sync has
 identical primitive operations to JSON Patch (create/edit/remove vs
 add/replace/remove). The JSON Patch document defines Copy and Move in
 terms of the add/replace, so those are mostly syntactic sugar. The
 only meaningful delta is the test operation, and I do plan to add
 that to json-sync, since it's a good way to make application-specific
 assertions.

Yes, you've brought that to our attention several times. If you wanted this 
spec to align with your software, it would have been much easier if you'd got 
involved before Last Call.


 However, I'm even more interested in getting this format published,
 
 Well, I guess someone has something they want to ship...

Right. I'll let that statement stand on its own; I think anyone who's been 
participating or watching the WG can assess how justified it is.

Always a pleasure, Rob.


 
 - Rob
 
 Cheers,
 
 
 On 06/01/2013, at 11:19 AM, Robert Sayre say...@gmail.com wrote:
 
 Mark,
 
 The WG's reasoning, as stated in your message below, seems flawed.
 Messages since your last communication on this matter have shown:
 
 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.
 2) The ambiguity is likely to occur in the real world, for a couple of
 different reasons.
 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.
 
 Additionally, you raised this point in your message below:
 
 the patch author already has to understand the semantics of the document 
 they're patching
 
 That claim does not seem to be well-justified, and it could be
 meaningless to the person implementing patch software (for example:
 https://github.com/sayrer/json-sync).
 
 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.
 
 A response containing technical reasoning seems in order, since the
 points raised by myself and others on this issue are unrelated to the
 WG's previous thinking.
 
 - Rob
 
 On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
 Robert,
 
 This was discussed extensively in the Working Group.
 
 The root of the issue was that some people reflexively felt that this was 
 necessary, but upon reflection, we decided it wasn't; although it seems 
 natural to some, especially those coming from a static language 
 background, it didn't provide any utility.
 
 You might argue that someone who (for example) adds to /foo/1 in the 
 mistaken belief that it's an array, when in fact it's an object, will get 
 surprising results. That's true, but if we were to solve this problem, 
 that person would still need to understand the underlying semantics of 
 foo to do anything useful to it -- and I'm not hearing anyone complain 
 about that (I hope).
 
 Put another way -- do you really think that people PATCHing something as 
 if it's an array (when in fact it's an object) is a significant, 
 real-world problem, given that the patch author already has to understand 
 the semantics of the document they're patching? I don't, and the WG didn't 
 either.
 
 Regards,
 
 
 On 17/12/2012, at 3:36 PM, Robert Sayre say...@gmail.com wrote:
 
 The cost of fixing it seems low, either by changing the path syntax of
 JSON pointer or changing the names of operations applied to arrays.
 Array-like objects are common enough in JavaScript to make this a
 worry. The other suggestions either assume a particular policy for
 concurrent edits or require more machinery (test operation etc).
 Wouldn't it be simpler to make the patch format more precise?
 
 - Rob
 
 On Sun, Dec 16, 2012 at 4:33 PM, Matthew Morley m...@mpcm.com wrote:
 I am usually lurking and struggling to keep up with these posts. But, I
 concur with James, this really is a non-issue in practice.
 
 The JSON Pointer expresses a path down a JSON object to a specific 
 context.
 The Patch expresses a change within or to that context.
 Everything about both standards is 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Robert Sayre
On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:

 Yes, you've brought that to our attention several times. If you wanted
 this spec to align with your software, it would have been much easier
 if you'd got involved before Last Call.

Well, there shouldn't be any big adjustments to my software at all,
and the document generally looks good. This is just a bug: two parties
can apply the same patch and get different results, without
encountering an error.
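
To make the bug concrete, a toy sketch (Python; a hypothetical subset
of the "add" operation, not the specified algorithm): the same pointer
token is read as an array index or an object key depending on what the
target happens to be, so two peers whose documents have diverged can
both apply the same patch cleanly and end up in different states.

    def add(doc, path, value):
        # Toy subset of JSON Patch "add", enough to show the ambiguity.
        *parents, last = path.lstrip("/").split("/")
        target = doc
        for token in parents:
            target = target[int(token)] if isinstance(target, list) else target[token]
        if isinstance(target, list):
            target.insert(int(last), value)   # array: insert before index
        else:
            target[last] = value              # object: set/replace member
        return doc

    # The same patch, two plausible documents, two silently different results:
    print(add({"foo": ["a", "b"]}, "/foo/1", "x"))  # {'foo': ['a', 'x', 'b']}
    print(add({"foo": {"1": "b"}}, "/foo/1", "x"))  # {'foo': {'1': 'x'}}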

 However, I'm even more interested in getting this format published,

 Well, I guess someone has something they want to ship...

 Right. I'll let that statement stand on its own; I think anyone who's been
 participating or watching the WG can assess how justified it is.

Ah. I meant that the WG seems to be favoring "running code" a little
too heavily in the presence of a bug. It's an old argument, and it's
boring: "We can't change it now, there are already twelve users!"


 Always a pleasure, Rob.

That tone leaves something to be desired, but no matter. This is a bug
and the WG should fix it. I don't think more process emails are
necessary.

- Rob

On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:
 On 06/01/2013, at 1:29 PM, Robert Sayre say...@gmail.com wrote:

 On Sat, Jan 5, 2013 at 5:48 PM, Mark Nottingham m...@mnot.net wrote:

 However, at this point, doing so is really a judgement call; we have multiple 
 implementations, and we shouldn't
 force them to change arbitrarily.

 The word "arbitrarily" seems inappropriate here. I raised at least
 four technical issues and your message addresses none of them.

 ... and I explained why.


 As far as I can see, you haven't convinced anyone that this is a serious 
 enough problem to do so (and I don't
 appear to be the only one to hold that opinion, by any means).

 Did you read this thread? Markus Lanthaler and Conal Tuohy raised
 similar points.

 Yes.


 Furthermore, it's not clear that the use cases you have in mind (since you 
 have brought up JSON Sync)
 are in-scope for these specifications.

 That assertion is both unsubstantiated and incorrect. json-sync has
 identical primitive operations to JSON Patch (create/edit/remove vs
 add/replace/remove). The JSON Patch document defines Copy and Move in
 terms of the add/replace, so those are mostly syntactic sugar. The
 only meaningful delta is the test operation, and I do plan to add
 that to json-sync, since it's a good way to make application-specific
 assertions.

 Yes, you've brought that to our attention several times. If you wanted this 
 spec to align with your software, it would have been much easier if you'd got 
 involved before Last Call.


 However, I'm even more interested in getting this format published,

 Well, I guess someone has something they want to ship...

 Right. I'll let that statement stand on its own; I think anyone who's been 
 participating or watching the WG can assess how justified it is.

 Always a pleasure, Rob.



 - Rob

 Cheers,


 On 06/01/2013, at 11:19 AM, Robert Sayre say...@gmail.com wrote:

 Mark,

 The WG's reasoning, as stated in your message below, seems flawed.
 Messages since your last communication on this matter have shown:

 1) The ambiguity around arrays makes the patch format unsuitable for
 common concurrent editing algorithms.
 2) The ambiguity is likely to occur in the real world, for a couple of
 different reasons.
 3) It's not possible to tell whether a JSON Pointer document is
 syntactically correct in isolation.

 Additionally, you raised this point in your message below:

 "the patch author already has to understand the semantics of the document
 they're patching"

 That claim does not seem to be well-justified, and it could be
 meaningless to the person implementing patch software (for example:
 https://github.com/sayrer/json-sync).

 This issue is a problem in practice, and it's a problem in theory as
 well. JSON-Patch messages aren't sufficiently self-descriptive, so
 they aren't appropriate for use in a RESTful system.

 A response containing technical reasoning seems in order, since the
 points raised by myself and others on this issue are unrelated to the
 WG's previous thinking.

 - Rob

 On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net wrote:
 Robert,

 This was discussed extensively in the Working Group.

 The root of the issue was that some people reflexively felt that this was 
 necessary, but upon reflection, we decided it wasn't; although it seems 
 natural to some, especially those coming from a static language 
 background, it didn't provide any utility.

 You might argue that someone who (for example) adds to /foo/1 in the 
 mistaken belief that it's an array, when in fact it's an object, will get 
 surprising results. That's true, but if we were to solve this problem, 
 that person would still need to understand the underlying semantics of 
 foo to do anything useful to it -- and I'm not hearing anyone complain 
 about that (I hope).

 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread James M Snell
On Jan 5, 2013 8:20 PM, Robert Sayre say...@gmail.com wrote:

 On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:
 
  Yes, you've brought that to our attention several times. If you wanted
  this spec to align with your software, it would have been much easier
  if you'd got involved before Last Call.

 Well, there shouldn't be any big adjustments to my software at all,
 and the document generally looks good. This is just a bug: two parties
 can apply the same patch and get different results, without
 encountering an error.


Not seeing the bug... applying the same patch to different resources that
have different states ought to have different results. #worksasintended
#wontfix #moveonthereisnothingmoretoseehere

- james

  However, I'm even more interested in getting this format published,
 
  Well, I guess someone has something they want to ship...
 
  Right. I'll let that statement stand on its own; I think anyone who's
been
  participating or watching the WG can assess how justified it is.

 Ah. I meant that the WG seems to be favoring running code a little
 too heavily in the presence of a bug. It's an old argument, and it's
  boring: "We can't change it now, there are already twelve users!"

 
  Always a pleasure, Rob.

 That tone leaves something to be desired, but no matter. This is a bug
 and the WG should fix it. I don't think more process emails are
 necessary.

 - Rob

 On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:
  On 06/01/2013, at 1:29 PM, Robert Sayre say...@gmail.com wrote:
 
  On Sat, Jan 5, 2013 at 5:48 PM, Mark Nottingham m...@mnot.net wrote:
 
  However, at this point, doing so is really a judgement call; we have
multiple implementations, and we shouldn't
  force them to change arbitrarily.
 
  The word "arbitrarily" seems inappropriate here. I raised at least
  four technical issues and your message addresses none of them.
 
  ... and I explained why.
 
 
  As far as I can see, you haven't convinced anyone that this is a
serious enough problem to do so (and I don't
  appear to be the only one to hold that opinion, by any means).
 
  Did you read this thread? Markus Lanthaler and Conal Tuohy raised
  similar points.
 
  Yes.
 
 
  Furthermore, it's not clear that the use cases you have in mind
(since you have brought up JSON Sync)
  are in-scope for these specifications.
 
  That assertion is both unsubstantiated and incorrect. json-sync has
  identical primitive operations to JSON Patch (create/edit/remove vs
  add/replace/remove). The JSON Patch document defines Copy and Move in
  terms of the add/replace, so those are mostly syntactic sugar. The
  only meaningful delta is the test operation, and I do plan to add
  that to json-sync, since it's a good way to make application-specific
  assertions.
 
  Yes, you've brought that to our attention several times. If you wanted
this spec to align with your software, it would have been much easier if
you'd got involved before Last Call.
 
 
  However, I'm even more interested in getting this format published,
 
  Well, I guess someone has something they want to ship...
 
  Right. I'll let that statement stand on its own; I think anyone who's
been participating or watching the WG can assess how justified it is.
 
  Always a pleasure, Rob.
 
 
 
  - Rob
 
  Cheers,
 
 
  On 06/01/2013, at 11:19 AM, Robert Sayre say...@gmail.com wrote:
 
  Mark,
 
  The WG's reasoning, as stated in your message below, seems flawed.
  Messages since your last communication on this matter have shown:
 
  1) The ambiguity around arrays makes the patch format unsuitable for
  common concurrent editing algorithms.
  2) The ambiguity is likely to occur in the real world, for a couple
of
  different reasons.
  3) It's not possible to tell whether a JSON Pointer document is
  syntactically correct in isolation.
 
  Additionally, you raised this point in your message below:
 
  "the patch author already has to understand the semantics of the
 document they're patching"
 
  That claim does not seem to be well-justified, and it could be
  meaningless to the person implementing patch software (for example:
  https://github.com/sayrer/json-sync).
 
  This issue is a problem in practice, and it's a problem in theory as
  well. JSON-Patch messages aren't sufficiently self-descriptive, so
  they aren't appropriate for use in a RESTful system.
 
  A response containing technical reasoning seems in order, since the
  points raised by myself and others on this issue are unrelated to the
  WG's previous thinking.
 
  - Rob
 
  On Sun, Dec 16, 2012 at 9:41 PM, Mark Nottingham m...@mnot.net
wrote:
  Robert,
 
  This was discussed extensively in the Working Group.
 
  The root of the issue was that some people reflexively felt that
this was necessary, but upon reflection, we decided it wasn't; although it
seems natural to some, especially those coming from a static language
background, it didn't provide any utility.
 
  You might argue that someone who 

Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread Robert Sayre
On Sat, Jan 5, 2013 at 8:55 PM, James M Snell jasn...@gmail.com wrote:

 On Jan 5, 2013 8:20 PM, Robert Sayre say...@gmail.com wrote:

 On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:
 
  Yes, you've brought that to our attention several times. If you wanted
  this spec to align with your software, it would have been much easier
  if you'd got involved before Last Call.

 Well, there shouldn't be any big adjustments to my software at all,
 and the document generally looks good. This is just a bug: two parties
 can apply the same patch and get different results, without
 encountering an error.


 Not seeing the bug... applying the same patch to different resources that
 have different states ought to have different results.

This argument is fallacious. Consider this JSON patch:

{ "op": "remove", "path": "/1" }

This patch can be generated by the sender removing a key from a hashtable,
and then applied by the recipient to an array (which may result in array
shifts, etc.). A good-quality patch format would not permit such an obvious
ambiguity, because applying that patch can fail all parties: the resulting
document does not reflect the intent of any author.
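A sketch of the divergence, with made-up states on each side:

# The sender computed { "op": "remove", "path": "/1" } against an object;
# a recipient holding an array applies the same patch without any error.
sender_doc = {"0": "a", "1": "b", "2": "c"}
del sender_doc["1"]   # object: {"0": "a", "2": "c"}

recipient_doc = ["a", "b", "c"]
del recipient_doc[1]  # array: ["a", "c"] -- "c" shifts down to index 1

print(sender_doc, recipient_doc)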

I have obviously said my piece. And, FWIW, I don't think the IESG
should contradict the WG.

- Rob


Re: [apps-discuss] Last Call: draft-ietf-appsawg-json-pointer-07.txt (JSON Pointer) to Proposed Standard

2013-01-05 Thread James M Snell
[{"op":"type","path":"","value":"array"},{"op":"remove","path":"/1"}]

Problem solved. Still no bug, and still nothing I can see that needs
fixing.
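In applier terms, the guard runs before the remove and aborts the whole
patch on a mismatch. A sketch (the "type" op here is a hypothetical
extension; the draft's "test" op compares values, not types):

def apply_patch(doc, ops):
    # Hypothetical "type" guard: assert the target's container type before
    # any mutation, so a mismatched document fails instead of diverging.
    for op in ops:
        if op["op"] == "type":
            if op["value"] == "array" and not isinstance(doc, list):
                raise TypeError("type test failed: target is not an array")
        elif op["op"] == "remove":
            # Root-level paths only, for brevity.
            del doc[int(op["path"].lstrip("/"))]
    return doc

ops = [{"op": "type", "path": "", "value": "array"},
       {"op": "remove", "path": "/1"}]
print(apply_patch(["a", "b", "c"], ops))  # ['a', 'c']
# apply_patch({"1": "b"}, ops) raises before the remove ever runs.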

I've said my piece on it too. AFAIC, this spec is done and ready to go.

- James
 On Jan 5, 2013 9:25 PM, Robert Sayre say...@gmail.com wrote:

 On Sat, Jan 5, 2013 at 8:55 PM, James M Snell jasn...@gmail.com wrote:
 
  On Jan 5, 2013 8:20 PM, Robert Sayre say...@gmail.com wrote:
 
  On Sat, Jan 5, 2013 at 6:59 PM, Mark Nottingham m...@mnot.net wrote:
  
   Yes, you've brought that to our attention several times. If you wanted
   this spec to align with your software, it would have been much easier
   if you'd got involved before Last Call.
 
  Well, there shouldn't be any big adjustments to my software at all,
  and the document generally looks good. This is just a bug: two parties
  can apply the same patch and get different results, without
  encountering an error.
 
 
  Not seeing the bug... applying the same patch to different resources that
  have different states ought to have different results.

 This argument is fallacious. Consider this JSON patch:

 { "op": "remove", "path": "/1" }

 This patch can be generated by the sender removing a key from a hashtable,
 and then applied by the recipient to an array (which may result in array
 shifts, etc.). A good-quality patch format would not permit such an obvious
 ambiguity, because applying that patch can fail all parties: the resulting
 document does not reflect the intent of any author.

 I have obviously said my piece. And, FWIW, I don't think the IESG
 should contradict the WG.

 - Rob