Re: Of governments and representation (was: Montevideo Statement)

2013-10-11 Thread John Curran
On Oct 11, 2013, at 9:32 AM, Jorge Amodio  wrote:

> Just to start, there is no clear consensus of what "Internet Governance" 
> means and entails.

You are correct.  The term "Internet Governance" is a term of art, and a poor
one at that.  It is the term that governments like to use, and in fact, in 2005
several of them got together at the United Nations-initiated World Summit on
the Information Society (WSIS) and came up with the following definition:

"Internet governance is the development and application by Governments, the 
private sector and civil society, in their respective roles, of shared 
principles, norms, rules, decision-making procedures, and programmes that shape 
the evolution and use of the Internet."  
<http://www.wgig.org/docs/WGIGREPORT.pdf>

I happen to hate the term "Internet Governance", but it has become common
shorthand for the discussions of governments expressing their needs and
desires with respect to the Internet, its related institutions, and civil
society.

It might not be necessary for the IETF to be involved (if it so chooses), but
I'm not certain that leaving it to ISOC would make sense if/when the discussion
moves into areas such as structures for managing delegated registries of
IETF-defined protocols (i.e., protocols, names, numbers).

> In your particular case as President and CEO of ARIN, clearly you "lead" that 
> organization but it does not make you representative of the Internet or its 
> users. I can't find anywhere in the Bylaws and Articles of Incorporation of 
> ARIN the word "Governance."
> 
> Nobody will deny any of the alleged "leaders" to participate in any meeting, 
> conference, event, in their individual capacities, but NONE has any 
> representation of the whole Internet.

Full agreement there...  No one has any representation of the entire Internet,
and we should oppose the establishment of any structures that might aspire to
such.

> Do we really want to create a "government" for the Internet ? How do you 
> propose to select people to be representatives for all the sectors ? 

I do not, and I expect that others on this list feel the same.  However, it is
likely that more folks need to participate to make sure that such things don't
happen.

> And in particular how do you propose to select an IETF representative and 
> who/how it's going to give her/him its mandate to represent the organization 
> on other forums ?

That is the essential question of this discussion, and hence the reason for my 
email.

I'd recommend that the IETF select leaders whose integrity you trust, provide
them with documents of whatever principles the IETF considers important and how
it views its relations with other Internet institutions (these could be
developed via Internet Drafts), and ask them to report back as frequently as
possible.   Alternatively, the IETF could opt not to participate in such
discussions at all, and deal with any developments after the fact (an option
only if there is sufficient faith that the current models, structures, and
relationships of the IETF are inviolate.)

FYI,
/John

Re: "The core Internet institutions abandon the US Government"

2013-10-11 Thread John Levine
>Just few quick questions,
>
>In what part of Fadi Chehadé mandate at ICANN this falls ? And who
>sanctified him as representative of the Internet Community ?
>
>He is just an employee of ICANN and these actions go way beyond ICANN's
>mission and responsibilities.

ICANN has a long running fantasy that they are a global
multi-stakeholder organization floating above mere politics, and not a
US government contractor incorporated as a California non-profit.
This will never change, and everyone familiar with the situation knows
it, but for internal political reasons ICANN likes to pretend
otherwise.

I suppose in the current political situation about the NSA there's no
harm in the other groups going along with it for a while.

R's,
John


Of governments and representation (was: Montevideo Statement)

2013-10-11 Thread John Curran
Folks - 

As a result of the Internet's growing social and economic importance, the
underlying Internet structures are receiving an increasing level of attention
from both governments and civil society.  The recent revelations regarding US
government surveillance of the Internet are now greatly accelerating government
attention on all of the Internet institutions, the IETF included.  All of this
attention is likely to bring about significant changes in the Internet
ecosystem, potentially including how the IETF interacts with governments, civil
society, and other Internet organizations globally.

In my personal view, it is very important for the IETF to select leadership who
can participate in any discussions that occur, and it would further be prudent
for the IETF leaders to be granted a sufficient level of support by the
community to take positions in those discussions and make related statements,
to the extent the positions and the statements are aligned with established
IETF positions and/or philosophy.

The most interesting part of the myriad of Internet Governance discussions is
that multiple organizations are all pushing ahead independently of one another,
which results in a very dynamic situation where we often don't even know that
there will be a conference or meeting until after it's announced, do not know
the auspices under which it will be held, nor what the scope of the discussions
held will ultimately be.  However, the failure of any of the Internet
organizations to participate will not actually prevent consideration of a
variety of unique and colorful proposals for improving the Internet and/or the
IETF, nor will it preclude adoption even in the absence of IETF input...

The IETF is a very important Internet institution, and it deserves to be
represented in any discussions which might propose changes to the fundamental
mechanisms of Internet cooperation.  It would be a wonderful world indeed if
all of these discussions started with submission of an Internet Draft and
discussion on open mailing lists, but that hasn't been the modus operandi of
governments and is probably too much to realistically expect.

/John

Re: consensus, was leader statements

2013-10-10 Thread John Levine
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

>Because we've got more than 120 working groups, thousands of
>participants, and the internet is now part of the world's
>communications infrastructure.  I don't like hierarchy but
>I don't know how to scale up the organization without it.

There are largish organizations that work by consensus, notably Quaker
meetings and their regional and national organizations.  But we are
not like the Quakers.  For one thing, they have long standing
traditions of how consensus works, including a tradition of "standing
aside" and not blocking consensus if you disagee but see that most
people agree in good faith.  For another, they are very, very patient.
The meeting in Ithaca NY, near where I live, took ten years to decide
about getting their own meeting house rather than rented space.  

I don't see us as that disciplined or that patient (including myself,
I'm not a Quaker, but married to one.)

So it is a reasonable question how an organization like the IETF can
govern itself.  My inclination is to be careful in the choice of
leadership, and then trust the leaders to act reasonably.

R's,
John
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.21 (FreeBSD)

iEYEARECAAYFAlJXA2oACgkQkEiFRdeC/kXbFACfYcKTHPfjK3yFvyGvydHZB0jx
z6AAn23U7x2tygklXyGav0DuYWjEdAvV
=s3DJ
-END PGP SIGNATURE-


Re: Montevideo statement

2013-10-09 Thread John C Klensin


--On Wednesday, October 09, 2013 02:44 -0400 Andrew Sullivan
 wrote:

>...
> That does not say that the IAB has issued a statement.  On the
> contrary, the IAB did not issue a statement.  I think the
> difference between some individuals issuing a statement in
> their capacity as chairs and CEOs and so on, and the body for
> which they are chair or CEO or so on issuing a similar
> statement, is an important one.  We ought to attend to it.
> 
> Please note that this message is not in any way a comment on
> such leadership meetings.  In addition, for the purposes of
> this discussion I refuse either to affirm or deny concurrence
> in the IAB chair's statement.  I merely request that we, all
> of us, attend to the difference between "the IAB Chair says"
> and "the IAB says".

Andrew,

While I agree that the difference is important for us to note,
this is a press release.  It would be naive at best to assume
that its intended audience would look at it and say "Ah. A bunch
of people with leadership roles in important Internet
organizations happened to be in the same place and decided to
make a statement in their individual capacities".  Not only does
it not read that way, but there are conventions for delivering
the "individual capacity" message, including prominent use of
phrases like "for identification only". 

Independent of how I feel about the content of this particular
statement, if the community either doesn't like the message or
doesn't like this style of doing things, I think that needs to
be discussed and made clear.  That includes not only preferences
about community consultation but also whether, if in the
judgment of the relevant people there is insufficient time to
consult the community, no statement should be made at all.

Especially from the perspective of having been in the
sometimes-uncomfortable position of IAB Chair, I don't think IAB
members can disclaim responsibility in a situation like this.
Unlike the Nomcom-appointed IETF Chair, the IAB Chair serves at
the pleasure and convenience of the IAB.  If you and your
colleagues are not prepared to share responsibility for
statements (or other actions) the IAB Chair makes that involve
that affiliation, then you are responsible for taking whatever
actions are required to be sure that only those actions are
taken for which you are willing to share responsibility.   Just
as you have done, I want to stress that I'm not recommending any
action here, only that IAB members don't get to disclaim
responsibility for statements made by people whose relationship
with the IAB is the reason why they are, e.g., part of a
particular letter or statement.

  john



Re: Last Call: (On Consensus and Humming in the IETF) to Informational RFC

2013-10-07 Thread John Leslie
Ted Lemon  wrote:
> On Oct 7, 2013, at 3:34 PM, Brian E Carpenter  
> wrote:
> 
>> So I'd like to dispute Ted's point that by publishing a version of
>> resnick-on-consensus as an RFC, we will engrave its contents in stone.
>> If that's the case, we have an even deeper problem than misunderstandings
>> of rough consensus.
> 
> Right, I think what Ted is describing is a BCP, not an Informational RFC.

   Oh my! I just saw the IESG agenda, and this _is_ proposed for BCP.

   I retract anything I said which might criticize Ted and/or Dave Crocker
for being too picky!

--
John Leslie 


Re: Last Call: (On Consensus and Humming in the IETF) to Informational RFC

2013-10-07 Thread John Leslie
Brian E Carpenter  wrote:
> 
> ... If the phrase "Request For Comments" no longer means what it says,
> we need another RFC, with a provisional title of
> "Request For Comments Means What It Says".

   ;^)

> We still see comments on RFC 791 reasonably often, and I see comments
> on RFC 2460 practically every day. That's as it should be.

   Absolutely!

> So I'd like to dispute Ted's point that by publishing a version of
> resnick-on-consensus as an RFC, we will engrave its contents in stone.

   Well, of course, all RFCs have always been "archival" -- to that
extent, they _are_ "engraved in stone"...

> If that's the case, we have an even deeper problem than misunderstandings
> of rough consensus.

   Alas, IMHO, we do. :^(

   Ted and Dave Crocker _have_ made very good comments on this I-D which
is proposed to be an Informational RFC. For the most part, these comments
could just as well come _after_ it is published as such.

   The _problem_ is that an RFC clearly labeled as an individual
contribution _should_ have value _as_ an individual contribution. It
should not have to pass muster of our famous peanut gallery before
being published.

   The story is completely different for working-group documents
published on the Standards Track. For those, our quality-control process
is quite necessary.

   Perhaps the problem comes from the boilerplate for Informational:
] 
] Status of This Memo
] 
]  This document is not an Internet Standards Track specification; it is
]  published for informational purposes.
] 
]  This document is a product of the Internet Engineering Task Force
]  (IETF).  It represents the consensus of the IETF community.  It has
]  received public review and has been approved for publication by the
]  Internet Engineering Steering Group (IESG).  Not all documents
]  approved by the IESG are a candidate for any level of Internet
]  Standard; see Section 2 of RFC 5741.
] 
]  Information about the current status of this document, any errata,
]  and how to provide feedback on it may be obtained at
]  http://www.rfc-editor.org/info/rfc7017.

   The only differences between this boilerplate and the boilerplate
for standards track are that the Standards Track has a different first
paragraph
]
] This is an Internet Standards Track document.

and the Standards Track boilerplate omits the sentence
] 
]  Not all documents approved by the IESG are a candidate for any level
]  of Internet Standard; see Section 2 of RFC 5741.

replacing it with
] 
]  Further information on Internet Standards is available in Section 2
]  of RFC 5741.

   (In fact, the IESG review is quite different for these categories.)

   We see that all these categories state, "It represents the consensus
of the IETF community." In fact, if there is an easy way to tell from
the published RFC whether an Informational RFC represents an individual
contribution or a Working Group output, it escapes me at the moment. :^(

   Thus perhaps Ted and Dave are right to hold this draft to a high
"consensus of the IETF community" standard.

   I just wish that were not so...

--
John Leslie 


Re: Last Call: Change the status of ADSP (RFC 5617) to Historic

2013-10-03 Thread John C Klensin


--On Thursday, October 03, 2013 16:51 +0200 Alessandro Vesely
 wrote:

>> ADSP was basically an experiment that failed.  It has no
>> significant deployment, and the problem it was supposed to
>> solve is now being addressed in other ways.
> 
> I oppose to the change as proposed, and support the
> explanation called for by John Klensin instead.  Two arguments:
> 
> 1)  The harm Barry exemplifies in the request
> --incompatibility with mailing list posting-- is going to
> be a feature of at least one of the other ways addressing
> that problem.  Indeed, "those who don't know history are
> destined to repeat it", and the explanation is needed to
> make history known.
> 
> 2)  A possible fix for ADSP is explained by John Levine
> himself:
> http://www.mail-archive.com/ietf-dkim@mipassoc.org/msg16969.ht
> ml I'm not proposing to mention it along with the
> explanation, but fixing is not the same as moving to
> historic.  It seems that it is just a part of RFC 5617,
> DNS records, that we want to move.

Ale,

Just to be clear about what I proposed because I'm not sure that
you actually agree:  If the situation is as described in the
write-up (and/or as described by John Levine, Murray, and some
other documents), then I'm in favor of deprecating ADSP.  The
_only_ issue I'm raising in this case is that I believe that
deprecating a feature or protocol element by moving things to
Historic by IESG action and a note in the tracker is appropriate
only for things that have been completely ignored after an
extended period or that have long ago passed out of public
consciousness.   When something has been implemented and
deployed sufficiently that the reason for deprecating it
includes assertions that it has not worked out in practice, I
believe that should be documented in an RFC both to make the
historical record clear and to help persuade anyone who is still
trying to use it to cease doing so.

There may well be arguments for not deprecating the feature, for
improving it in various ways, or for contexts in which its use
would be appropriate, but someone else will have to make them or
propose other remedies.  I have not done so nor am I likely to
do so.

  best,
john




Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard

2013-10-02 Thread John C Klensin
I assume we will need to agree to disagree about this, but...

--On Wednesday, October 02, 2013 10:44 -0700 Dave Crocker
 wrote:

> If a spec is Historic, it is redundant to say not recommended.
> As in, duh...

"Duh" notwithstanding, we move documents to Historic for many
reasons.  RFC 2026 lists "historic" as one of the reasons a
document may be "not recommended" (Section 3.3(e)) but says only
"superceded... or is for any other reason considered to be
obsolete" about Historic (Section 4.2.4).  That is entirely
consistent with Maturity Levels and Requirement Levels being
basically orthogonal to each other, even if "Not Recommended"
and "Internet Standard" are presumably mutually exclusive.

> Even better is that an applicability statement is merely
> another place for the potential implementer to fail to look
> and understand.

Interesting.   If a potential implementer or other potential
user of this capability fails to look for the status of the
document or protocol, then the reclassification to Historic
won't be found and this effort is a waste of the community's
time.  If, by contrast, that potential user checks far enough to
determine that the document has been reclassified to Historic,
why is it not desirable to point that user to a superseding
document that explains the problem and assigns as requirement
status of "not recommended"?  

The situation would be different if a huge amount of additional
work were involved but it seems to me that almost all of the
required explanation is already in the write-up and that the
amount of effort required to approve an action consisting of a
document and a status change is the same as that required to
approve the status change only.  If creating an I-D from the
write-up is considered too burdensome and it would help, I'd be
happy to do that rather than continuing to complain.

> ADSP is only worthy of a small effort, to correct its status,
> to reflect its current role in Internet Mail.  Namely, its
> universal non-use within email filtering.

If the specification had been universally ignored, I'd think
that a simple status change without further documentation was
completely reasonable.  However, the write-up discusses "harm
caused by incorrect configuration and by inappropriate use",
"real cases", and effects from posts from users.  That strongly
suggests that this is a [mis]feature that has been sufficiently
deployed to cause problems, not something that is "universally
non-used".  And that, IMO, calls for an explanation --at least to
the extent of the explanation in the write-up-- as to why ADSP
was a bad idea, should be retired where it is used, and should
not be further deployed.  

best,
   john



Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard

2013-10-02 Thread John C Klensin
--On Wednesday, October 02, 2013 07:41 -0700 The IESG
 wrote:

> 
> The IESG has received a request from an individual participant
> to make the following status changes:
> 
> - RFC5617 from Proposed Standard to Historic
> 
> The supporting document for this request can be found here:
> 
> http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-
> historic/

Hi.  Just to be sure that everyone has the same understanding of
what is being proposed here, the above says "to Historic" but
the writeup at
http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/
says "to Internet Standard".   Can one or the other be corrected?

After reading the description at the link cited above and
assuming that "Historic" is actually intended, I wonder,
procedurally, whether a move to Historic without documentation other
than in the tracker is an appropriate substitute for the
publication of an Applicability Statement that says "not
recommended" and that explains, at least in the level of detail
of the tracker entry, why using ADSP is a bad idea.  

If there were no implementations and no evidence that anyone
cared about this, my inclination would be to just dispose of RFC
5617 as efficiently and with as little effort as possible.  But,
since the tracker entry says that there are implementations and
that misconfiguration has caused harm (strongly implying that
there has even been deployment), it seems to me that a clear and
affirmative "not recommended" applicability statement is in
order.

thanks,
   john



Re: Last Call: Change the status of ADSP (RFC 5617) to Internet Standard

2013-10-02 Thread John Levine
>The IESG has received a request from an individual participant to make
>the following status changes:
>
>- RFC5617 from Proposed Standard to Historic
>
>The supporting document for this request can be found here:
>
>http://datatracker.ietf.org/doc/status-change-adsp-rfc5617-to-historic/

I'm one of the authors of this RFC and support the change.

ADSP was basically an experiment that failed.  It has no significant
deployment, and the problem it was supposed to solve is now being
addressed in other ways.

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly


RE: [Tools-discuss] independant submissions that update standards track, and datatracker

2013-10-02 Thread John E Drake
Irrepressible

Yours Irrespectively,

John

From: ietf-boun...@ietf.org [mailto:ietf-boun...@ietf.org] On Behalf Of 
Abdussalam Baryun
Sent: Wednesday, October 02, 2013 5:19 AM
To: Michael Richardson
Cc: ietf; tools-disc...@ietf.org
Subject: Re: [Tools-discuss] independant submissions that update standards 
track, and datatracker

Hi Michael,

I agree that it should appear in related WG's field or area. I see in IETF we 
have WGs documents list but not areas' documents list, so the individual 
document may not be found or discovered. I think any document of IETF should be 
listed in its field area or related charter, but it seems like the culture of 
IETF focusing on groups work not on the IETF documents. For example, when I 
first joined MANET WG I thought that RFC3753 is related because it is IETF, but 
in one discussion one participant did not accept to use that document even 
though it was related. Furthermore, some WGs don't comment on related documents 
to their WG, which I think should change in future IETF culture (e.g. 
there was one individual doc that the AD requested the WG to comment on, 
but there was no response).

Therefore, IMHO, the IETF is divided by groups with different points of 
view/documents and they force their WG Adopted-Work to list documents (not all 
related to Group-Charters), but it seems that management does not see that 
there is a division in knowledge or in outputs of the IETF, which a newcomer 
may see clearly. I recommend to focus on/list documents related to the Charter, 
not related to WG adoptions, because all IETF documents are examined by the IESG.

AB

On Tue, Oct 1, 2013 at 7:29 PM, Michael Richardson
<mcr+i...@sandelman.ca> wrote:

This morning I had reason to re-read parts of RFC3777, and anything
that updated it.  I find the datatracker WG interface to really be
useful, and so I visited http://datatracker.ietf.org/wg/nomcom/
first.  I guess I could have instead gone to:
   http://www.rfc-editor.org/info/rfc3777

but frankly, I'm often bad with numbers, especially when they repeat...
(3777? 3737? 3733?)

While http://datatracker.ietf.org/wg/nomcom/ lists RFC3777, and
in that line, it lists the things that update it, it doesn't actually list
the other documents.  Thinking this was an error, I asked, and Cindy kindly
explained:

>http://datatracker.ietf.org/wg/nomcom/ lists the documents that were
>published by the NOMCOM Working Group.  The NOMCOM Working Group was
>open from 2002-2004, and only produced one RFC, which is RFC 3777.
>
>The RFCs that update 3777 were all produced by individuals (that is,
>outside of the NOMCOM Working Group), and so aren't listed individually
>on the NOMCOM Working Group documents page.

I wonder about this as a policy.

Seeing the titles of those documents would have helped me find what I wanted
quickly (RFC5680 it was)...

I think that individual submissions that are not the result of
consensus do not belong on a WG page.  But if the document was the result of
consensus, and did not occur in a WG because the WG had closed, I think that
perhaps it should appear there anyway.

--
Michael Richardson <mcr+i...@sandelman.ca>, 
Sandelman Software Works



___
Tools-discuss mailing list
tools-disc...@ietf.org
https://www.ietf.org/mailman/listinfo/tools-discuss



RE: LC comments on draft-cotton-rfc4020bis-01.txt

2013-09-29 Thread John C Klensin


--On Sunday, 29 September, 2013 09:19 +0100 Adrian Farrel
 wrote:

> Hi John,
> 
> Thanks for the additions.
> 
>> Everything you say seems fine to me for the cases you are
>> focusing on, but I hope that any changes to 4020bis keep two
>> things in mind lest we find ourselves tangled in rules and
>> prohibiting some reasonable behavior (a subset of which is
>> used now).
> 
> 4020bis is certainly not intended to prohibit actions we do
> now. AFAICS it actually opens up more scope for early
> allocation that was not there before.
>...
> I am pretty sure that nothing in this document impacts on 5226
> at all except to define how early allocation works for certain
> allocation policies defined by 5226. You are correct that 5226
> is not a closed list. However we may observe that it is used
> in the significant majority of cases. I think that means that
> if some non-5226 policy is agreed by a WG and "the relevant
> approving bodies" together with IANA, then that policy needs
> to define its own early allocation procedure if one is wanted.
>...
> This document, therefore requires the WG chairs and the AD to
> be involved in the decision to do early allocation.

That was how I read the current version, so we are on the same
page.  Eric's note was considerably wider-ranging so, as you and
Michelle work on revisions, I just wanted to caution against
accidentally going too far in a direction that could be
construed as changing other things.   

From my point of view, it would be a good thing to further
emphasize that a code point, once allocated through any process,
is, in a lot of cases, unlikely to be usable in the future for
anything else.   That is largely independent of whether the
allocation is identified as "early", "preliminary",
"provisional", "easy", or "final".  The current version is, I
think, good enough about that, but better would be, well, better.

thanks,
john





RE: LC comments on draft-cotton-rfc4020bis-01.txt

2013-09-28 Thread John C Klensin


--On Saturday, 28 September, 2013 23:44 +0100 Adrian Farrel
 wrote:

> Hi,
> 
> I am working with Michelle on responses and updates after IETF
> last call. Most of the issues give rise to relatively easy
> changes, and Michelle can handle them in a new revision with a
> note saying what has changed and why.
> 
> But Eric's email gives rise to a wider and more difficult
> point on which I want to comment.
>...
> This is, indeed, the fundamental issue. And it is sometimes
> called "squatting".
>...
> The way we have handled this in the past is by partitioning
> our code spaces and assigning each part an allocation policy.
> There is a good spectrum of allocation policies available in
> RFC 5226, but it is up to working group consensus (backed by
> IETF consensus) to assign the allocation policy when the
> registry is created, and to vary that policy at any time.
>...
> The early allocation thing was created to "blur the edges."
> That is, to cover the case where exactly the case that Eric
> describes arises, but where the registries require publication
> of a document (usually an RFC). The procedures were documented
> in RFC 4020, but that was almost in the nature of an
> experiment. The document in hand is attempting to tighten up
> the rules so that IANA knows how to handle early allocations.
>...

Adrian,

Everything you say seems fine to me for the cases you are
focusing on, but I hope that any changes to 4020bis keep two
things in mind lest we find ourselves tangled in rules and
prohibiting some reasonable behavior (a subset of which is used
now).

(1) RFC 5226 provides, as you put it, "a good spectrum of
allocation policies" but a WG (or other entity creating a
registry) can specify variations on them and, if necessary,
completely different strategies and methods.  As long as they
are acceptable to IANA and whatever approving bodies are
relevant, 5226 is not a closed list.  In particular, more than
one registry definition process has discovered that the 5226
language describing the role of a Designated Expert and the
various publication models are not quite right for their needs.

(2) We've discovered in several WGs and registries that early
allocation is just the wrong thing to do, often for reasons that
overlap some of Eric's concerns.  I don't see that as a problem
as long as 4020bis remains clear that early allocation is an
option that one can choose and, if one does, this is how it
works rather than appearing to recommend its broad use.

thanks,
   john



Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-22 Thread John C Klensin


--On Sunday, 22 September, 2013 12:59 -0400 Paul Wouters
 wrote:

>> Except that essentially all services other than email have
>> gained popularity  in centralized form, including IM.
> 
> Note that decentralising makes you less anonymous. If everyone
> runs
> their own jabber service with TLS and OTR, you are less
> anonymous than
> today. So "decentralising" is not a solution on its own for
> meta-data
> tracking.

Perhaps more generally, there may be tradeoffs between content
privacy and tracking who is talking with whom.  For the former,
decentralization is valuable because efforts to compromise the
endpoints and messages stored on them without leaving tracks are
harder.  In particular, if I run some node in a highly
distributed environment, a court order demanding content or logs
(or a call "asking" that I "cooperate" in disclosing data,
keys, etc.) would be very difficult to keep secret from me (even
if it prevented me from telling my friends/peers).   And a lot
more of those court orders or notes would be required than in a
centralized environment.  On the other hand, as you point out,
traffic monitoring is lots easier if IP addresses identify
people or even small clusters of people.

The other interesting aspect of the problem is that, if we want
to get serious about distributing applications down to very
small scale, part of that effort is, I believe necessarily,
getting serious about IPv6 and avoidance of highly centralized
conversion and address translation functions.

john





RE: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-22 Thread John C Klensin


--On Sunday, 22 September, 2013 17:37 + Christian Huitema
 wrote:

>...
> It is very true that innovation can only be sustained with a
> revenue stream. But we could argue that several services have
> now become pretty much standardized, with very little
> additional innovation going on. Those services are prime
> candidates for an open and distributed implementation. I mean,
> could a WG design a service that provides a stream of personal
> updates and a store of pictures and is only accessible to my
> friends? And could providers make some business by selling
> personal servers, or maybe personal virtual servers? Maybe I
> am a dreamer, but hey, nothing ever happens if you don't dream
> of it!

I agree completely.  However, one could equally well say that
operations can only be sustained with a revenue stream and trust
models among parties that don't already have first-hand
relationships can get a tad complicated.  Setting up a
distributed email environment that supports secure communication
among a small circle of friends (especially
technically-competent ones) is pretty easy, even easier than the
service you posit above.  Things become difficult and start to
encourage centralized behavior when, e.g., (i) the community
allows basic Internet service providers to either prohibit
running "servers" or make it unreasonably expensive, (ii) one
wants the communications to be persistent enough that storage,
backup, and operations become a big deal, and/or (iii) one
wants on-net or in-band ways to introduce new parties to the
group when there are Bad Guys out there (which more or less
reinvents the PGP problem).  

Architecturally, one can make a case that the Internet is much
better designed for peer to peer arrangements than for client to
Big Centrally-Controlled Server ones, even though trends in
recent years run in the latter direction (and I still have
trouble telling the fundamental structural differences between a
centralized operation with extensive "web services" and users on
dumb machines on the one hand and the central computer services
operations of my youth on the other).

So, a good idea and one that should be, IMO, pursued.  But there
are a lot of interesting and complex non-technical barriers.

best,
   john




Re: [Fwd: I-D Action: draft-carpenter-prismatic-reflections-00.txt]

2013-09-22 Thread John C Klensin


--On Sunday, 22 September, 2013 07:02 -0400 Noel Chiappa
 wrote:

>...
> Yes. $$$. Nobody makes much/any money off email because it is
> so de-centralized. People who build wonderful new applications
> build them in a centralized way so that they can control them.
> And they want to control them so that they can monetize them.

That is even true of the large email providers who are happy to
provide "free" email in return for being able leverage their
other products and/or "sell" the users and user base to
advertisers.

And people, including, I've noticed, a lot of IETF participants,
go along.  Email is, in practice, a lot more centralized than it
was ten or 15 years ago and is at risk of getting more, not only
as more users migrate but as those providers decide it is easier
to trust only each other.  With DKIM, increasing use of
blacklists, and other things, the latter may be better (from a
distributed environment standpoint) than it was a half-dozen
years ago, but I'm concerned that the pattern may be cyclic with
new domains providing new challenges and incentives for "trust
those you know already" models.

   john




Re: Dotless in draft-ietf-homenet-arch-10

2013-09-20 Thread John Levine
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

In article <6.2.5.6.2.20130920070952.0664d...@elandnews.com> you write:
>Hi Spencer,
>
>I read your DISCUSS about draft-ietf-homenet-arch-10:
>
>   'Is there a useful reference that could be provided for "dotless"?'

Another possibility is draft-hoffine-already-dotless.  It's in the ISE
queue, and I think it's reasonably likely it'll be published.

R's,
John
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.21 (FreeBSD)

iEYEARECAAYFAlI8fykACgkQkEiFRdeC/kV7igCfeOEJ1OJ+HEiMXuhQSpIYD9OD
3mkAniWmk1q1WZJZ2j8l+YxmPIhom/uB
=cIFH
-END PGP SIGNATURE-


Re: Transparency in Specifications and PRISM-class attacks

2013-09-20 Thread John C Klensin


--On Friday, September 20, 2013 10:15 -0400 Ted Lemon
 wrote:

> On Sep 20, 2013, at 9:12 AM, Harald Alvestrand
>  wrote:
>> From the stack I'm currently working on, I find the ICE spec
>> to be convoluted, but the SDP spec is worse, becaue it's
>> spread across so many documents, and there are pieces where
>> people seem to have agreed to ship documents rather than
>> agree on what they meant. I have not found security
>> implications of these issues.
> 
> This sort of thing is a serious problem; people do make
> efforts to address it by writing online guides to protocol
> suites, but this isn't always successful, and for that matter
> isn't always done.   We could certainly do better here.

Ted,

Based in part on experience with the specs of, and discussions
in, other standards bodies, the problem with guides (online or
not) is 

(1) They may contain errors and almost always have omissions.
The latter are often caused by the perfectly good intention of
simplifying things and making them understandable by covering
only the important cases.

(2) If they are comprehensible and the standard is not, people
tend to refer to them and not the standard.  That ultimately
turns them into the "real" standard as far as the marketplace is
concerned.   FWIW, the same problem can, and has, happened with
good reference implementations.

I don't know of any general solution to those problems, but I
think the community and the IESG have got to be a lot more
willing to push back on a spec because it is incomprehensible or
contains too many options than has been the case in recent years.

   john





Re: ORCID - unique identifiers for contributors

2013-09-19 Thread John Levine
>> I would even suggest that all I-D authors, at the very least, should
>> need to register with the IETF to submit documents. 
>
>Oddly enough, back in the Dark Ages (i.e. the ARPANET), the DDN maintained
>such a registry, and so if you Google 'NC3 ARPANET' you will see that that
>was the ID assigned to me back then. We could easily do something similar.

It carried over to the early NetSol registries, where I was JL7, but
please include me out this time.

It might be useful to have some way for RFC authors to create a
persistent forwarding address, if they want to do so.  We already have
a place for authors to include an ORCID URL, if they want to.

But it is really not our problem to make life easier for grad students
in the 2040s trying to create influence graphs from the
acknowledgement sections of dusty old RFCs.

R's,
George


Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe

2013-09-18 Thread John C Klensin


--On Wednesday, September 18, 2013 17:22 -0400 Alan Clark
 wrote:

> John, Brian
> 
> Most standards organizations require that participants who
> have, or whose company has, IPR relevant to a potential
> standard, disclose this at an early stage and at least prior
> to publication.
> 
> The participants in the IETF are individuals however RFC3979
> addresses this by stating that any individual participating in
> an IETF discussion "must" make a disclosure if they are aware
> of IPR from themselves, their employer or sponsor, that could
> be asserted against an implementation of a contribution. The
> question this raises is - what does participation in a
>...

Alan,

Variations on these themes and options have been discussed
multiple times.  Of course, circumstances change and it might be
worth reviewing them again, especially if you have new
information.  However, may I strongly suggest that you take the
question to the ipr-wg mailing list.   Most or all of the people
who are significantly interested in this topic, including those
who are most responsible for the current rules and conventions,
are on that list.  Your raising it there would permit a more
focused and educated discussion than you are likely to find on
the main IETF list.

Subscription and other information is at
https://www.ietf.org/mailman/listinfo/ipr-wg

best,
   john




Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe

2013-09-18 Thread John C Klensin


--On Thursday, September 19, 2013 07:57 +1200 Brian E Carpenter
 wrote:

> On 17/09/2013 05:34, Alan Clark wrote:
> ...
>> It should be noted that the duty to disclose IPR is NOT ONLY
>> for the authors of a draft, and the IETF "reminder" system
>> seems to be focused solely on authors. The duty to disclose
>> IPR lies with any individual or company that participates in
>> the IETF not just authors.
> 
> Companies don't participate in the IETF; the duty of
> disclosure is specifically placed on individual contributors
> and applies to patents "reasonably and personally known" to
> them.
> 
> IANAL but I did read the BCP.

Brian,

That isn't how I interpreted Alan's point.  My version would be
that, if the shepherd template writeup says "make sure that the
authors are up-to-date" (or anything equivalent) it should also
say "ask/remind the WG participants too".   IMO, that is a
perfectly reasonable and orderly suggestion (and no lawyer is
required to figure it out).   One inference from Glen's point
that authors have already certified that they have provided
anything they need to provide by the time an I-D is posted with
the "full compliance" language is that it may actually be more
important to remind general participants in the WG  than to ask
the authors.

   john





Re: ORCID - unique identifiers for contributors

2013-09-18 Thread John Levine
>There are, in the RfC I used as an example, far more acknowledged
>contributors, than authors. No addresses for those contributors are
>given.

As far as I can tell, nobody else considers that to be a problem.

I have written a bunch of books and looked at a lot of bibliographic
records, and I have never, ever seen any of them try to catalog the
acknowledgements.  Indeed, in many books the acknowledgements are
deliberately very informal, e.g., "thanks to Grandma Nell for all the
cookies and to George for his unwavering support."  George turns out
to be his dog.

R's,
John

PS: On the other hand:
http://www.condenaststore.com/-sp/On-the-Internet-nobody-knows-you-re-a-dog-New-Yorker-Cartoon-Prints_i8562841_.htm


Re: ORCID - unique identifiers for contributors

2013-09-18 Thread John C Klensin


--On Wednesday, September 18, 2013 14:30 +0100 Andy Mabbett
 wrote:

> On 18 September 2013 14:04, Tony Hansen  wrote:
>> I just re-read your original message to ietf@ietf.org. What I
>> had originally taken as a complaint about getting a way to
>> have a unique id (in this case, an ORCID) for the authors was
>> instead a complaint about getting a unique id for the people
>> listed in the acknowledgements.
>> 
>> I can't say I have a solution for that one.
> 
> It wasn't a complaint, but a suggested solution, for both
> authors and other named contributors.

Andy, we just don't have a tradition of identifying people who
contributed to RFCs with either contact or identification
information.  It is explicitly possible when "Contributors"
sections are created and people are listed there, but contact or
identification information is not required in that section,
rarely provided, and, IIRC, not supported by the existing tools.

That doesn't necessarily mean that doing so is a bad idea
(although I contend that getting it down to listings in
Acknowledgments would be) but that making enough changes to both
incorporate the information and make it available as metadata
would be a rather significant amount of work and would probably
reopen policy issues about who is entitled to be listed.

For those who want to use ORCIDs, the suggestion made by Tony
and others to just use the author URI field is the path of least
resistance and is usable immediately.  A URN embedding has
several things to recommend it over that (mostly technical
issues that would be clutter on this list).   You would need to
have a discussion with the RFC Editor as to whether, e.g.,
ORCIDs inserted as parenthetical notes after names in
Contributor sections or even acknowledgments would be tolerated
or, given a collection of rules about URIs in RFCs, removed, but
you could at least do that in I-Ds without getting community
approval.

If you want and can justify more formal recognition for ORCIDs
as special and/or required, you haven't, IMO, made that case
yet.  Perhaps more important from your point of view, if you
were, impossibly, to get that consensus tomorrow, it would
probably be years [1] before you'd see complete implementation.

best,
   john

[1] Slightly-informed guess but I no longer have visibility into
ongoing scheduling and priority decisions.




Re: PS Characterization Clarified

2013-09-18 Thread John C Klensin


--On Wednesday, September 18, 2013 10:59 +0200 Olaf Kolkman
 wrote:

>> However, because the document will be read externally, I
>> prefer that it be "IETF" in all of the places you identify.
>> If we have to hold our noses and claim that the community
>> authorized the IESG actions by failing to appeal or to recall
>> the entire IESG, that would be true if unfortunate.  I would
>> not like to see anything in this document that appears to
>> authorize IESG actions or process changes in the future that
>> are not clearly authorized by community consensus regardless
>> of how we interpret what happened in the past.
>...

> But one of the things that we should try to maintain in making
> that change is the notion that the IESG does  have a almost
> key-role in doing technical review. You made the point that
> that is an important distinction between 'us' and formal SDOs.


It doesn't affect the document but can we adjust our vocabulary
and thinking to use, e.g., "more traditional" rather than
"formal".  There is, IMO, too little that we do that is
"informal" any more, but that isn't the point.

> Therefore I propose that that last occurrence reads:

>...

> I think that this language doesn't set precedence and doesn't
> prescribe how the review is done, only that the IESG does do
> review.
>...
> 
> In full context:
> 
> In fact, the IETF review is more extensive than that done
> in other SDOs owing to the cross-area technical review
> performed by the IETF, exemplified by technical review by
> the full IESG at the last stage of specification development.
> That position is further strengthened by the common
> presence of interoperable running code and implementation
> before publication as a Proposed Standard.

> Does that work?

The new sentence does work and is, IMO, excellent.

I may be partially responsible for the first sentence but, given
other comments, suggest that you at least insert "some" so that
it ends up being "...more extensive than that done in some other
SDOs owing...".  That makes it a tad less combative and avoids a
potentially-contentious argument about counterexamples.

The last sentence is probably ok although, if we were to do an
actual count, I'd guess that the fraction of Proposed Standards
for which implemented and interoperability-tested conforming
running code exists at the time of approval is somewhat less
than "common".

john



Re: [IETF] Re: ORCID - unique identifiers for contributors

2013-09-17 Thread John Levine
>It's practically essential for academics whose career depends on
>attribution of publications and on citation counts (and for the
>people who hire or promote them).

Gee, several of the other John Levines have published way more than I
have.  If what we want is citation counts, confuse away.

R's,
John

PS: If you think I think this topic has been beaten to death and back,
you wouldn't be mistaken.


Re: ORCID - unique identifiers for contributors

2013-09-17 Thread John Levine
>Having an IETF identity is OK if all you ever publish is in the IETF. Some of
>our participants also publish at other SDOs such as IEEE, W3C, ITU, and quite
>a few publish Academic papers. Using the same identifier for all these places
>would be useful, and that single identifier is not going to be an @ietf.org
>email address.

If you want Yahoo mail or gmail or pobox.com, you know where to find it.

Or people here are, I expect, mostly able to arrange for their own
vanity domains.

R's,
John, ab...@no.sp.am


Re: PS Characterization Clarified

2013-09-17 Thread John C Klensin
Pete,

I generally agree with your changes and consider them important
-- the IESG should be seen in our procedural documents as
evaluating and reflecting the consensus of the IETF, not acting
independently of it.

Of the various places in the document in which "IESG" now
appears, only one of them should, IMO, even be controversial.
It is tied up with what I think is going on in your exchange
with Scott:

--On Tuesday, September 17, 2013 18:10 -0500 Pete Resnick
 wrote:

>>> Section 2:
>...
>>> "the IESG strengthened its review"
>...
>>> The IETF as a whole, through directorate reviews, area
>>> reviews, doctor reviews, *and* IESG reviews, has evolved,
>>> strengthened, ensured, etc., its reviews.
>>>  
>> I believe that change would be factually incorrect
> 
> Which part of the above do you think is factually incorrect?

The issue here --about which I mostly agree with Scott but still
believe your fix is worth making-- is that the impetus for the
increased and more intense review, including imposing a number
of requirements that go well beyond those of 2026, did not
originate in the community but entirely within the IESG.  It
didn't necessarily originate with explicit decisions.  In many
cases, it started with an AD taking the position that, unless
certain changes were made or things explained to his (or
occasionally her) satisfaction, the document would rot in the
approval process.  Later IESG moves to enable overrides and
clarify conditions for "discuss" positions can be seen as
attempts to remedy those abuses but, by then, it was too late
for Proposed Standard.  And, fwiw, those changes originated
within the IESG and were not really subject to a community
consensus process either.

However, because the document will be read externally, I prefer
that it be "IETF" in all of the places you identify.  If we have
to hold our noses and claim that the community authorized the
IESG actions by failing to appeal or to recall the entire IESG,
that would be true if unfortunate.  I would not like to see
anything in this document that appears to authorize IESG actions
or process changes in the future that are not clearly authorized
by community consensus regardless of how we interpret what
happened in the past.

john





Re: ORCID - unique identifiers for contributors

2013-09-17 Thread John Levine
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

>Asking for ORCID support in the tool set and asking for IETF endorsement
>are two very different things.
>
>Having tool support for it is a necessary first step to permitting IETF
>contributors to gain experience with it.   We need that experience before we
>can talk about consensus.

The toolset already lets you put in URIs.  What else do you think it needs?

R's,
John
-BEGIN PGP SIGNATURE-
Version: GnuPG v2.0.21 (FreeBSD)

iEYEARECAAYFAlI4gwcACgkQkEiFRdeC/kXqJQCfRBk5uNBf+EEMHlj6BWSRvQCL
YsUAnA6ynSuwlRSigpjw/dhQgQNZltmy
=1xR4
-END PGP SIGNATURE-


Re: ORCID - unique identifiers for contributors

2013-09-17 Thread John C Klensin


--On Tuesday, September 17, 2013 11:20 -0400 Michael Richardson
 wrote:

> 
> I did not know about ORCID before this thread.
> I think it is brilliant, and what I've read about the mandate
> of orcid.org, and how it is managed, I am enthusiastic.
> 
> I agree with what Joel wrote:
> 
> Asking for ORCID support in the tool set and asking for IETF
> endorsement are two very different things.
> 
> Having tool support for it is a necessary first step to
> permitting IETF contributors to gain experience with it.   We
> need that experience before we can talk about consensus.
> 
> So, permit ORCID, but not enforce.

The more I think about it, the more I think that Andy or someone
else who understands ORCIDs and the relevant organizations,
etc., should be working on a URN embedding of the things.  Since
we already have provisions for URIs in contact information, an
ORCID namespace would permit the above without additional
tooling or special RFC Editor decision making.  It would also
avoid entanglement with and controversies about the rather long
RFC Editor [re]tooling queue.

Doing the write-up would require a bit of effort but, in
principle,
URN:ORCID:
is pretty close to trivially obvious.
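
For concreteness -- and purely as an illustrative sketch, since
no such URN namespace has actually been registered -- the result
would be identifiers along the lines of

   urn:orcid:0000-0002-1825-0097

with the usual sixteen-digit ORCID as the namespace-specific
string (the particular value above is just an example of the
format).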

Comments about dogfood-eating and not inventing new mechanisms
when we have existing ones might be inserted by reference here.

> An interesting second (or third) conversation might be about
> how I could insert ORCIDs into the meta-data for already
> published documents. 

With a URN embedding that question would turn into the much more
general one about how URIs in contact metadata could be
retroactively inserted and updated. In some ways, that is
actually an easier question.

best,
   john







RE: ORCID - unique identifiers for bibliographers

2013-09-17 Thread John C Klensin


--On Monday, September 16, 2013 22:28 -0400 John R Levine
 wrote:

>> I do have an identical twin brother, and hashing the DNA
>> sequence collides more regularly than either random or
>> MAC-based interface-identifiers in IPv6.
>> 
>> Also, he doesn't have the same opinions.
> 
> Clearly, one of you needs to get to know some retroviruses.

Or you aren't identical enough.  Clearly the hash should be
computed over both your DNA sequence and a canonical summary of
your opinions.

Are we far enough down this rathole?

john





Re: ORCID - unique identifiers for contributors

2013-09-17 Thread John C Klensin
Hi.  I agree completely with Joel, but let me add a bit more
detail and a possible alternative...

--On Tuesday, September 17, 2013 08:56 -0400 "Joel M. Halpern"
 wrote:

> If you are asking that she arrange for the tools
> to include provision for using ORCIDs, that is a reasonable
> request.  Such a request would presumably be prioritized along
> with the other tooling improvements that are under
> consideration.

And either explicit provision for ORCID or more general
provisions for other identifying characteristics might easily be
added as part of the still-unspecified conversions to support
non-ASCII characters.  

That said, you could get ORCID IDs into RFCs on your own
initiative by defining and registering a URN type that embedded
the ORCID and then, in xml2rfc terms, using the <uri> element of
<address> to capture it.  If you want to pursue that
course, RFCs 3044 and 3187 (and others) provide examples of how
it is done although I would suggest that you also consult with
the URNBIS WG before proceeding because some of the procedures
are proposed to be changed.  The RFC Editor (at least) would
presumably need to decide that ORCID-based URNs were
sufficiently stable, but no extra tooling would be required.
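
To make that concrete, here is a minimal sketch of what the
xml2rfc source might look like -- the name, address, and
identifier below are placeholders, and the plain orcid.org URI
form shown already works today without any URN registration:

   <author fullname="Jane Q. Public" initials="J.Q."
           surname="Public">
     <address>
       <email>jane@example.org</email>
       <!-- ORCID carried in the existing URI field; a urn:orcid
            form could be substituted here if such a namespace
            were ever registered -->
       <uri>http://orcid.org/0000-0002-1825-0097</uri>
     </address>
   </author>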

> On the other hand, if you are asking that the IETF endorse or
> encourage such uses, there are two problems.  First, the RFC
> Editor does not speak for the IETF.  You need to actually get
> a determination of IETF rough consensus on the ietf email
> list.  That consensus would need to be based on a more
> specific question than "do we want to allow ORCIDs", and then
> would be judged on that question by the IETF chair.

And, if you asked that the ORCID be used _instead_ of other
contact information, the issues that several people have raised
would apply in that discussion and, at minimum, would make
getting consensus harder.

john





Re: PS Characterization Clarified

2013-09-17 Thread John C Klensin


--On Tuesday, September 17, 2013 11:32 +0100 Dave Cridland
 wrote:

> I read John's message as being against the use of the phrase
> "in exceptional cases". I would also like to avoid that; it
> suggests that some exceptional argument may have to be made,
> and has the implication that it essentially operates outside
> the process.

Exactly.

> I would prefer the less formidable-sounding "on occasion",
> which still implies relative rarity.

And "on occasion" is at least as good or better than my
suggestions of "usually", "commonly", or "normally" although I
think any of the four would be satisfactory.


--On Tuesday, September 17, 2013 07:06 -0400 Scott Brim
 wrote:

>...
> Exceptions and arguments for and against are part of the
> process. Having a process with no consideration for exceptions
> would be exceptional.

Scott, in an IETF technical context, I'd completely agree,
although some words like "consideration for edge cases" would be
much more precise if that is actually what you are alluding to.
But part of the intent of this revision to 2026 is to make what
we are doing more clear to outsiders who are making a good-faith
effort to understand us and our standards.  In that context,
what you say above, when combined with Olaf's text, is likely to
be read as:

"We regularly, and as a matter of course, consider
waiving our requirements for Proposed Standard entirely
and adopt specifications using entirely different (and
undocumented) criteria."  

That is misleading at best.  In the interest of clarity, I don't
think we should open the door to that sort of interpretation if
we can avoid it.

I don't think it belongs in this document (it is adequately
covered by Olaf's new text about other sections), but it is
worth remembering that we do have a procedure for making
precisely the type of exceptions my interpretation above
implies: the Variance Procedure of Section 9.1 of 2026.   I
cannot remember that provision being invoked since 2026 was
published -- it really is "exceptional" in that sense.  Its
existence may be another reason for removing "exceptional" from
the proposed new text because it could be read as implying that
we have to use the Section 9.1 procedure for precisely the cases
of a well-documented, but slightly incomplete, specification that
most of us consider normal.  In particular, it would make the
approval of
the specs that Barry cited in his examples invalid without
invoking the rather complex procedure of Section 9.1.  I'd
certainly not like to have text in this update that encourages
that interpretation and the corresponding appeals -- it would
create a different path to the restriction Barry is concerned
about.

john



Re: PS Characterization Clarified

2013-09-17 Thread John C Klensin


--On Tuesday, September 17, 2013 11:47 +0200 Olaf Kolkman
 wrote:

> 
> 
> Based on the conversation below I converged to:
> 
> 
>
>   While less mature specifications will usually be
> published as   Informational or Experimental RFCs, the
> IETF may, in exceptional   cases, publish a specification
> that still contains areas for   improvement or certain
> uncertainties about whether the best   engineering choices
> are made.  In those cases that fact will be   clearly and
> prominently communicated in the document e.g. in the
> abstract, the introduction, or a separate section or statement.
> 

I suggest that "communicated in the document e.g. in..." now
essentially amounts to "... communicated in the document, e.g.
in the document." since the examples span the entire set of
possibilities.   Consequently, for editorial reasons and in the
interest of brevity, I recommend just stopping after
"prominently communicated in the document.".  But, since the
added words are not harmful, I have no problem with your leaving
them if you prefer.

   john




RE: ORCID - unique identifiers for bibliographers

2013-09-16 Thread John R Levine

I do have an identical twin brother, and hashing the DNA sequence collides more 
regularly than either random or MAC-based interface-identifiers in IPv6.

Also, he doesn't have the same opinions.


Clearly, one of you needs to get to know some retroviruses.

Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
"I dropped the toothpaste", said Tom, crestfallenly.


Re: ORCID - unique identifiers for contributors

2013-09-16 Thread John Levine
>How do I know that the sender of this message actually has the right
>to claim the ORCID in question (-0001-5882-6823)? The web page
>doesn't present anything (such as a public key) that could be used
>for authentication.

I dunno.  How do we know who brian.e.carpen...@gmail.com is?  I can
tell you from experience that a lot of people think they are
john.lev...@gmail.com, and all but one of them are mistaken.

R's,
John

PS: Now that I think about it, you can already put in a personal URL
in xml2rfc, so if someone wants to use an ORCID URL, they can do so
right now.







Re: ORCID - unique identifiers for bibliographers

2013-09-16 Thread John Levine

>Since this has turned out to be ambiguous, I have decided to instead use a
>SHA-256 hash of my DNA sequence:
>
>9f00a4-9d1379-002a03-007184-905f6f-796534-06f9da-304b11-0f88d7-92192e-98b2

How does your identical twin brother feel about this?



Re: ORCID - unique identifiers for bibliographers

2013-09-16 Thread John Levine
>* The purpose of ORCID is to /uniquely/ identify individuals, both to
>differentiate between people with similar names, and to unify works
>where the author uses variant or changed names

If you think that's a good idea, I don't see any reason to forbid
people from including an ORCID along with the real contact info, but I
would be extremely unhappy if the IETF were to mandate it or anything
like it.

My name turns out to be fairly common.  Over the years, I have been
confused with a comp sci professor in Edinburgh, a psychology
professor in Pittsburgh, another comp sci researcher in Georgia, a
psychiatrist in Cambridge MA, a composer in Cambridge UK, a car buyer
in Phoenix, and some random guy in Brooklyn, all of whom happen to be
named John Levine.  Tough.  Not my problem.

I also think that it's time for people to get over the "someone might
spam me so I'm going to hide" nonsense.  The point of putting contact
info in an RFC is so that people can contact you, and the most
ubiquitous contact identifiers we have remain e-mail addresses.  I
still use the same e-mail address I've had since 1993 (the one in the
signature below), and my garden variety spam filters are quite able to
keep it usable.  If I can do it, so can you.

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly
 


Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe

2013-09-16 Thread John C Klensin


--On Monday, September 16, 2013 19:35 +0700 Glen Zorn
 wrote:

>... 
>> The wording of this question is not a choice. As WG chairs we
>> are required to answer the following question which is part
>> of the Shepherd write-up as per the instructions from the
>> IESG http://www.ietf.org/iesg/template/doc-writeup.txt:
>> 
>>> (7) Has each author confirmed that any and all appropriate
>>> IPR
>>> disclosures required for full conformance with the provisions
>>> of BCP 78
>>> and BCP 79 have already been filed. If not, explain why.

>> We have no choice but to relay the question to the authors.
> 
> I see, just following orders.

For whatever it is worth, I think there is a rather different
problem here.  I also believe it is easily solved and that, if
it is not, we have a far deeper problem.

I believe the document writeup that the IESG posts at a given
time is simply a way of identifying the information the IESG
wants (or wants to be reassured about) and a template for a
convenient way to supply that information.  If that were not the
case:

 (i) We would expect RFC 4858 to be a BCP, not an
Informational document.
 (ii) The writeup template would need to represent
community consensus after IETF LC, not be something the
IESG put together and revises from time to time.
 (iii) The various experiments in alternative template
formats and shepherding theories would be improper or
invalid without community consensus, probably expressed
through formal "process experiment" authorizations of
the RFC 3933 species.

The first sentence of the writeup template, "As required by RFC
4858, this is the current template..." is technically invalid
because RFC 4858, as an Informational document, cannot _require_
anything of the standards process.  Fortunately, it does not say
"you are required to supply this information in this form" or
"you are required to ask precisely these questions", which would
be far worse.

From my point of view, an entirely reasonable response to the
comments above that start "As WG chairs we are required to
answer the following question..." and "We have no choice but to
relay..." is that you are required to do no such thing.  The
writeup template is guidance to the shepherd about information
and assurances the IESG wants to have readily available during
the review process, nothing more.   I also believe that any AD
who has become sufficiently impressed by his [1] power and the
authority of IETF-created procedures to insist on a WG chair's
asking a question and getting an answer in some particular form
has been on the IESG, or otherwise "in the leadership" much too
long [2].

In fairness to the IESG, "Has each author confirmed..." doesn't
require that the document shepherd or WG Chair ask the question
in any particular way.   Especially if I knew that some authors
might be uncomfortable being, in Glen's words, treated as
8-year-old children, I think I would ask the question in a form
similar to "since the I-Ds in which you were involved were
posted, have you had any thoughts or encountered any information
that would require filing of additional IPR disclosures?".
That question is a reminder that might be (and occasionally has
been) useful.  A negative answer to it would be fully as much
"confirming that any and all appropriate IPR disclosures..."
have been filed as one whose implications are closer to "were
you telling the truth when you posted that I-D".  I think Glen's
objections to the latter are entirely reasonable, but there is
no need to go there.

Finally, I think a pre-LC reminder is entirely appropriate,
especially for revised documents or older ones for which some of
the listed authors may no longer be active. I assume, or at
least hope, that concern is were this item in the writeup
template came from.   Especially for authors who fall into those
categories, asking whether they have been paying attention and
have kept IPR disclosures up to date with the evolving document
is, IMO, both reasonable and appropriate.  Personally, I'm
inclined to ask for an affirmative commitment about willingness
to participate actively in the AUTH48 signoff process at the
same time -- non-response to that one, IMO, justifies trimming
the author count and creating a Contributors section.

It seems to me that, in this particular case, too many people
are assuming a far more rigid process than actually exists or
can be justified by any IETF consensus procedure.  Let's just
stop that.

best,
   john

[1] pronoun chosen to reflect current IESG composition and with
the understanding that it might be part of the problem.

[2] Any WG with strong consensus about these issues and at least
20 active, nomcom-eligible participants knows what to do about
such a problem should it ever occur.  Right?




Re: PS Characterization Clarified

2013-09-16 Thread John C Klensin


--On Monday, September 16, 2013 10:43 -0400 Barry Leiba
 wrote:

>...
> I agree that we're normally requiring much more of PS
> documents than we used to, and that it's good that we document
> that and let external organizations know.  At the same time,
> we are sometimes proposing things that we know not to be fully
> baked (some of these came out of the sieve, imapext, and morg
> working groups, for example), but we *do* want to propose them
> as standards, not make them Experimental.  I want to be sure
> we have a way to continue to do that.  The text Olaf proposes
> is, I think, acceptable for that.

In case it wasn't clear, I have no problems with that at all.  I
was objecting to three things that Olaf's newer text has fixed:

(1) It is a very strong assertion to say that the above is
"exceptional".  In particular, "exceptional" would normally
imply a different or supplemental approval process to make the
exception.  If all that is intended is to say that we don't do
it very often, then "commonly" (Olaf's term), "usually", or
perhaps even "normally" are better terms.

(2) While it actually may be the common practice, I have
difficulty with anything that reinforces the notion that the
IESG makes standardization decisions separate from IETF
consensus.  While it isn't current practice either, I believe
that, were the IESG to actually do that in an area of
significance, it would call for appeals and/or recalls.   Olaf's
older text implied that the decision to publish a
not-fully-mature or incomplete specification was entirely an
IESG one.   While the text in 2026, especially taken out of
context, is no better (and Olaf just copied the relevant bits),
I have a problem with any action that appears to reinforce that
view or to grant the IESG authority to act independently of the
community.

(3) As a matter of policy and RFCs of editorially high quality,
I think it is better to have explanations of loose ends and
not-fully-baked characteristics of standards integrated into the
document rather than using IESG Statements.  I don't think
Olaf's new "front page" requirement is correct (although I can
live with it) -- I'd rather just say "clearly and prominently
communicated in the document" and leave the "is it clear and
prominent enough" question for Last Call -- but don't want to
see it _forced_ into an IESG statement.

I do note that "front page" and "Introduction" are typically
inconsistent requirements (header + abstract + status and
copyright boilerplate + TOC usually force the Introduction to
the second or third page).  More important, a real
explanation of half-baked features (and why they aren't fully
baked) may require a section, or more than one, on its own.  One
would normally like a cross reference to those sections in the
Introduction and possibly even mention in the Abstract, but
forcing the text into the Introduction (even with "preferably"
given experience with how easily that turns into a nearly-firm
requirement) is just a bad idea in a procedures document.  We
should say "clearly", "prominently", or both and then leave
specifics about what that means to conversations between the
authors, the IESG and community, and the RFC Editor.

best,
john




Re: PS Characterization Clarified

2013-09-16 Thread John C Klensin


--On Monday, September 16, 2013 15:58 +0200 Olaf Kolkman
 wrote:

> [Barry added explicitly to the CC as this speaks to 'his'
> issue]
> 
> On 13 sep. 2013, at 20:57, John C Klensin 
> wrote:
> 
> [… skip …]
> 
>>> *   Added the Further Consideration section based on
>>> discussion on the mailing list.
>> 
>> Unfortunately, IMO, it is misleading to the extent that you
>> are capturing existing practice rather than taking us off in new
>> directions.  
> 
> Yeah it is a thin line. But the language was introduced to
> keep a  current practice possible (as argued by Barry I
> believe).

Understood.  Barry and I are on the same page wrt not wanting to
accidentally restrict established existing practices.

>> You wrote:
>> 
>>> While commonly less mature specifications will be published
>>> as Informational or Experimental RFCs, the IETF may, in
>...

> I see where you are going. 
> 
> 
> 
> While commonly less mature specifications will be published as
> Informational or Experimental RFCs, the IETF may, in
> exceptional cases, publish a specification that still contains
> areas for improvement or  certain uncertainties about whether
> the best engineering choices are made.  In those cases that
> fact will be clearly communicated in the document prefereably
> on the front page of the RFC e.g. in the introduction or a
> separate statement.
> 
> 
> 
> I hope that removing the example of the IESG statement makes
> clear that this is normally part of the development process.

Yes.

Editorial nits:

* "While commonly less mature specifications will be
published..." has "commonly" qualifying "less mature".  It is
amusing to think about what that might mean, but it isn't what
you intended.  Try "While less mature specifications will
usually be published...".  Replace "usually" with "commonly" or
"normally" if you like, but I think "usually" is closest to what
you are getting at.

* "prefereably" -> "preferably"

>> Additional observations based on mostly-unrelated recent
>> discussions:  
>> 
>> If you are really trying to clean 2026 up and turn the present
>> document into something that can be circulated to other groups
>> without 2026 itself, then the "change control" requirement/
>...
>> Along the same lines but more broadly, both the sections of
>> 2026 you are replacing and your new text, if read in
>> isolation, strongly imply that these are several decisions,
>> including those to approve standardization, that the IESG
>> makes on its own judgment and discretion.  I think it is
>...
>> More important --and related to some of my comments that you
>> deferred to a different discussion-- the "IESG as final
>> _technical_ review and interpreter of consensus" model is very
>> different from that in some other SDOs in which the final
>> approval step is strictly a procedural and/or legal review
>> that is a consensus review only in the sense of verifying
>...
> So noted. 
> 
> As actionable for this draft I take that I explicitly mention
> that Section 4.1 2026 is exclusively updated.

While I understand your desire to keep this short, the pragmatic
reality is that your non-IETF audience is likely to read this
document (especially after you hand it to them) and conclude
that it is the whole story.  Since the natural question that
immediately follows "why should we accept your standards at all"
is "why can't you hand them off to, e.g., ISO, the way that many
national bodies and organizations like IEEE do with many of
their documents".  

Suggestion in the interest of brevity: in addition to mentioning
the above, mention explicitly that there are requirements in
other sections of 2026 that affect what is standardized and how. 

By the way, while I understand all of the reasons why we don't
want to actually replace 2026 (and agree with most of them),
things are getting to the point that it takes far too much
energy to actually figure out what the rules are.  Perhaps it is
time for someone to create an unofficial redlined version of
2026 that incorporates all of the changes and put it up on the
web somewhere.   I think we would want a clear introduction and
disclaimer that it might not be exactly correct and that only the
RFCs are normative, but the accumulation of changes may
otherwise be taking us too far into the obscure.  If we need a
place to put it, it might be a good appendix to the Tao.  And
constructing it might be a good job for a relative newcomer who
is trying to understand the ins and outs of our formal
procedures.

best,
   john



Re: IPR Disclosures for draft-ietf-xrblock-rtcp-xr-qoe

2013-09-16 Thread John C Klensin


--On Monday, September 16, 2013 07:14 -1000 Randy Bush
 wrote:

> can we try to keep life simple?  it is prudent to check what
> (new) ipr exists for a draft at the point where the iesg is
> gonna start the sausage machine to get it to rfc.  if the iesg
> did not do this, we would rightly worry that we were open to a
> submarine job.  this has happened, which is why this formality
> is in place.

Agreed.  I hope there are only two issues in this discussion:

(1) Whether the IESG requires that the question be asked in some
particular form, especially a form that would apply to
other-than-new IPR.  I think the answer to that question is
clearly "no".

(2) Whether the "submitted in full conformance..." statement in
I-Ds is sufficient to cover IPR up to the point of posting of
the I-D.  If the answer is "no", then there is a question of why
we are wasting the bits.  If it is "yes", as I assume it is,
then any pre-sausage questions can and should be limited to IPR
that might be new to one or more of the authors.

> if some subset of the authors prefer to play cute, my alarms
> go off. stuff the draft until they can give a simple direct
> answer.

Agreed.  While I wouldn't make as big an issue of it as he has
(personal taste), I agree with him that asking an author to
affirm that he or she really, really meant it and told the truth
when posting a draft "submitted in full conformance..." is
inappropriate and demeaning.  While I think there might have
been other, more desirable, ways to pursue it, I don't think
that raising the issue falls entirely into the "cute" range.

   john








Re: ORCID - unique identifiers for contributors

2013-09-16 Thread John C Klensin


--On Monday, September 16, 2013 18:34 +0100 Andy Mabbett
 wrote:

>> If the goal is to include contact info for the authors in the
>> document and in fact you can't be contacted using the info is
>> it contact info?
> 
> While I didn't say that the goal was to provide contact
> info[*], an individual can do so through their ORCID profile,
> which they manage and can update at any time.

The goal of the "author's address" section of the RFCs is
_precisely_ contact information.  See, e.g.,
draft-flanagan-style-02 and its predecessors.  

I can see some advantages in including ORCID or some similar
identifier along with the other contact information.  I've been
particularly concerned about a related issue in which we permit
non-ASCII author names and then have even more trouble keeping
track of equivalences than your "J. Smith" example implies and
for which such an identifier would help.   But, unless we were
to figure out how to require, not only that people have ORCIDs
but that they have and maintain contact information there (not
just "can do so"), I'd consider it useful supplemental
information, not a replacement for the contact information that
is now supposed to be present.

Treating an ORCID (or equivalent) as supplemental would also
avoid requiring the RSE to inquire about guarantees about the
permanence and availability of the relevant database.  It may be
fine; I'd just like to avoid having to go there.

best,
   john




Re: PS Characterization Clarified

2013-09-13 Thread John C Klensin


--On Friday, September 13, 2013 16:56 +0200 Olaf Kolkman
 wrote:

>...
> Based on the discussion so far I've made a few modifications
> to the draft.  I am trying to consciously keep this document
> to the minimum that is needed to achieve 'less is more' and
> my feeling is that where we are now is close to the sweetspot
> of consensus.

Olaf,

I'm afraid I need to keep playing "loyal opposition" here.

> *   Added the Further Consideration section based on
> discussion on the mailing list.

Unfortunately, IMO, it is misleading to the extent that you are
capturing existing practice rather than taking us off in new
directions.  You wrote:

> While commonly less mature specifications will be published as
> Informational or Experimental RFCs, the IETF may, in
> exceptional cases, publish a specification that does not match
> the characterizations above as a Proposed Standard.  In those
> cases that fact will be clearly communicated on the front page
> of the RFC e.g. means of an IESG statement.

On the one hand, I can't remember when the IESG has published
something as a Proposed Standard with community consensus and
with an attached IESG statement that says that they and the
community had to hold our collective noses, but decided to
approve as PS anyway.  Because, at least in theory, a PS
represents community consensus, not just IESG consensus (see
below), I would expect (or at least hope for) an immediate
appeal of an approval containing such a statement unless it
(the statement itself, not just the opinion) matched community
consensus developed during Last Call.

Conversely, the existing rules clearly allow a document to be
considered as a Proposed Standard that contains a paragraph
describing loose ends and points of fragility, that expresses
the hope that the cases won't arise very often and that a future
version will clarify how the issues should be handled based on
experience.   That is "no known technical omissions" since the
issues are identified and therefore known and not omissions.  In
the current climate, I'd expect such a document to have a very
hard time on Last Call as people argued for Experimental or even
keeping it as an I-D until all of the loose ends were tied up.
But, if there were rough consensus for approving it, I'd expect
it to be approved without any prefatory, in-document, IESG notes
(snarky or otherwise).

The above may or may not be tied up with the "generally stable"
terminology.  I could see a spec with explicit "this is still
uncertain and, if we are wrong, might change" language in it on
the same basis as the loose end description above.  Such
language would be consistent with "generally stable" but, since
it suggests a known point of potential instability, it is not
consistent with "stable".

Additional observations based on mostly-unrelated recent
discussions:  

If you are really trying to clean 2026 up and turn the present
document into something that can be circulated to other groups
without 2026 itself, then the "change control" requirement/
assumption of RFC 2026 Section 7.1.3 needs to be incorporated
into your new Section 3.  It is not only about internal debates,
it is the rule that explains why we can't just "endorse" a standard
developed elsewhere as an IETF standards track specification.

Along the same lines but more broadly, both the sections of 2026
you are replacing and your new text, if read in isolation,
strongly imply that there are several decisions, including those
to approve standardization, that the IESG makes on its own
judgment and discretion.  I think it is fairly clear from the
rest of 2026 (and 2028 and friends and IETF oral tradition) that
the IESG is a collector and interpreter of community consensus,
not a body that is somehow delegated to use its own judgment.  I
believe that, if an IESG were ever to say something that
amounted to "the community consensus is X, but they are wrong,
so we are selecting or approving not-X", we would either see a
revolution of the same character that brought us to 2026 or the
end of the IETF's effectiveness as a broadly-based standards
body.  

More important --and related to some of my comments that you
deferred to a different discussion-- the "IESG as final
_technical_ review and interpreter of consensus" model is very
different from that in some other SDOs in which the final
approval step is strictly a procedural and/or legal review that
is a consensus review only in the sense of verifying that the
process in earlier stages followed the consensus rules and is
not technical review at all.  I don't think you need to spend
time on that, but you need to avoid things that would make your
document misleading to people who start with that model of how
standards are made as an initial assumption.

best,
 john



Re: not really pgp signing in van

2013-09-10 Thread John R Levine

You go to a Web page that has the HTML or Javascript control for generating a 
keypair. But the keypair is generated on the end user's computer.


So I run Javascript provided by Comodo to generate the key pair.   This means 
that my security depends on my willingness and ability to read possibly 
obfuscated Javascript to make sure that it only uploads the public half of the 
key pair.


I think we're entering the tinfoil zone here.  Comodo is one of the 
largest CAs around, with their entire income depending on people paying 
them to sign web and code certs because they are seen as trustworthy.


How likely is it that they would risk their reputation and hence their 
entire business by screwing around with free promo S/MIME certs?
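
For what it's worth, the property being argued about is only that the
private half never leaves the user's machine.  A minimal sketch of that
pattern in Python (using the third-party "cryptography" package; the
email address and file name are placeholders, and this is not Comodo's
actual enrollment flow):

    # Generate the key pair locally and send the CA only a CSR, which
    # carries the public key plus proof of possession of the private key.
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([
            x509.NameAttribute(NameOID.EMAIL_ADDRESS, "user@example.com"),
        ]))
        .sign(key, hashes.SHA256())
    )

    # Only this blob needs to be uploaded to the CA.
    print(csr.public_bytes(serialization.Encoding.PEM).decode())

    # The private key stays local (here, encrypted on disk).
    with open("smime-key.pem", "wb") as f:
        f.write(key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.PKCS8,
            serialization.BestAvailableEncryption(b"passphrase"),
        ))

Whether one trusts a CA's in-browser JavaScript to behave the same way
is, of course, exactly the question being argued.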


Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
"I dropped the toothpaste", said Tom, crestfallenly.



Re: not really pgp signing in van

2013-09-09 Thread John R Levine

Typical S/MIME keys are issued by CAs that verify them by
sending you mail with a link.  While it is easy to imagine ways that
could be subverted, in practice I've never seen it.


The most obvious way that it can be subverted is that the CA issues you a key 
pair and gives a copy of the private key to one or more others who would like 
either to be able to pretend to be you, or to intercept communication that you 
have encrypted.   I would argue that this is substantially less trustworthy 
than a PGP key!


Like I said, it's easy to imagine ways it could be subverted.  If you 
believe all CAs are crooks, you presumably don't use SSL or TLS either, 
right?


Of course you can _do_ S/MIME with a non-shared key, but not for free, 
and not without privacy implications.  (I'm just assuming that an 
individual can get an S/MIME Cert on a self-generated public key—I 
haven't actually found a CA who offers that service.)



Same issue.  I can send signed mail to a buttload more people with
S/MIME than I can with PGP, because I have their keys in my MUA.
Hypothetically, one of them might be bogus.  Realistically, they aren't.


Very nearly that same degree of assurance can be obtained with PGP; the 
difference is that we don't have a ready system for making it happen.

E.g., if my MUA grabs a copy of your key from a URL where you've published it, 
and validates email from you for a while, it could develop a degree of 
confidence in your key without requiring an external CA, and without that CA 
having a copy of your private key.   Or it could just do ssh-style 
leap-of-faith authentication of the key the first time it sees it; a fake key 
would be quickly detected unless your attacker controls your home MTA or the 
attacked identity's home MTA.
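
As a sketch of that leap-of-faith idea (generic Python, not tied to any
particular MUA; the pin-file location and the assumption that the MUA
can hand us the sender address and raw key bytes are both made up for
the example):

    # ssh-style "trust on first use": pin a fingerprint the first time a
    # sender's key is seen, and warn loudly if it later changes.
    import hashlib, json, os

    PIN_FILE = os.path.expanduser("~/.mua-key-pins.json")

    def _load_pins():
        try:
            with open(PIN_FILE) as f:
                return json.load(f)
        except FileNotFoundError:
            return {}

    def check_key(sender: str, key_bytes: bytes) -> str:
        pins = _load_pins()
        fingerprint = hashlib.sha256(key_bytes).hexdigest()
        if sender not in pins:
            pins[sender] = fingerprint          # first sight: pin it
            with open(PIN_FILE, "w") as f:
                json.dump(pins, f, indent=2)
            return "new key pinned (leap of faith)"
        if pins[sender] == fingerprint:
            return "matches the pinned key"     # confidence grows with use
        return "WARNING: key changed since first seen"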


That would be great if MUAs did that, but they don't.

As I think I've said three times now, the actual support for S/MIME in 
MUAs is a lot better than the support for PGP.  It helps that you can 
extract a correspondent's key from every S/MIME message, rather than 
having to go to a keyserver of some (likely untrustworthy) sort to get the 
PGP keys.


If we think that PGP is so great, how about writing native PGP support for 
Thunderbird and Evolution, and contributing it to the open source 
codebase?


Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly



Re: not really pgp signing in van

2013-09-09 Thread John R Levine

> Yes, and no.  PGP and S/MIME each have their own key distribution
> problems.  With PGP, it's easy to invent a key, and hard to get other
> people's software to trust it.  With S/MIME it's harder to get a key,
> but once you have one, the software is all happy.

That's a bug, not a feature.   The PGP key is almost certainly more
trustworthy than the S/MIME key.

Um, didn't this start out as a discussion about how we should try to get
people using crypto, rather than demanding perfection that will never
happen?  Typical S/MIME keys are issued by CAs that verify them by
sending you mail with a link.  While it is easy to imagine ways that
could be subverted, in practice I've never seen it.


> The MUAs I use (Thunderbird, Alpine, Evolution) support S/MIME a lot
> better than they support PGP.  There's typically a one key command or
> a button to turn signing and encryption on and off, and they all
> automagically import the certs from incoming mail.


Yup.  That's also a bug, not a feature.  I was just wondering why that 
is.  The only implementation I've seen a reference to is Sylpheed, which 
is not widely used.


Same issue.  I can send signed mail to a buttload more people with
S/MIME than I can with PGP, because I have their keys in my MUA.
Hypothetically, one of them might be bogus.  Realistically, they aren't.

R's,
John



Re: What real users think [was: Re: pgp signing in van]

2013-09-09 Thread John Levine

>Believe it or not Ted Nelson had a similar idea when he invented Xanadu
>Hypertext. He was obsessed by copyright and the notion that it would be
>wrong to copy someone else's text to another machine, hence the need for
>links.

Well, yes, but he's never been able to implement it, despite decades
of trying.  (I've known Ted since 1972, so I watched a lot of it
happen.)  Xanadu was always envisioned as a monolithic system that
didn't scale over large numbers of machines or wide geographic areas.
It's really interesting as a conceptual design, but the closest
working implementation is the WWW and that, to put it mildly, left out a
lot.

On the other hand, MIME can do multipart messages consisting of a
sequence of signed bodies right now, and most MUAs display them pretty
well.  I've never seen anything create one other than a list manager
like Mailman or mj2 adding a signature part after a signed body.
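
For the record, assembling such a structure is straightforward with the
stdlib email package; a sketch (the addresses are placeholders and the
"signature" bytes are fake -- a real implementation would compute them
per PGP/MIME, RFC 3156, or S/MIME):

    # multipart/mixed whose parts are individually signed bodies.
    from email.mime.application import MIMEApplication
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    def signed_part(body: str, fake_sig: bytes) -> MIMEMultipart:
        part = MIMEMultipart("signed", micalg="pgp-sha256",
                             protocol="application/pgp-signature")
        part.attach(MIMEText(body))
        part.attach(MIMEApplication(fake_sig, "pgp-signature",
                                    name="signature.asc"))
        return part

    outer = MIMEMultipart("mixed")
    outer["From"] = "author@example.com"
    outer["To"] = "list@example.com"
    outer["Subject"] = "two independently signed bodies"
    outer.attach(signed_part("Original, quoted message.", b"...sig1..."))
    outer.attach(signed_part("The reply, signed separately.", b"...sig2..."))
    print(outer.as_string())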

R's,
John





Re: not really pgp signing in van

2013-09-09 Thread John Levine
>> Sounds like we're on our way to reinventing S/MIME.  Other than the
>> key signing and distribution (which I agree is a major can of worms)
>> it works remarkably well.
>
>Which sounds kind of like, "Other than that Mrs. Lincoln, how was the play?"

Yes, and no.  PGP and S/MIME each have their own key distribution
problems.  With PGP, it's easy to invent a key, and hard to get other
people's software to trust it.  With S/MIME it's harder to get a key,
but once you have one, the software is all happy.

The MUAs I use (Thunderbird, Alpine, Evolution) support S/MIME a lot
better than they support PGP.  There's typically a one key command or
a button to turn signing and encryption on and off, and they all
automagically import the certs from incoming mail.

R's,
John


Re: not really pgp signing in van

2013-09-09 Thread John Levine
>> Yes, they should have made that impossible.
>
>Oh my, I _love_ this!   This is actually the first non-covert use case I've 
>heard described,
>although I'm not convinced that PGP could actually do this without message 
>format tweaks.

Sounds like we're on our way to reinventing S/MIME.  Other than the
key signing and distribution (which I agree is a major can of worms)
it works remarkably well.

R's,
John





Re: What real users think [was: Re: pgp signing in van]

2013-09-09 Thread John R. Levine

To be clear, what I would like to see in an MUA that addresses the use case 
Brian described is that it is just a new mime encoding that allows a message to 
be pieced together from a collection of signed attachments.   So in this 
message, the mail would be encoded as two parts. The first would be the 
complete message you wrote, with its signature.   The second would be the text 
I have written here.   The quoted text above would be represented as a 
reference to the attached message.

This should be very easy to accomplish in the UI—the UI should look exactly 
like the current UI.   It's just a tweak to how copy, cut and paste work.

There's no reason to get rid of MIME—I think it's a pretty good solution.   I 
mentioned the other solutions not because I prefer them but because they exist 
and do demonstrate that replacements for IETF standards can and do catch on in 
the marketplace, and that we ought not to just be smug about how great SMTP, 
RFC822 and MIME are and pretend that we don't have competition.


S/MIME handles this case pretty well, but I've never seen anything other 
than a list manager such as Mailman wrap signed parts together.


Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly



Re: What real users think [was: Re: pgp signing in van]

2013-09-09 Thread John C Klensin


--On Tuesday, September 10, 2013 08:09 +1200 Brian E Carpenter
 wrote:

>...
> True story: Last Saturday evening I was sitting waiting for a
> piano recital to start, when I overheard the person sitting
> behind me (who I happen to know is a retired chemistry
> professor) say to his companion "Email is funny, you know -
> I've just discovered that when you forward or reply to a
> message, you can just change the other person's text by typing
> over it! You'd have thought they would make that impossible."

There is another interesting detail about this in addition to
the part of it addressed by the brothers Crocker.

When MIME was designed, there were a number of implicit
assumptions to the effect that, if an original message was
included in a reply or a message was forwarded, the original
would be a separate body part from the reply or forwarding
introduction.   Structurally, that arrangement not only would
have preserved per-body-part signatures but would have largely
avoided a number of annoyances that have caught up with us such
as an incoming message that uses different charset values than
the replying or forwarding user is set up to support.
Obviously, that would not help with replies interleaved with the
original text, but that is a somewhat different problem
(although it might take a bit of effort to explain the reasons
to your chemistry professor).  When things are interleaved,
preventing charset conflicts, modification of quoted text, and
other problems is pretty much impossible, at least, as Dave more
or less points out, if the composing MUA is under the control of
the user rather than being part of a centrally-controlled
environment that can determine what gets typed where.

It didn't work out that way.  Indeed, more than 20 years later,
forwarded messages and "reply with original included" ones are
the primary vestiges of the popular pre-MIME techniques for
marking out parts of a message.  Perhaps we should have
predicted that better, perhaps not.  But the reasons why "making
that impossible" is hard are not just security/signature or
legacy/installed-base issues.

best,
   john





Re: pgp signing in van

2013-09-09 Thread John Levine
>Why do you think that cryptographic doubt = legal doubt? I've heard
>that claim many times, but I've never heard an argument for it.

Having attempted to explain technology in court as an expert witness,
I find the assertion risible.

R's,
John


Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-08 Thread John C Klensin


--On Friday, September 06, 2013 17:11 +0100 Tony Finch
 wrote:

> John C Klensin  wrote:
>> 
>> Please correct me if I'm wrong, but it seems to me that
>> DANE-like approaches are significantly better than traditional
>> PKI ones only to the extent to which:
>...
> Yes, but there are some compensating pluses:

Please note that I didn't say "worse", only "not significantly
better".  

> You can get a meaningful improvement to your security by good
> choice of registrar (and registry if you have flexibility in
> your choice of name). Other weak registries and registrars
> don't reduce your DNSSEC security, whereas PKIX is only as
> secure as the weakest CA.

Yes and no.  Certainly I can improve my security as you note.  I
can also improve the security of a traditional certificate by
selecting from only those CAs who require a high degree of
assurance that I am who I say I am.  But, from the standpoint of
a casual user using readily-available and understandable tools
(see my recent note) and encountering a key or signature from
someone she doesn't know already, there is little or no way to
tell whether the owner of that key used a reliable registrar or
a sleazy one or, for the PKI case, a high-assurance and reliable
CA or one whose certification criterion is the applicant's
ability to pay.  There are still differences and I don't mean to
dismiss them.  I just don't think we should exaggerate their
significance.

And, yes, part of what I'm concerned about is the very ugly
problem of how, if I encounter an email address and key for
tonyfi...@email-expert.pro or, (slightly) worse, in one of the
thousand new TLDs that ICANN assures us will improve the quality
of their lives, I determine whether that is you, some other
Tony Finch who claims expertise in email, or Betty Attacker
Bloggs pretending to be one of you.  As Pete has suggested, one
way to do that is to set up an encrypted connection without
worrying much about authentication and then quiz each other
about things that Tony(2), Betty, or John(2) are unlikely to
know until we are confident enough for the purposes.  But,
otherwise...

By contrast, if I know a priori that the Tony Finch I'm
concerned about is the person who controls dotat.at and you know
that the John Klensin you are concerned about is the person who
controls jck.com, and both of us are using addresses in those
domains with which we have been familiar for years, then the
task is much easier with either a PKI or DANE -- and certainly
more convenient and reliable with the latter because we know
each other well enough, even if mostly virtually, to be
confident that the other is unlikely to be dealing with
registrars or registries who would deliberately enable domain or
key impersonation.  Nor would either of us be likely to be quiet
about such practices if they were discovered.

> An attacker can use a compromise of your DNS infrastructure to
> get a certificate from a conventional CA, just as much as they
> could compromise DNSSEC-based service authentication.

Exactly.  Again, my point in this note and the one I sent to the
list earlier today about the PGP-PKI relationship is that we
should understand and take advantage of the differences among
systems if and when we can, but that it is a bad idea to
exaggerate those advantages or differences.

john





Re: pgp signing in van

2013-09-08 Thread John C Klensin


--On Friday, September 06, 2013 19:50 -0800 Melinda Shore
 wrote:

> On 9/6/13 7:45 PM, Scott Kitterman wrote:
>> They have different problems, but are inherently less
>> reliable than web of  trust GPG signing.  It doesn't scale
>> well, but when done in a defined context  for defined
>> purposes it works quite well.  With external CAs you never
>> know  what you get.
> 
> Vast numbers of bits can be and have been spent on the problems
> with PKI and on vulnerabilities around CAs (and the trust
> model). I am not arguing that PKI is awesome.  What I *am*
> arguing is that the semantics of the trust assertions are
> pretty well-understood and agreed-upon, which is not the case
> with pgp.  When someone signs someone else's pgp key you
> really don't know why, what the relationship is, what they
> thought they were attesting to, etc.

I think you are both making more of a distinction than exists,
modulo the scaling problem with web of trust and something the
community has done to itself with CAs.

The web of trust scaling issue is well-known and has been
discussed repetitively.  

But the assumption about CAs has always been, more or less, that
they can all be trusted equally and that one that couldn't be
trusted would and could be held accountable.  Things just
haven't worked out that way with the net result that, as with
PGP, it is hard to deduce "why, what the relationship is, what
they thought they were attesting to", and so on.  While those
statements are in the certs or pointed to from them in many
cases, there is the immediate second-level problem of whether
those assertions can be trusted and what they mean.  For
example, if what a cert means is "passed some test for owning a
domain name", it and DANE are, as far as I can tell, identical
except for the details of the test ... and some are going to be
a lot better for some domains and registrars than others.
Assorted vendors have certainly made the situation worse by
incorporating CA root certificates in systems based on business
relationships (or worse) rather than on well-founded beliefs
about trust.

On the CA side, one of the things I think is needed is a rating
system (or collection of them on a "pick the rating service you
trust" basis) for CAs, with an obvious extension to PGP-ish key
signers.  In itself, that isn't a problem with which the IETF
can help.

Where I think the IETF and implementer communities have fallen
down is in not providing a framework that would both encourage
rating systems and tools and make them accessible to users.  In
our current environment, everything is binary in a world in
which issues like trust in a certifier is scaled and
multidimensional.   As Joe pointed out, we don't use even what
information is available in PGP levels of confidence and X.509
assertions about strength.  In the real world, we trust people
and institutions in different ways for different purposes --
I'll trust someone to work on my car, even the safety systems,
whom I wouldn't trust to do my banking... and I wouldn't want my
banker anywhere near my brakes.  In both cases, I'm probably
more interested in institutional roles and experience than I am
in whether a key (or signature on paper) binds to a hard
identity.  In some cases, binding a key to persistence is more
important than binding it to actual identity; in others, not.  I
trust my sister in most things, but wouldn't want her as a
certifier because I know she doesn't have sufficient clues about
managing keys.  And the amount of authentication of identity I
think I need differs with circumstances and uses too.  We
haven't designed the data structures and interfaces to make it
feasible for a casual user to incorporate judgments --her own or
those of someone she trusts -- to edit the CA lists that are
handed to her, or a PGP keyring she has constructed, and assign
conditions to them.  Nor have we specified the interface support
that would make it easy for a user to set up and get, e.g.,
warnings about low-quality certification (or keys linked to
domains or registrars that are known to be sloppy or worse) when
one is about to use them for some high-value purpose.  We have
web of trust and rating models (including PICS, which
illustrates some of the difficulties with these sorts of things)
for web pages and the like, but can't manage them for the
keys and certs that are arguably more important.

So, anyone ready to step up rather than just lamenting the state
of the world?

 best,
john








Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 10:43 -0400 Joe Abley
 wrote:

>> Can someone please tell me that BIND isn't being this stupid?
> 
> This thread has mainly been about privacy and confidentiality.
> There is nothing in DNSSEC that offers either of those,
> directly (although it's an enabler through approaches like
> DANE to provide a framework for secure distribution of
> certificates). If every zone was signed and if every response
> was validated, it would still be possible to tap queries and
> tell who was asking for what name, and what response was
> returned.

Please correct me if I'm wrong, but it seems to me that
DANE-like approaches are significantly better than traditional
PKI ones only to the extent to which:

- The entities needing or generating the certificates
are significantly more in control of the associated DNS
infrastructure than entities using conventional CAs are
in control of those CAs.

- For domains that are managed by registrars or other
third parties (I gather a very large fraction of them at
the second level), whether one believes those registrars
or other operators have significantly more integrity and
are harder to compromise than traditional third party CA
operators.

best,
   john




Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 07:38 -0700 Pete Resnick
 wrote:

> Actually, I think the latter is really what I'm suggesting.
> We've got do the encryption (for both the minimal protection
> from passive attacks as well as setting things up for doing
> good security later), but we've also got to design UIs that
> not only make it easier for users to deal with encrpytion, but
> change the way people think about it.
> 
> (Back when we were working on Eudora, we got user support
> complaints that "people can read my email without typing my
> password". What they in fact meant was that if you started the
> application, it would normally ask for your POP password in
>...

Indeed.  And I think that one of the more important things we
can do is to rethink UIs to give casual users more information
about what is going on and to enable them to take intelligent
action on decisions that should be under their control.  There
are good reasons why the IETF has generally stayed out of the UI
area but, for the security and privacy areas discussed in this
thread, there may be no practical way to design protocols that
solve real problems without starting from what information a UI
needs to inform the user and what actions the user should be
able to take and then working backwards.  As I think you know,
one of my personal peeves is the range of unsatisfactory
conditions --from an older version of certificate format or
minor error to a verified revoked certificate -- that can
produce a message that essentially says "continuing may cause
unspeakable evil to happen to you" with an "ok" button (and only
an "ok" button).  

Similarly, even if users can figure out which CAs to trust and
which ones not (another issue and one where protocol work to
standardize distribution of CA reputation information might be
appropriate), editing CA lists whose main admission qualification
today seems to be cosy relationships with vendors (and maybe the
US Govt) to remove untrusted ones and add trusted ones requires
rocket scientist-level skills.  If we were serious, it wouldn't
be that way.  

And the fact that those are 75% or more UI issues is probably no
longer an excuse.

john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 08:41 -0700 Pete Resnick
 wrote:

>...
> Absolutely. There is clearly a good motivation: A particular
> UI choice should not *constrain* a protocol, so it is
> essential that we make sure that the protocol is not
> *dependent* on the UI. But that doesn't mean that UI issues
> should not *inform* protocol design. If we design a protocol
> such that it makes assumptions about what the UI will be able
> to provide without verifying those assumptions are realistic,
> we're in serious trouble. I think we've done that quite a bit
> in the security/application protocol space.

Yes.  It also has another implication that goes to Dave's point
about how the IETF should interact with UI designers.   In my
youth I worked with some very good early generation HCI/ UI
design folks.  Their main and most consistent message was that,
from a UI functionality standpoint, the single most important
consideration for a protocol, API, or similar interface was to
be sure that one had done a thorough analysis of the possible
error and failure conditions and that sufficient information
about those conditions could get to the outside to permit the UI
to report things and take action in an appropriate way.  From
that point of view, any flavor of a "you lose" -> "ok" message,
including blue screens and "I got irritated and disconnected
you"  is a symptom of bad design and much more commonly bad
design in the protocols and interfaces than in the UI.  

Leaving the UI designs to the UI designers is fine but, if we
don't give them the tools and information they need, most of the
inevitable problems are ours.


> OK, one last nostalgic anecdote about Eudora before I go back
> to finishing my spfbis Last Call writeup:
>...
> Working for Steve was a hoot.

I can only imagine, but the story is not a great surprise.

   john





Re: Bruce Schneier's Proposal to dedicate November meeting to saving the Internet from the NSA

2013-09-06 Thread John C Klensin


--On Friday, September 06, 2013 06:20 -0700 Pete Resnick
 wrote:

> Actually, I disagree that this fallacy is at play here. I
> think we need to separate the concept of end-to-end encryption
> from authentication when it comes to UI transparency. We
> design UIs now where we get in the user's face about doing
> encryption if we cannot authenticate the other side and we
> need to get over that. In email, we insist that you
> authenticate the recipient's certificate before we allow you
> to install it and to start encrypting, and prefer to send
> things in the clear until that is done. That's silly and is
> based on the assumption that encryption isn't worth doing
> *until* we know it's going to be done completely safely. We
> need to separate the trust and guarantees of safeness (which
> require *later* out-of-band verification) from the whole
> endeavor of getting encryption used in the first place.

Pete,

At one level, I completely agree.  At another, it depends on the
threat model.  If the presumed attacker is skilled and has
access to packets in transit then it is necessary to assume that
safeguards against MITM attacks are well within that attacker's
resource set.  If those conditions are met, then encrypting on
the basis of a key or certificate that can't be authenticated
is delusional protection against that threat.  It may still be
good protection against more casual attacks, but we do the users
the same disservice by telling them that their transmissions are
secure under those circumstances that we do by telling them that
their data are secure when they see a little lock in their web
browsers.

Certainly "encrypt first, authenticate later" is reasonable if
one doesn't send anything sensitive until authentication has
been established, but it seems to me that would require a rather
significant redesign of how people do things, not just how
protocols work.

best,
   john



Re: Last Call: (Retirement of the "Internet Official Protocol Standards" Summary Document) to Best Current Practice

2013-09-05 Thread John C Klensin


--On Thursday, September 05, 2013 15:20 -0700 Pete Resnick
 wrote:

>> IESG minutes as the publication of record
>>
> 
> The only reason I went with the IESG minutes is because they
> do state the "pending" actions too, as well as the completed
> ones, which the IETF Announce list does not. For instance, the
> IESG minutes say things like:
>...
> The minutes also of course reflect all of the approvals. So
> they do seem to more completely replace what that paragraph as
> talking about. And we have archives of IESG minutes back to
> 1991; we've only got IETF Announce back to 2004.
> 
> I'm not personally committed to going one way or the other.
> The minutes just seemed to me the more complete record.

Pete, Scott,

The purpose of the "Official Protocol Status" list was, at least
IMO, much more to provide a status snapshot and index than to
announce what had been done.  I think the key question today is
not "where is it announced?" but "how do I find it?".  In that
regard, the minutes are a little worse than the announcement
list today, not because the announcement list contains as much
information, but because the S/N ratio is worse.

With the understanding that the Official Protocol Standards list
has not been issued/updated in _many_ years, wouldn't it make
sense to include a serious plan about information locations,
navigation, and access in this?  For example, if we are going to
rely on IETF minutes, shouldn't the Datatracker be able to
thread references to particular specifications through it?  The
tracker entries that it can access appear to be only a tiny
fraction of the information to which Pete's note refers.

   john



Re: PS Characterization Clarified

2013-09-02 Thread John C Klensin


--On Monday, 02 September, 2013 14:09 -0400 Scott O Bradner
 wrote:

>> There is at least one ongoing effort right now that has the
>> potential to reclassify a large set of Proposed Standard RFCs
>> that form the basis of widely used technology. These types of
>> efforts can have a relatively big effect on the standards
>> status of the most commonly used RFCs. Do we want to do more?
>> Can we do more?
> 
> seems like a quite bad idea (as Randy points out)
> 
> take extra effort and get some interoperability data

More than that.  Unless we want to deserve the credibility
problems we sometimes accuse others of having, nothing should be
a full standard, no matter how popular, unless it reflects good
engineering practice.  I think there is more flexibility for
Proposed Standards, especially if they come with commentary or
applicability statements, but I believe that, in general, the
community should consider "bad design" or "bad engineering
practice" to fall into the "known defect" category of RFC 2026.
If RFC 6410 requires, or even allows, that we promote things
merely because they are popular, then I suggest there is
something seriously wrong with it.

   john





Re: Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-09-02 Thread John Levine
>The engineering solution to this deployment problem is to generalize the
>problem and use a new record for that.

Either that or figure out how to make it easy enough to deploy new
RRTYPEs that people are willing to do so.

The type number is 16 bits, after all.  We're not in any danger of running out.
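
For what it's worth, client-side tooling can already handle arbitrary
type codes generically.  A minimal sketch, assuming the dnspython
library (2.x) and its support for the RFC 3597 "TYPEnnn" generic
syntax; the query name, and whether it publishes any type-99 data,
are placeholders for illustration only:

    import dns.resolver

    def query_by_type_code(name, code):
        # Look up `name` for the RRTYPE with the given 16-bit code,
        # using the generic "TYPEnnn" form so no type-specific
        # support is needed in the client.
        try:
            return dns.resolver.resolve(name, "TYPE%d" % code)
        except dns.resolver.NoAnswer:
            return None

    answers = query_by_type_code("example.com", 99)   # 99 = SPF
    for rr in (answers or []):
        print(rr.to_text())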

Regards,
John Levine, jo...@iecc.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. http://jl.ly


Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-09-01 Thread John C Klensin


--On Saturday, August 31, 2013 23:50 +0900 Masataka Ohta
 wrote:

> The draft does not assure that existing usages are compatible
> with each other.

It absolutely does not.  I actually expect it to help identify
some usages that are at least confusing and possibly
incompatible.

> Still, the draft may assure new usages compatible with each
> other.

That is the hope.

> However, people who want to have new (sub)types for the new
> usages should better simply request new RRTYPEs.

I agree completely.  But that has nothing to do with this draft:
the registry is simply addressed to uses that overload TXT, not
to arguing why they shouldn't (or why the use of label prefixes
or suffixes is sufficient to make protocol use of TXT reasonable).

> If we need subtypes because 16bit RRTYPE space is not enough
> (I don't think so), the issue should be addressed by itself
> by introducing a new RRTYPE (some considerations on subtype
> dependent caching may be helpful), not TXT, which can assure
> compatibilities between subtypes.

Again, I completely agree.  But it isn't an issue for this
proposed registry.

> For the existing usages, some informational RFC, describing
> compatibilities (or lack of them) between the existing usages,
> might help.

Yes, I think so.

thanks,
   john





Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-08-31 Thread John C Klensin


--On Saturday, August 31, 2013 02:52 -0700 manning bill
 wrote:

> given the nature of the TXT RR, in particular the RDATA field,
> I presume it is the path of prudence to set the barrier to
> registration in this new IANA registry to be -VERY- low.

That is indeed the intent.  If the document isn't clear enough
about that, text would be welcome.  I'm still searching for the
right words (and hoping that the discussion will interact in
both directions with the 5226bis effort
(draft-leiba-cotton-iana-5226bis)), but our thought is that the
"expert reviewer" will provide advice and education about the
desirability of good quality registrations backed up by good
quality and stable documents.  But, if the best we can get is
registrant contact info, name of a protocol, and a clue about
distinguishing information, then that is the best we can get.

> Or is the intent to create a "two" class system, registered
> and unregistered types?

In one sense, that result is inevitable because some of the
locally-developed and used stuff that lives in TXT records will
probably not be registered no matter what we do.  That is still
better than the current situation of a one-class system in which
nothing is registered.  But the intent is to get as much
registered as possible.

Again, if the I-D isn't clear, text would be welcome.

   john



Re: Last Call: (A Reputation Query Protocol) to Proposed Standard

2013-08-30 Thread John C Klensin


--On Friday, August 30, 2013 09:56 -0700 Bob Braden
 wrote:

> CR LF was first adopted for the Telnet NVT (Network Virtual
> Terminal). I think it was Jon
> Postel's choice, and no one disagreed.

A tad more complicated, IIR.  It turns out that, with some
systems interpreting LF as "same position next line" and some as
"first position, next line", some interpreting CR as "same
position, this line" and some as "first position next line", CR
LF was the only safe universal choice.  At least one of those
four interpretations was a clear violation of the early versions
of the ASCII standard but the relevant vendor didn't care or was
too ignorant to notice.  That particular bit of analysis was
known pre-ARPANET; I wouldn't be surprised to find it in some
earlier Teletype documentation.  

I have no idea who made the decision for Telnet and friends, but
I wouldn't be at all surprised if it were Jon.  The decision
was, however, pretty constrained.

Similarly, it was an important design constraint for FTP and
later for SMTP, WHOIS, Finger, and a bunch of other things that
they (the control connection, in FTP's case) be able to run over
Telnet connections (on a different port).  I don't know whether
that was cause or effect wrt the CRLF choices for those
protocols, but it probably figured in.  I've wondered whether
that was part of what drove the port model in preference to some
fancy service-selection handshaking at the beginning of the
connection, but I have no idea how that set of decisions was
made.

> Then when FTP was
> defined, it seemed most economical
> to use the same. In fact, doesn't the FTP spec explicitly say
> that the conventions on the control
> connection should be those of Telnet? 

Yep.  RFC 959, Page 34 (snicker) and RFC 1123 Section 4.1.2.10.
There were even some discussions about the interactions between
Telnet option negotiation and FTP (Section 4.1.2.12 of RFC 1123
was, I think, intended to definitively settle those).

> Later, when Jon defined
> SMTP, I am sure that
> Jon would not have dreamed of using different end-of-line
> conventions in different protocols.
> 
> I would hope that you would not dream of it, either.

Indeed.
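
The practical upshot for anyone writing to one of these Telnet-derived
control connections is simply to normalize whatever the local line
convention is to CR LF before the text hits the wire.  A minimal
sketch (the helper name is made up for illustration):

    def to_wire_line(line):
        # Strip any local line ending ("\n", "\r\n", or a bare "\r")
        # and terminate with CR LF, as these protocols require.
        return line.rstrip("\r\n").encode("ascii") + b"\r\n"

    assert to_wire_line("NOOP\n") == b"NOOP\r\n"
    assert to_wire_line("NOOP") == b"NOOP\r\n"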

   john



Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-08-30 Thread John C Klensin
Hi.  I'm going to comment very sparsely on responses to this
draft, especially those that slide off into issues that seem
basically irrelevant to the registry and the motivation for its
creation.   My primary reason is that I don't want to burden the
IETF list with a back-and-forth exchange, particularly about
religious matters.  The comments below are as much to clarify
that plan as they are to respond to the particular comments in
Phillip's note.

--On Friday, August 30, 2013 10:16 -0400 Phillip Hallam-Baker
 wrote:

> RFC 5507 does indeed say that but it is an IAB document, not
> an IETF consensus document and it is wrong.

Yes, it is an IAB document.  And, yes, one of the coauthors of
this draft is a coauthor of 5507 and presumably doesn't share
your opinion of its wrongness.

But that is irrelevant to this particular document.  5507 is
used to set context for discussing the need for the registry and
to establish some terminology.  If the Hallam-Baker Theory of
DNS Extensions had been published in the RFC Series, we might
have drawn on it for context and terminology instead.  Or not,
but we didn't have the choice.   Your description of the motives
of the authors of 5507 is an even better example.  Perhaps you
are correct.  Perhaps you aren't.  But whether you are correct
or not makes absolutely no difference to the actionable content
of this draft.  

The "more prefixes" versus "more RRTYPES" versus subtypes versus
pushing some of these ideas into a different CLASS versus
whatever else one can think of are also very interesting... and
have nothing to do with whether this registry should be created
or what belongs in it.

Since you obviously don't like 5507, I would suggest that you
either prepare a constructive critique and see if you can get it
published or, even better, prepare an alternate description of
how things should be handled and see if you can get consensus
for it.   This document is not a useful place to attack it
because its main conclusions just don't make any difference to
it.  If you have contextual or introductory text that you prefer
wrt justifying or setting up the registry, by all means post it
to the list.  If people prefer it to the 5507 text, there is no
reason why it shouldn't go into a future version of the draft.


> The consequence of this is that we still don't seem to have a
> registry for DNS prefixes, or at least not in the place I
> expect it which is
> 
> Domain Name System (DNS) Parameters
>...

Yes.  It is too hard to find.  That also has nothing to do with
this particular registry.  It is connected to why the I-D
contains a temporary and informative appendix about DNS-related
registries and where one might expect to find them.  Our hope is
that the appendix will motivate others (or IANA) to do some work
to organize things differently or otherwise make them easier to
find.  But whether action is taken on the appendix or not has
nothing to do with whether this registry is created.

> The IANA should be tracking SRV prefix allocations and DNSEXT
> seems to have discussed numerous proposals. I have written
> some myself. But I can't find evidence of one and we certainly
> have not updated SRV etc. to state that the registry should be
> used.

The IANA is extremely constrained about what it can do without
some direction from the IETF.  The appendix is there to provide
a preliminary pointer to some areas that might need work (at
least as much or more by the IETF as by IANA).  If you have
specific things that belong on the list (the working version of
what will become -01 already is corrected to point to the SRV
registry), I'd be happy to add them, with one condition: if we
end up in a discussion of the details of the appendix rather
than the particular proposed registry, the appendix will
disappear.  It is a forward pointer to work that probably should
be done, not the work itself.

> Fixing TXT is optional, fixing the use of prefixes and having
> a proper registry that is first come first served is
> essential. Right now we have thousands of undocumented ad-hoc
> definitions.

Let me restate that.  Some of us believe that it is time to "fix
TXT" or, more specifically, to create a registry that can
accommodate identification of what is being done, whether one
approves of it or not.  If other things need fixing too --and I
agree that at least some of them do-- please go to it.  If the
appendix is useful in that regard, great.  If not, I'm not
particularly attached to it.
 
Neither the structure and organization of IANA registries
generally nor the future of service discovery have anything to
do with this draft.  If you want to discuss them, please start
another thread.

  thanks,
john




Re: An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-08-30 Thread John C Klensin


--On Friday, August 30, 2013 11:48 -0400 Phillip Hallam-Baker
 wrote:

>> I believe that draft was superseded by RFC6335 and all
>> service names (SRV prefix labels) are now recorded at
>> <http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml>
>> - indeed
>> several of those come from RFCs I have written that add new
>> SRV names.
> 
> 
> Ah, its there but not in the DNS area where I was looking.

And that is exactly the reason why that temporary appendix calls
for some rethinking and reorganization of how the registries are
organized so as to make that, and similar registries, easier to
find.  

While I continue to believe that doing the work would be a good
exercise for a relative newcomer, if one of you wants to go at
it, please do so with my blessings.

   john






An IANA Registry for DNS TXT RDATA (I-D Action: draft-klensin-iana-txt-rr-registry-00.txt)

2013-08-30 Thread John C Klensin
Hi.

Inspired by part of the SPF discussion but separate from it,
Patrik, Andrew, and I discovered a shortage of registries for
assorted DNS RDATA elements.  We have posted a draft to
establish one for TXT RDATA.  If this requires significant
discussion, we seek guidance from relevant ADs as to where they
would like that discussion to occur.

Three notes:

* As the draft indicates, while RFC 5507 and other documents
explain why subtypes are usually a bad idea, the registry
definition tries to be fairly neutral on the subject -- the idea
is to identify and register what is being done, not to pass
judgment. 

* While the use of special labels (in the language of 5507,
prefixes and suffixes) mitigates many of the issues with
specialized use of RDATA fields, it does not eliminate the
desirability of a registry (especially for debugging and
analysis purposes).

* While examining the DNS-related registries that exist today,
we discovered that some other registries seemed to be missing
and that the organization of the registries seemed to be
sub-optimal.  We considered attempting a "fix everything" I-D,
but concluded that the TXT RDATA registry was the most important
need and that it would be unwise to get its establishment bogged
down with other issues.  The I-D now contains a temporary
appendix that outlines the other issues we identified.  IMO,
thinking through the issues in that appendix, generating the
relevant I-D(s), and moving them through the system would be a
good exercise for someone who has little experience in the IETF
and who is interested in IANA registries and/or DNS details.  I
am unlikely to find time to do the work myself but would be
happy to work with a volunteer on pulling things together.

best,
  john


-- Forwarded Message --
Date: Friday, August 30, 2013 05:52 -0700
From: internet-dra...@ietf.org
To: i-d-annou...@ietf.org
Subject: I-D Action: draft-klensin-iana-txt-rr-registry-00.txt


A New Internet-Draft is available from the on-line
Internet-Drafts directories.


Title   : An IANA Registry for Protocol Uses of Data
  with the DNS TXT RRTYPE
Author(s)   : John C Klensin
  Andrew Sullivan
  Patrik Faltstrom
Filename: draft-klensin-iana-txt-rr-registry-00.txt
Pages   : 8
Date: 2013-08-30

Abstract:
   Some protocols use the RDATA field of the DNS TXT RRTYPE for
   holding data to be parsed, rather than for unstructured free
   text.  This document specifies the creation of an IANA
   registry for protocol-specific structured data to minimize
   the risk of conflicting or inconsistent uses of that RRTYPE
   and data field.


The IETF datatracker status page for this draft is:
https://datatracker.ietf.org/doc/draft-klensin-iana-txt-rr-registry

[...]


Re: Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-29 Thread John C Klensin


--On Thursday, August 29, 2013 12:28 -0700 Dave Crocker
 wrote:

> On 8/29/2013 9:31 AM, John C Klensin wrote:
>> I may be violating my promise to myself to stay out of
>> SPF-specific issues,
> 
> 
> Probably not, since your note has little to do with the
> realities of the SPFbis draft, which is a chartered working
> group product.  You might want to review its charter:
> 
>   http://datatracker.ietf.org/wg/spfbis/charter/
 
> Note the specified goal of standards track and the /very/
> severe constraints on work to be done.  Please remember that
> this is a charter that was approved by the IESG.  The working
> group produced what it was chartered to produce, for the
> purpose that was chartered.

I have reviewed the charter, Dave.  The reasons I've wanted to
stay out of this discussion made me afraid to make a posting
without doing so.   But the last I checked, WG charters are
approved by the IESG after reviewing whatever comments they
decide to solicit.  They are not IETF Consensus documents.  Even
if this one was, the WG co-chair and document shepherd have made
it quite clear that the WG carefully considered the design issue
and alternatives at hand.  I applaud that but, unless you are
going to argue that the charter somehow allows the WG to
consider some issues that cannot be reviewed on IETF Last Call,
either the design issue is legitimate or the WG violated its
charter.  I, at least, can't read the charter that way.

> More broadly, you (and others) might want to review that
> actual criteria the IETF has specified for Proposed in
> RFC2026.  Most of us like to cite all manner of personal
> criteria we consider important.  Though appealing, none of
> them is assigned formal status by the IETF, with respect to
> the Proposed Standards label; I believe in fact that there is
> nothing that we can point to, for such other criteria,
> represents IETF consensus for them.  The claim that we can't
> really document our criteria mostly means that we think it's
> ok to be subjective and whimsical.

The statement to which I objected was one in which you claimed
(at least as I understood it) that it was inappropriate to raise
a design consideration because the protocol was already widely
deployed.  Your paragraph above makes an entirely different
argument.  As I understand it, your argument above is that it is
_never_ appropriate to object during IETF Last Call on the basis
of design considerations (whether it is desirable to evaluate
design considerations in a WG or not).  I believe that design
issues and architectural considerations can sometimes be
legitimate examples of "known technical defects".  If they were
not, then I don't know why the community is willing to spend
time on such documents (or even on having an IAB).

Again, I think it is perfectly reasonable to argue that a
particular design or architectural consideration should not be
applied to a particular specification.  My problem arises only
when it is claimed that such considerations or discussions are a
priori inappropriate.
 
> Also for the broader topic, you also might want to reevaluate
> much of what your note does say, in light of the realities of
> Individual Submission (on the IETF track) which essentially
> never conforms to the criteria and concerns you seem to be
> asserting.

If that were the case, either you are massively misunderstanding
what I am asserting or I don't see your point.   I believe that
my prior note, and this one, assert only one thing, which is
that it is inappropriate to bar any discussion --especially
architectural or design considerations-- from IETF Last Call
unless it addresses a principle that has already been
established for the particular protocol by IETF Consensus.  I
remain completely comfortable, modulo the various "rude
language" topics, with a discussion of why some architectural
principle is irrelevant to a particular specification or even
that trying to apply that principle would be stupid.  But a
discussion along those lines is still a discussion, not an
attempt to prevent a discussion.

And, yes, I believe that Individual Submissions should generally
be subject to a much higher degree of scrutiny on IETF Last Call
than WG documents.  I also believe that, if there appears to be
no community consensus one way or the other, that the IESG
should generally defer to the WG on WG documents but default to
non-approval of Individual Submissions.  But, unless I'm
completely misunderstanding the point you are trying to make, I
don't see what that has to do with this topic.

Dave, we have had these sorts of discussions before.  If there is
a common pattern to them, it is that neither of us is likely to
convince the other and that both of us soon get to the point of
either muttering "he just doesn't get it" (or worse) into our
beards or 

Re: Last Call: (Early IANA Allocation of Standards Track Code Points) to Best Current Practice

2013-08-29 Thread John C Klensin


--On Thursday, August 29, 2013 12:43 -0400 Barry Leiba
 wrote:

>> In Section 2:
>> 
>>   'a.  The code points must be from a space designated as
>>   "Specification Required" (where an RFC will be used as the
>>stable reference), "RFC Required", "IETF Review", or
>>"Standards Action".'
>> 
>> I suggest not having the comment (where) and leaving it to
>> RFC 5226 to define "Specification Required".
> 
> Yes, except that's not what this means.
> 
> I tripped over the same text, and I suggest rephrasing it this
> way:
> 
> NEW
>The code points must be from a space designated as
> "SpecificationRequired" (in cases where an RFC will be
> used as the stable reference),"RFC Required", "IETF
> Review", or "Standards Action".

Barry, that leaves me even more confused because it seems to
essentially promote "Specification Required" into "RFC Required"
by allowing only those specifications published as RFCs.  

Perhaps, given that this is about Standards Track code points,
that is just what is wanted.  If so the intent would be a lot
more clear if the text went a step further and said:

NEWER:

The code points must normally be from a space designated
as "RFC Required", "IETF Review", or "Standards Action".
In addition, code points from the "Specification
Required" category are allowed if the specification will be
published as an RFC.

There is still a small procedural problem here, which is that
IANA is asking that someone guarantee RFC publication of a
document (or its successor) that may not be complete.  There is
no way to make that guarantee.  In particular, the guarantee of
Section 2 (c) cannot be made without constraining the actions that
an IETF LC can reasonably consider.  As I have argued earlier today
in another
context, language that suggests really strong justification for
the tradeoff may be acceptable, but a guarantee to IANA by the
WG Chairs and relevant ADs, or even the full IESG, that
constrains a Last Call is not.   Section 3.2 begins to examine
that issue, but probably doesn't go quite far enough, especially
in the light of the "four conditions" of Section 2.

It would probably be appropriate to identify those conditions as
part of good-faith beliefs.

It might even be reasonable to require at least part of the
"what if things change" analysis that Section 3.2 calls for
after the decision is made to be included in the request for
early allocation.  Requiring that analysis would also provide a
small additional safeguard against the scenarios discussed in
the Security Considerations section.

Incidentally, while I'm nit-picking about wording, the last
sentence of Section 3.3 has an unfortunate dependency on a form
of the verb "to expire".  Language more similar to that used for
RFCs might be more appropriate, e.g., that the beginning of IESG
Review (or issuance of an IETF Last Call) suspends the
expiration date until either an RFC is published or the IESG or
authors withdraw the document. 

   best,
   john



Re: Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-29 Thread John C Klensin


--On Wednesday, August 28, 2013 07:21 -0700 Dave Crocker
 wrote:

>> RFC 5507 primarily raises three concerns about TXT records:
> 
> RFC 5507 is irrelevant to consideration of the SPFbis draft.
> 
> Really.
> 
> RFC 5507 concerns approaches to design.  However the SPFbis
> draft is not designing a new capability.  It is documenting a
> mechanism that has existed for quite a long time, is very
> widely deployed, and has become an essential part of Internet
> Mail's operational infrastructure that works to counter abuse.
>...

Dave.

I may be violating my promise to myself to stay out of
SPF-specific issues, but this does not seem to me to be an
SPF-specific issue.  I suggest to you that the notions of IETF
change control and consensus of the IETF community are very
important to the integrity of how the IETF does things.  The
question of where and how the IETF adds value comes in there
too.  If some group --whether an IETF WG or some external
committee or body-- comes to the IETF and says "we have this
protocol, it is well-tested and well-deployed, and we think  the
community would benefit from the IETF publishing its
description" that is great, we publish as Informational RFC (or
the ISE does), and everyone is happy.   If that group can get
IETF community consensus for the idea that the spec should get a
gold star, someone writes an Applicability Statement that points
to the other document and says "Recommended", we push any
quibbles about downrefs out of the way, and we move on.

However, it seems to me that, for anything that is proposed to
be a normal standards track document, the community necessarily
ought to be able to take at least one whack at it on IETF LC.
That "one whack" principle suggests that one cannot say "this
was developed and deployed elsewhere and is being published as
Experimental" (which is what, IIR, was one thing that happened
in the discussion of 4408) and then say "now the design quality
of SPF is not a relevant consideration because it has been
considered elsewhere and widely deployed".  If the IETF doesn't
get a chance to evaluate design quality and even, if
appropriate, to consider the tradeoffs between letting a
possibly-odious specification be standardized and causing a fork
between deployed-SPF and IETF-SPF, then the IETF's statements
about what its standards mean become meaningless (at least in
this particular type of case).

Now I think it is perfectly reasonable to say, as you nearly did
later in your note, that SPF-as-documented-in-4408bis is
sufficiently deployed and would be sufficiently hard to change
that the community should swallow its design preferences and
standardize the thing.  One can debate that position, but it is
at least a reasonable position to take.Modulo some quibbles,
it probably the position I'd take at this point if I were
willing to take a position, but it is different from saying
"can't discuss the design choices".

Things would also be very different if the present question
involved updating or replacing an existing Proposed Standard.
If design decisions were made in that earlier version (and that
went through IETF LC and got consensus), I think it would be
perfectly reasonable to say "the IETF community looked at that
before and it is now too late".  You've done that before, I've
done it before, and I don't think anyone who isn't prepared to
explain why, substantively and in terms of deployment, it isn't
too late should be able to object.  

But, in the absence of demonstrated and documented IETF
consensus --independent of WG consensus, implementation
consensus, deployment consensus, silent majority consensus, or
any other type of claim about broader community consensus-- I
don't think one can exclude a discussion of a specification's
relationship to various design considerations, if only because
"that may be deployed but the IETF should not endorse it in that
form by standardizing it" or even "if the community that is
advocating this won't allow design issues to be discussed, then
there is no IETF value-added and the IETF should decline to
standardize on that basis" have got to be possible IETF
community responses.

> To consider RFC 5507 with respect to SPFbis is to treat the
> current draft as a matter of new work, which it isn't.

No, it is to treat the current draft as a matter of work that
the IETF is being asked to standardize for the first time...
which, as far as I can tell, it is.

I think those distinctions about standardization (including the
value-added and change control ones) and what can reasonably be
raised on IETF LC are important to the IETF, even for those who
agree with you (entirely or in part) about what should happen
with this particular specification at this particular point.

YMMD.

best,
   john



Re: Rude responses (sergeant-at-arms?)

2013-08-27 Thread John Leslie
Ted Lemon  wrote:
> 
> I think it should be fairly obvious even to one not practiced in the art
> that a lot of the postings to the ietf mailing list recently have been
> simple repeats of points previously made, with no additional substance,

   +1

   Alas, that statement applies to both posts which raise issues and
posts which refute issues.

> which, well intentioned or not, purely have the effect of making it
> harder to evaluate consensus.

   I feel sorry for Ted, who _does_ have to evaluate consensus here.

   For better or worse, current RFCs in standards track have boilerplate
saying
" 
" This document is a product of the Internet Engineering Task Force
" (IETF).  It represents the consensus of the IETF community...

   Unless and until this boilerplate changes, IESG members have an
obligation to try to decide whether that statement is true.

   I'm _very_ glad I don't have that obligation!

--
John Leslie 


Overloaded TXT harmful (was" Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard)

2013-08-26 Thread John C Klensin


--On Monday, August 26, 2013 10:49 -0400 John R Levine
 wrote:

> Sorry if that last one came across as dismissive.
> 
>> Until such time, I'd personally prefer to see some explicit
>> notion that the odd history of the SPF TXT record should not
>> be seen as a precedent and best practice, rather than hope
>> that this is implicit.
> 
> I'd have thought that the debate here and elsewhere already
> documented that.  Since it's not specific to SPF, perhaps we
> could do a draft on "overloaded TXT considered harmful" to get
> it into the RFC record.

With the help of a few others, I've got an I-D in the pipe whose
function is to create an IANA registry of structured protocol
uses for TXT RR data and how to recognize them.  I hope it will
be posted later this week.  Its purpose is to lower the odds of
"overloaded" sliding into "different uses for forms that are not
easily distinguished".  Other than inspiration, its only
relationship to the current SPF discussion is that some
SPF-related information is a candidate for registration (whether
as an active use or as a deprecated one).

It already contains some text that warns that overloading TXT is
a bad idea but that, because it happens and has happened,
identifying those uses is appropriate.  Once it is posted, I/we
would appreciate any discussion that would lead to consensus
about just how strong that warning should be and how it should
be stated.
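
To make "how to recognize them" concrete, here is a minimal sketch of
the kind of demultiplexing the registry entries would need to describe.
The tags below are just a few well-known ones (SPF, DKIM, DMARC) chosen
for illustration; they are not the contents of the proposed registry:

    KNOWN_TAGS = [
        ("v=spf1",   "SPF"),
        ("v=DKIM1",  "DKIM key record"),
        ("v=DMARC1", "DMARC policy record"),
    ]

    def classify_txt(rdata):
        # Very rough demux of one TXT string by its leading version tag.
        for tag, label in KNOWN_TAGS:
            if rdata == tag or rdata.startswith(tag + " ") \
                    or rdata.startswith(tag + ";"):
                return label
        return "unregistered / free text"

    print(classify_txt("v=spf1 ip4:192.0.2.0/24 -all"))  # SPF
    print(classify_txt("some local inventory note"))     # unregistered / free text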

best,
   john







Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-26 Thread John R Levine

Sorry if that last one came across as dismissive.


Until such time, I'd personally prefer to see some explicit notion that
the odd history of the SPF TXT record should not be seen as a precedent
and best practice, rather than hope that this is implicit.


I'd have thought that the debate here and elsewhere already documented 
that.  Since it's not specific to SPF, perhaps we could do a draft on 
"overloaded TXT considered harmful" to get it into the RFC record.


Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
"I dropped the toothpaste", said Tom, crestfallenly.



Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-26 Thread John R Levine

prevented, not solved. I would like to prevent someone from having to
submit a draft specifying that in the case of TXT, the (name, class,
type)-tuple should be extended with the first X octets from the RDATA
fields, somewhere in the future, because client-side demuxing is getting
too buggy and it seems like a good idea to select specific records in
the DNS itself.


Could you point to anyone, anywhere, who has ever said that the odd 
history of the SPF TXT record means that it is perfectly fine to do

something similar in the future?

On the other hand, please look at all of the stuff that people outside 
of the IETF do with apex TXT records, and try and say with a straight face 
that SPF is as much as 1% of the multiplexing problem.


Regards,
John Levine, jo...@taugh.com, Taughannock Networks, Trumansburg NY
"I dropped the toothpaste", said Tom, crestfallenly.


Re: IETF 88 - Registration Now Open!

2013-08-23 Thread John Levine
In article  you write:
>and the hotel is fully booked...

Not if you use the link on the meeting hotel page.

http://www.ietf.org/meeting/88/hotel.html

R's,
John


Re: Fwd: [dnsext] SPF isn't going to change, was Deprecating SPF

2013-08-23 Thread John Levine
>>> Nobody has argued that SPF usage is zero, and the reasons for
>>> deprecating SPF have been described repeatedly here and on the ietf
>>> list, so this exercise seems fairly pointless.
>> 
>>  the reasons for not deprecating SPF have been described here
>>  and on the ietf list repeatedly ... yet there has been little
>>  concrete data regarding deployment uptake.

Sigh.  We have RFC 6686.  Since this is clearly an issue you consider
to be of vital importance, it is baffling that (as far as I can tell)
you did not contribute to or even comment on it when it was being
written and published.

Those of us in the mail community have a lot of anecdotal evidence,
too.  Most notably, none of the large providers that dominate the mail
world publish or check type 99, and the one that used to check type 99
(Yahoo) doesn't any more.  You don't have to like it, but it's silly
to deny it.

In any event, it's purely a strawman that "nobody" checks type 99.  A
few people do, the WG knows that, and we decided for well documented
reasons to deprecate it anyway.

R's,
John


Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-23 Thread John Levine
>> SPF is ten years old now.  It would be helpful if you could give us a
>> list of other protocols that have had a similar issue with a TXT
>> record at the apex during the past decade.
>
>I don't know of any (at least ones that are used in the global dns
>namespace), and I would like to still not know of any in 2033.
>
>SPF may be a lost cause, let's try and make that the only one.

Since we agree that the issue you're worried about has not arisen even
once in the past decade, could you clarify what problem needs to be
solved here?

R's,
John


Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-22 Thread John Levine
In article <5215cd8d.3080...@sidn.nl> you write:
>So what makes you think the above 4 points will not be a problem for the
>next protocol that comes along and needs (apex) RR data? And the one
>after that?

SPF is ten years old now.  It would be helpful if you could give us a
list of other protocols that have had a similar issue with a TXT
record at the apex during the past decade.

R's,
John


Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-21 Thread John Leslie
NB: I have read the rest of the thread; but this is what deserves a reply:

Dave Crocker  wrote:
> On 8/21/2013 11:58 AM, Pete Resnick wrote:
> 
>> AD hat squarely on my head.

   (There may have been a miscommunication here about which particular
AD function Pete was speaking in; to me, at least, it becomes clear
in context.)

>> On 8/21/13 1:29 PM, Dave Crocker wrote:
>>
>>> Oh.  Now I understand.
>>>
>>> You are trying to impose new requirements on the original work, many
>>> years after the IETF approved it.
>>>
>>> Thanks.  Very helpful.
>>
>> That's not an appropriate response.

   Dave has every right to disagree on that; but I quite agree with
Pete. It is decidedly not helpful, not productive, and tends towards
escalating a discussion which has no need of escalation.

>> It is certainly not helpful to me as the consensus caller.

   Dave has no right to disagree with this. We pay Pete the big bucks
to call consensus on difficult issues like this. We need to understand
it will be hard sometimes.

   I'm sure Dave has read Pete's draft on the meaning of consensus.
I'm less sure he remembered it as he responded here.

   If this is the sort of response given to somewhat-valid questions
raised about the draft being proposed, Pete will eventually have to
say there _is_ no consensus. :^(

>> And it is rude.

   Pete's opinion. (I happen to share it.)

   Consensus process works _much_ better if we respect the opinions
of others -- even when we "know" they're wrong.

> Since you've made this a formal process point,

   Pete has _not_ done this.

> I'll ask you to substantiate it carefully and also formally...

   I see no reason Pete has any obligation to do so. If he chooses
to, I ask him to not do it on this list. ("Please don't feed the
troll" comes to mind.)

> A bit of edge is warranted for such wasteful, distracting and 
> destabilizing consumption of IETF resources...

   Dave's opinion. (I happen to not share it.)

   Consensus process _also_ works better if we respect Dave's
opinion here.

   I suggest we all remember that we don't have to change others'
opinions here (were such a thing possible). We have only to bring
them to the point where they agree they can live with the result.

--
John Leslie 


Re: [spfbis] there is no transitiion, was Last Call:

2013-08-21 Thread John Levine
>Actually, I just checked.   Right now, none of them seem to publish SPF RRtype 
>records.
>Yahoo doesn't even publish a TXT record containing SPF information.   An 
>argument could
>be made that if we really wanted to push the adoption of SPF RRtypes, getting 
>Google,
>Yahoo and Hotmail to publish SPF RRtype records would actually make it 
>worthwhile to
>query SPF first, because most queries probably go to those domains.

This would require some reason why it is worth them spending time and
money to do something that has no operational benefit whatsoever.

If they start publishing type 99, something will break, because when
you change something in large systems, something always breaks.  Some
mail systems somewhere with bugs in type 99 handling that they never
noticed will start making mail fail.  For doing that, will anyone's
mail work better?  No.  Will their DNS work better?  No.

As I have mentioned a couple of times already, even though Yahoo
doesn't publish SPF (I believe due to political issues related to the
history of Domainkeys and DKIM), they do check SPF.  They used to
check both TXT and type 99, and stopped checking type 99.  What
argument is there to spend money to revisit and reverse that
decision?

Arguments about DNS purity, and hypothetical arguments about other TXT
records that will never exist are unlikely to be persuasive.

R's,
John


Re: Call for Review of draft-rfced-rfcxx00-retired, "List of Internet Official Protocol Standards: Replaced by an Online Database"

2013-08-20 Thread John C Klensin


--On Tuesday, August 20, 2013 14:01 -0500 Pete Resnick
 wrote:

> On 8/15/13 2:06 PM, SM wrote:
>> At 11:48 14-08-2013, IAB Chair wrote:
>>> This is a call for review of "List of Internet Official
>>> Protocol  Standards: Replaced by an Online Database" prior
>>> to potential  approval as an IAB stream RFC.
>> 
>> My guess is that draft-rfced-rfcxx00-retired cannot update
>> RFC 2026.   Does the IAB have any objection if I do something
>> about that? [...]
>> The document argues that STD 1 is historic as there is an
>> online list now.
> 
> The IESG and the IAB had an email exchange about these two
> points. Moving a document from Standard to Historic is really
> an IETF thing to do. And it would be quite simple for the IETF
> to say, "We are no longer asking for the 'Official Protocol
> Standards' RFC to be maintained" by updating (well,
> effectively removing) the one paragraph in 2026 that asks for
> it, and requesting the move from Standard to Historic. So I
> prepared a *very* short document to do that:
> 
> http://datatracker.ietf.org/doc/draft-resnick-retire-std1/

FWIW, I've reviewed your draft and have three comments:

(1) You are to be complimented on its length and complexity.

(2) I agree that the core issue is an IETF, and IETF Stream,
issue, not an RFC Editor and/or IAB one.

(3) I far prefer this approach to the more complex and
convoluted RFC Editor draft.   If we really need to do something
formally here (about which I still have some small doubts), then
let's make it short, focused, and to the point.  Your draft
appears to accomplish those goals admirably.

   john



Re: [spfbis] prefixed names, was Last Call:

2013-08-20 Thread John Levine

>The two following MIGHT NOT be in the same zone:
>
>foo.example. IN X RDATAX
>_bar.foo.example. IN TXT RDATAY

Since prefixed names have never been used for anything other than
providing information about the unprefixed name, what conceivable
operational reason could there be to put a zone cut at the prefix?

This impresses me as one of those problems where the solution is
"don't do that."

R's,
John


Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-19 Thread John Levine
>AFAICT, no one is arguing that overloading TXT in the
>way recommended by this draft is a good idea, rather the best arguments appear 
>to be that it is a pragmatic
>"least bad" solution to the fact that (a) people often implement (poorly) the 
>very least they can get away
>with and (b) it can take a very long time to fix mistakes on the Internet. 

Neither of those are the reason the WG dropped type 99 records.  The
actual reason has been discussed at length, and was most recently
described earlier today in this thread.

Once again, I really don't understand what the point is here.

R's,
John


Re: Academic and open source rate (was: Charging remote participants)

2013-08-19 Thread John C Klensin


--On Monday, August 19, 2013 12:49 -0700 SM 
wrote:

>...
>> First, I note that, in some organizations (including some
>> large ones), someone might be working on an open source
>> project one month and a proprietary one the next, or maybe
>> both
>> concurrently.  Would it be appropriate for such a person (or
>> the company's CFO) to claim the lower rate, thereby expecting
>> those who pay full rate to subsidize them?  Or would their
>...

> The above reminds me of the Double Irish with a Dutch
> sandwich. If I was an employee of a company I would pay the
> regular fee.  If I am sponsored by an open source project and
> my Internet-Draft will have that as my affiliation I would
> claim the lower rate.

Without understanding your analogy (perhaps a diversity
problem?), if you are trying to make a distinction between
"employee of a company" and "sponsored by an open source
project", that distinction just does not hold up.  I'm
particular, some of the most important reference implementations
of Internet protocols -- open source, freely available and
usable, well-documented, openly tested, etc.-- have come out of
"companies", even for-profit companies.

If the distinction you are really trying to draw has to do with
poverty or the lack thereof, assuming that, if a large company
imposes severe travel restrictions, its employees should pay
full fare if they manage to get approval, then you are back to
Hadriel's suggestion (which more or less requires that someone
self-identify as "poor") or mine (which involves individual
self-assessment of ability to pay without having to identify the
reasons or circumstances).
 
>...
>> Does it count if the open source software is basically
>> irrelevant to the work of the IETF?  Written in, e.g., HTML5?
>> Do reference implementations of IETF protocols count more (if
>> I'm going to be expected to subsidize someone else's
>> attendance at the IETF, I think they should).
> 
> This would require setting a demarcation line.  That isn't
> always a clear line.

What I'm trying to suggest is that the line will almost always
be unclear and will require case by case interpretation by
someone other than the would-be participant.  I continue to find
any peer evaluation model troubling, especially as long as the
people and bodies who are likely to make the evaluations are
heavily slanted toward a narrow range of participants (and that
will be the case as long as those leadership or evaluation roles
require significant time over long periods).

> A subsidy is a grant or other financial assistance given by
> one party for the support or development of another.  If the
> lower rate is above meeting costs it is not a subsidy.

I note that you used that term in a later message.  More
important, I believe the IAOC has repeatedly assured us that, at
least over a reasonable span of meetings, they never seek to
make a profit on registration fees.  Indeed, I suspect that,
with reasonable accounting assumptions, meetings are always a
net money-loser although not my much and more than others.  Any
decision that some people are going to pay less than others
(including the reduced fee arrangements we already have) is a
decision that some people and groups are going to bear a higher
share of the costs than others.  And that is a subsidy, even by
your definition above.

best,
   john




Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-19 Thread John Levine
>There is nothing syntactially worng with those entries. I congratulate
>people advocating SPF in TXT records while also writing parsers.

None of your TXT records are SPF records because they don't start with
the required version tag.  You have two type 99 records that start
with the version tag, which is invalid under section 3.1.2 of RFC 4408
and the similar section 3.2 of 4408bis.  
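
For reference, the selection rule a verifier applies to the TXT RRset
is just this (a minimal sketch; per RFC 4408 and 4408bis a string is
an SPF record only if it is exactly "v=spf1" or begins with "v=spf1"
followed by a space):

    def select_spf_records(txt_strings):
        # Keep only strings carrying the SPF version tag; everything
        # else is ignored by SPF processing.
        return [s for s in txt_strings
                if s == "v=spf1" or s.startswith("v=spf1 ")]

    print(select_spf_records([
        "MN-1334-RIPE",
        "v=spf0x44 be allowed to contaminate the DNS.",
    ]))   # [] -- neither string is an SPF record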

So you're publishing no SPF information at all.  I gather that you've
tried the popular SPF implementations like libspf2 and verified that
they correctly report that they found nothing.

I really don't understand what point you're making here.

R's,
John



Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-19 Thread John R Levine

* The charter disallows major protocol changes -- removing the SPF RR type
is a direct charter violation; since SPF is being used on the Internet. ...


The SPF working group discussed this issue at painful, extensive length.

As you saw when you read the WG archives, there is a significant interop 
bug in rfc 4408 in the handling of SPF and TXT records, which (again after 
painful and extensive discussion) we decided the least bad fix was to get 
rid of SPF records.  I don't see anything in your note about how else you 
think we should address the interop bug.


In your case it doesn't matter, since your TXT and SPF records make no 
usable assertions, but a lot of people use SPF right now as part of their 
mail stream management.


R's,
John


Re: [spfbis] Last Call: (Sender Policy Framework (SPF) for Authorizing Use of Domains in Email, Version 1) to Proposed Standard

2013-08-19 Thread John Levine
>* The charter disallows major protocol changes -- removing the SPF RR type
>is a direct charter violation; since SPF is being used on the Internet. ...

Uh huh.

$ dig besserwisser.org txt

;; QUESTION SECTION:
;besserwisser.org.  IN  TXT

;; ANSWER SECTION:
besserwisser.org.   86173   IN  TXT "MN-1334-RIPE"
besserwisser.org.   86173   IN  TXT "v=spf0x44 be allowed to 
contaminate the DNS."
besserwisser.org.   86173   IN  TXT "v=spf0x42 besides, SPF as 
concept is a layering violation."
besserwisser.org.   86173   IN  TXT "v=spf0x41 not valid. because 
SPF records belong in RRtype 99."
besserwisser.org.   86173   IN  TXT "v=spf0x43 adding insult to 
injury, this layering violation MUST not "

$ dig besserwisser.org spf

;; QUESTION SECTION:
;besserwisser.org.  IN  SPF

;; ANSWER SECTION:
besserwisser.org.   86140   IN  SPF "v=spf1 ip6:0::/0 -all"
besserwisser.org.   86140   IN  SPF "v=spf1 ip4:0.0.0.0/0 -all"

R's,
John




Re: Academic and open source rate (was: Charging remote participants)

2013-08-19 Thread John C Klensin


--On Sunday, August 18, 2013 17:04 -0700 SM 
wrote:

>> I'd love to get more developers in general to participate -
>> whether  they're open or closed source doesn't matter.  But I
>> don't know how  to do that, beyond what we do now.  The email
>> lists are free and  open.  The physical meetings are remotely
>> accessible for free and open.
> 
> On reading the second paragraph of the above message I see
> that you and I might have a common objective.  You mentioned
> that you don't know how to do that beyond what is done now.  I
> suggested a rate for people with an open source affiliation.
> I did not define what open source means.  I think that you
> will be acting in good faith and that you will be able to
> convince your employer that it will not make you look good if
> you are listed in a category which is intended to lessen the
> burden for open source developers who currently cannot attend
> meetings or who attend meetings on a very limited budget.

I think this is bogus and takes us down an undesirable path.
First, I note that, in some organizations (including some large
ones), someone might be working on an open source project one
month and a proprietary one the next, or maybe both
concurrently.  Would it be appropriate for such a person (or the
company's CFO) to claim the lower rate, thereby expecting those
who pay full rate to subsidize them?  Or would their involvement
in any proprietary-source activity contaminate them morally and
require them to pay the full rate?  Second, remember that "open
source" is actually a controversial term with some history of
source being made open and available, presumably for study, but
with very restrictive licensing rules associated with its
adaptation or use.

Does it count if the open source software is basically
irrelevant to the work of the IETF?  Written in, e.g., HTML5?
Do reference implementations of IETF protocols count more (if
I'm going to be expected to subsidize someone else's attendance
at the IETF, I think they should).

Shouldn't we be tying this to the discussion about IPR
preference hierarchies s.t. FOSS software with no license
requirements get more points (and bigger discounts) than BSD or
GPL software, which get more points than FRAND, and so on?

Finally, there seems to be an assumption underlying all of this
that people associated with open source projects intrinsically
have more restrictive meeting or travel budgets and policies
than those working on proprietary efforts in clearly-for-profit
organizations (especially large ones).  As anyone who has lived
through a serious travel freeze or authorization escalation in a
large company knows too well, that doesn't reflect reality.

best,
   john






Re: Anyone having trouble submitting I-Ds?

2013-08-18 Thread John Levine
>> The anti-hijacking feature causes the confirmation email to
>> only go to the authors listed on the previous version of the document, so
>> mail was not sent to me and things are working as expected.
>
>This behavior is not documented to the user when they submit the document
>and is therefore a bug.

It's sort of documented somewhere, but I agree that it's a bug that it
doesn't tell the submitter what happened.

I reported it as a bug a while ago, dunno where it is in the tracker.



Re: Academic and open source rate (was: Charging remote participants)

2013-08-18 Thread John Levine
In article <01672754-1c4f-465b-b737-7e82dc5b3...@oracle.com> you write:
>
>I've been told, though obviously I don't know, that the costs are 
>proportional.  I assume it's not literally a "if we get
>one additional person, it costs an additional $500".  But I assume SM wasn't 
>proposing to get just one or a few more "open
>source developer" attendees.  If we're talking about just a few people it's 
>not worth arguing about... or doing anything
>about.  It would only be useful if we got a lot of such attendees.

My trip to the Berlin IETF cost me about $3300, of which the
registration fee was only $650.  (The plane ticket was expensive,
since I flew from upstate NY, but the hotel was cheap because I booked
at a place a block away with a prepaid rate back in May.)

If we're going to provide financial inducements for people to come,
whether open source developers or anyone else, unless they happen to
live in the city where we're meeting, we'll need to give them cash
travel grants, not just waive the fee.  The IRTF brings winners of
their research prize to the meetings to present the winning papers,
so we can look at those numbers to see what it costs.





Re: Academic and open source rate (was: Charging remote participants)

2013-08-18 Thread John C Klensin


--On Sunday, 18 August, 2013 08:33 -0400 Hadriel Kaplan
 wrote:

>...
> And it does cost the IETF lots of money to host the physical
> meetings, and that cost is directly proportional to the number
> of physical attendees.  More attendees = more cost.

I had promised myself I was finished with this thread, but I
can't let this one pass.

(1) If IETF pays separately for the number of meeting rooms, the
cost is proportionate to the number of parallel sessions, not
the number of attendees.

(2) If IETF gets the meeting rooms (small and/or large) for
"free", the costs are borne by the room rates of those who stay
in the hotel and are not proportionate to much of anything
(other than favoring meetings that will draw the negotiated
minimum number of attendees who stay in that hotel).

(3) Equipment costs are also proportional to the number of
meetings we run in parallel.  Since IASA owns some of the
relevant equipment and has to ship it to meetings, there are
some amortization issues with those costs and shipping costs are
dependent on distance and handling charges from wherever things
are stored between meetings (I assume somewhere around Fremont,
California, USA).  If that location was correct and we wanted to
minimize those charges, we would hold all meetings in the San
Francisco area or at least in the western part of the USA.  In
any event the costs are in no way proportionate to the number of
attendees.

(4) The costs of the Secretariat and RFC Editor contracts and
other associated contracts and staff are relatively fixed.  A
smaller organization, with fewer working groups and less output,
might permit reducing the size of those contracts somewhat, but
that has only the most indirect and low-sensitivity relationship
to the number of attendees, nothing near "proportional".

(5) If we have to pay people in addition to Secretariat staff
to, e.g., sit at registration desks, that bears some monotonic
relationship to the number of attendees.  But the step
increments in that participate function are quite large, nothing
like "directly proportional".  

(6) The cost of cookies and other refreshments may indeed be
proportional to the number of attendees but, in most facilities,
that proportionality will come in large step functions.  In
addition, in some places, costs will rise with the number of
"unusual" dietary requirements.  The number of those
requirements might increase with the number of attendees, but
nowhere near proportionately.  "Unusual" is entirely in the
perception of the supplier/facility but, from a purely economic
and cost of meetings standpoint, the IETF might be better off if
people with those needs stayed home or kept their requirements
to themselves.

So, meeting "cost directly proportional to the number of
physical attendees"?  Nope.   

best,
   john

p.s. You should be a little cautious about a "charge the big
companies more" policy.  I've seen people who make the financial
decisions as to who comes say things like "we pay more by virtue
of sending more people, if they expect us to spend more per
person, we will make a point by cutting back on those we send
(or requiring much stronger justifications for each one who
wants to go)".  I've also seen reactions that amount to "We are
already making a big voluntary donation that is much higher than
the aggregate of the registration fees we are paying, one that
small organizations don't make.  If they want to charge us more
because we are big, we will reduce or eliminate the size of that
donation."  Specific company examples on request (but not
on-list), but be careful what you wish for.







Re: Charging remote participants

2013-08-16 Thread John C Klensin


--On Friday, August 16, 2013 15:46 -0400 Hadriel Kaplan
 wrote:

> 
> On Aug 16, 2013, at 1:53 PM, John C Klensin
>  wrote:
> 
>> (1) As Dave points out, this activity has never been free.
>> The question is only about "who pays".  If any participants
>> have to pay 
>> (or convince their companies to pay) and others, as a matter
>> of categories, do not, that ultimately weakens the process
>> even if, most of the time, those who pay don't expect or get
>> favored treatment.  Having some participants get a "free
>> ride" that really comes at the expense of other participants
>> (and potentially competing organizations) is just not a
>> healthy idea.
> 
> Baloney.  People physically present still have an advantage
> over those remote, no matter how much technology we throw at
> this.  That's why corporations are willing to pay their
> employees to travel to these meetings.  And it's why people
> are willing to pay out-of-pocket for it too, ultimately.  It's
> why people want a day-pass type thing for only attending one
> meeting, instead of sitting at home attending remote. 
> 
> Being there is important, and corporations and people know it.

Sure.  And it is an entirely separate issue, one which I don't
know how to solve (if it can be solved at all).  It is
unsolvable in part because corporations --especially the larger
and more successful ones-- make their decisions about what to
participate in, at what levels, and with whatever choices of
people, for whatever presumably-good business reasons.  I can,
for example, remember one such corporation refusing
to participate in a standards committee that was working on
something that many of us thought was key to their primary
product.  None of us knew, then or now, why they made that
decision, although there was wide speculation at the time that they
intended to deliberately violate the standard that emerged and
wanted plausible deniability about participation.  Lots of
reasons; lots of circumstances. 

> An audio input model (ie, conference call model) still
> provides plenty of advantage to physical attendees, while also
> providing remote participants a chance to have their say in a
> more emphatic and real-time format.  We're not talking about
> building a telepresence system for all remote participants, or
> using robots as avatars.

IIRC, we've tried audio input.  It works really well for
conference-sized meetings (e.g., a dozen or two dozen people
around a table) with a few remote participants.  It works really
well for a larger group (50 or 100 or more) and one or two
remote participants.  I've even co-chaired IETF WG meetings
remotely that way (with a lot of help and sympathy from the
other co-chair or someone else taking an in-room leadership
role).  

But, try it for several remote participants and a large room
full of people, allow for audio delays in both directions, and
about the last thing one needs is a bunch of disembodied voices
coming out of the in-room audio system at times that are not
really coordinated with what is going on in the room.  Now it
can all certainly be made to work: it takes a bit of
coordination on a chat (or equivalent) channel, requests to get
in or out of the queue that are monitored from within the room,
and someone managing those queues along with the mic lines.
But, by that point, many of the disadvantages of audio input
relative to someone reading from Jabber have disappeared and the
other potential problems with audio input -- noise, level
setting, people who are hard to understand even if they are in
the room, and so on-- start to dominate.   Would I prefer audio
input to typing into Jabber under the right conditions?  Sure,
in part because, while I type faster than average it still isn't
fast enough to compensate for the various delays.  But it really
isn't a panacea for any of the significant problems.
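(A minimal sketch, in Python, of the sort of queue coordination
just described -- one combined speaker queue fed by both in-room
mic lines and remote requests from a chat channel.  It is purely
illustrative and is not a description of Meetecho, Jabber, or any
actual IETF tooling.)

# Illustrative only: a combined in-room/remote speaker queue.
from __future__ import annotations
from collections import deque
from dataclasses import dataclass

@dataclass
class Request:
    name: str
    remote: bool   # True if the request arrived over the chat channel

class SpeakerQueue:
    def __init__(self) -> None:
        self._queue: deque[Request] = deque()

    def join(self, name: str, remote: bool = False) -> None:
        self._queue.append(Request(name, remote))

    def leave(self, name: str) -> None:
        # Remote participants may withdraw via chat before being called on.
        self._queue = deque(r for r in self._queue if r.name != name)

    def next_speaker(self) -> Request | None:
        return self._queue.popleft() if self._queue else None

if __name__ == "__main__":
    q = SpeakerQueue()
    q.join("in-room participant A")
    q.join("remote participant B", remote=True)
    q.join("in-room participant C")
    q.leave("remote participant B")   # withdrew via the chat channel
    while (r := q.next_speaker()) is not None:
        print(f"next at the mic ({'remote' if r.remote else 'in-room'}): {r.name}")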

>> (2) Trying to figure out exactly what remote participation
>> (equipment, staffing, etc.) will cost the IETF and then trying
>> to assess those costs to the remote participants would be
>> madness for multiple reasons.  [...snip...]
> 
> Yet you're proposing charging remote participants to bear the
> costs.  I'm confused.

I am proposing charging remote participants a portion of the
overhead costs of operating the IETF, _not_ a fee based on the
costs of supporting remote participation.  And, again, I want
them to have the option of deciding how much of it they can
reasonably afford to pay.

>...

best,
   john



Re: Call for Review of draft-rfced-rfcxx00-retired, "List of Internet Official Protocol Standards: Replaced by an Online Database"

2013-08-16 Thread John C Klensin
n other
ways.  However, if they feel some desire to publish it in some
form, let's encourage them to just get it done and move on
rather than consuming even more time on issues that will make no
difference in the long term.

best,
john





Re: Charging remote participants

2013-08-16 Thread John C Klensin
lex process, or one that relied more
on leadership judgments about individual requests, would produce
more than enough additional revenue to compensate for the damage
and risks those approaches would cause, I think it strikes a
reasonable balance. It also addresses all of the issues about
the problems with charging remote participants that have been
raised in this thread except those based on the dubious
principle that anyone who doesn't attend an in-person meeting is
thereby entitled to be subsidized by the rest of the community.

best,
   john









Re: Radical Solution for remote participants

2013-08-16 Thread John C Klensin


--On Friday, August 16, 2013 04:59 -0400 "Joel M. Halpern"
 wrote:

> Maybe I am missing something.
> The reason we have face-to-face meetings is because there is
> value in such meetings that can not reasonably be achieved in
> other ways.
> I would like remote participation to be as good as possible.
> But if we could achieve "the same as being there" then we
> should seriously consider not meeting face-to-face.
> Conversely, until the technology gets that good, we must not
> penalize the face-to-face meeting for failures of the
> technology.

Joel,

I certainly agree with your conclusion.  While I hope the intent
wasn't to penalize the face-to-face meeting, there have been
several suggestions in this thread that I believe are
impractical and a few that are probably undesirable even if they
were practical.   Others, such as improved automation, are
practical if we want to make the effort, would probably help,
and, fwiw, have been suggested by multiple people in multiple
threads.

I do believe it would be helpful for everyone involved in the
discussion to be careful about their reactions and rhetoric.
While it is certainly possible to go too far in any given
direction, significant and effective remote participation will
almost certainly require some adjustments by the people in the
room.  We've already made some of those adjustments: for example,
while it is inefficient and sometimes doesn't work well, using
Jabber as an inbound channel with someone in the room reading
Jabber input at the Mic does help remote participants at some
cost to the efficient flow of the f2f discussions.  

Perhaps that penalizes the face-to-face participants.  I believe
it is worth it and that it would be worthwhile going somewhat
further in that direction, e.g., by treating remote participants
as a separate mic queue.  I also see it as very closely related
to some other tradeoffs: for example, going to extra effort to
be inclusive and diverse requires extra effort by existing f2f
participants and very careful balancing of costs -- higher costs,
and even costs at current levels, discourage broader participation,
but many ways of increasing diversity also increase costs.

Wrt "not meeting face-to-face", I don't see it happening, even
with technology improvements.  On the other hand, the absolutely
most effective thing we could do to significantly decrease costs
for those who need the f2f meetings but are cost-sensitive would
be to reverse the trends toward WGs substituting interim meetings
for work on mailing lists, toward extending the IETF meeting
week to include supplemental meetings, and even to move toward
two, rather than three, meetings a year.  Those changes,
especially the latter two, would probably require that remote
participation be much more efficient and effective than it is
today, but would not require nearly the level of perfection
required to eliminate f2f meetings entirely.  And any of the
three would "penalize" those who like going to extended f2f
meetings and/or prefer working that way and who have effectively
unlimited travel support and related resources.

best,
john



Re: Community Input Sought on SOWs for RFC Production Center and RFC Publisher

2013-08-13 Thread John Levine
>I wonder, though, if this document might have contained change bars that 
>nobody but people who use MS
>Word would see.   Opening the document up in Preview on the Mac, it's just 
>four or five pages of
>text, with no way to evaluate what changed.

It looks fine in OpenOffice.  Really.

I agree with your suggestion that rfcdiff would be nice, too.






Re: Radical Solution for remote participants

2013-08-13 Thread John C Klensin


--On Tuesday, August 13, 2013 06:24 -0400 John Leslie
 wrote:

> Dave Cridland  wrote:
>> On Tue, Aug 13, 2013 at 2:00 AM, Douglas Otis
>>  wrote:
>> 
>>> 10) Establish a reasonable fee to facilitate remote
>>> participants who receive credit for their participation
>>> equal to that of being local.
>> > 
>> 
>> I understand the rationale here, but I'm nervous about any
>> movement toward a kind of "pay-to-play standardization".
> 
>Alas, that is what we have now. :^(
> 
>There are a certain number of Working Groups where it's
> standard operating practice to ignore any single voice who
> doesn't attend an IETF week to defend his/her postings.

There is also a matter of equity even if one were to ignore the
costs to the community of enabling remote participation.   Some
fraction of the registration fee goes to support IETF overhead
activities that are not strictly associated with the costs of
particular meetings.  Although it would be a pity to turn us
into a community of hair-splitting amateur accountants [1], it
is inappropriate to expect those who participate in f2f meetings
to fully subsidize those who participate remotely.
 
>...
>   I don't always understand what Doug is asking for; but I
> suspect he is proposing to define a remote-participation mode where
> you get full opportunity to defend your ideas. This simply
> doesn't happen today.
 
>...
>> One option might be to give chairs some heavy influence on
>> remote burserships.
>...
>That seems premature at this point: the likely costs aren't
> neatly correlated to number of remote participants; so it's
> not clear there's any reason to "support" an individual,
> rather than support the tools.

Worse, enabling WG Chairs to make de facto decisions about who
participates or not would have the appearance of enabling the
worst types of abuse.  It would be worse than figuring out how
to call on advocates of only one position to speak.  Even if
those abuses never occurred, the optics and risk would be bad
news.[2]

>...
>Conceivably what we need is an automated tool to receive
> offers to (partially) subsidize the cost of a tool for a
> particular session.

Seems to me to be the wrong way to go.  I wouldn't want to
discourage Cisco's generosity.  And I think it is time to
declare the "Meetecho experiment" to have been concluded
successfully.  It can use some improvements and I hope it
continues to evolve (I could say the same thing about WebEx but
I'm less optimistic about evolution).  But it works well.  If we
want to use it, it is probably time to take it seriously.
Certainly that means having it available for all relevant
sessions, rather than constrained by the size of the current
team and their resources.  If that means training for operators
other than the core team, having to put in-room operators on a
non-volunteer basis, and/or IETF assumption of equipment
expenses, that would, IMO, be completely appropriate (of course,
that interacts with your comment about remote participation
having costs).

Similarly, "everyone pays but some pay less or zero and we set
up a procedure and/or bureaucracy to figure out who the latter
are" seems like a bad idea.

Simpler suggestion (this interacts with the "data collection"
thread):

(1) Remote participants are required [3] to register (remote
lurkers should continue to get a free ride for multiple reasons).

(2) The IAOC sets and announces a remote registrant fee based on
overhead expenses (those not associated with physical presence
at meetings).  Marginal costs of remote participation are
treated as overhead, not direct meeting expenses, because they
benefit the whole community.

(3) Remote participants pay that fee, or part of it, on a good
faith and conscience "what you can afford" basis with
information about what any particular person pays kept
confidential by the secretariat.  Again, we depend on good
faith. Financially, whatever we collect is better than what we
collect today.  Collecting some fee is better than none and
either too high a fee or the necessity to beg would discourage
registration or, worse, participation.

(4) If the IAOC or ISOC decide to conduct a diversity campaign
to help keep those fees low, more power to them, but such a
campaign (or its success) are not requirements for the above
model working.
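(As a sketch of how little machinery (1)-(3) actually require,
here it is in Python.  The posted fee below is a hypothetical
number; in practice it would come from the IAOC's real overhead
budget, not from anything here.)

# Honor-system remote-fee bookkeeping, with an invented posted fee.
POSTED_REMOTE_FEE = 150.0  # hypothetical per-meeting figure set by the IAOC

ledger = []  # amounts paid; held confidentially by the secretariat

def record_remote_registration(paid: float) -> None:
    # Pay the posted fee, part of it, or nothing, on a good-faith basis.
    ledger.append(max(0.0, float(paid)))

def public_report() -> dict:
    # Only aggregates are published; individual amounts stay confidential.
    return {"remote_registrants": len(ledger), "total_collected": sum(ledger)}

if __name__ == "__main__":
    for amount in (150, 0, 40, 150, 75):  # what each registrant chose to pay
        record_remote_registration(amount)
    print(public_report())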

best,
john


[1] Possibly an improvement on a community of amateur lawyers,
perhaps not.

[2] Incidentally, one of the advantages of the otherwise clumsy
and efficient "mic lines" is that they make the queue clear to
everyone.

[3] We should recognize that we have no realistic enforcement
ability, at least unless non-registration is used to subvert IPR
rules. Any mechanism we might devise would not stop the truly
malicious.  This has to be a good faith requirement.




Re: Community Input Sought on SOWs for RFC Production Center and RFC Publisher

2013-08-13 Thread John Levine
>   http://iaoc.ietf.org/documents/RPC-Proposed-SoW-2013-final.doc
>
>I know that I should not say this, but... I am a bit surprised
>(disappointed) in seeing a proprietary format used here.  I am not
>saying that you should not use the Office suite to write it, but you
>could convert it to PDF (better, PDF/A) before publishing it.
>
>Anyway, I use Linux, so I guess I will not be able to give my input about it.

Hmmn.  Is there some reason you are unable to install OpenOffice?  It
opens and displays the SoW including the redline just fine.

I suppose she could have sent it out in OpenOffice's .odt format which
is nominally more open, but then the people who use MS Word (I hear
there are still a few of them) couldn't read it.  There's no great way
to send around a redlined document and I'd say that Word formats are
currently the least bad.  I presume you know that the more recent
.docx file format is ISO/IEC 29500, so that should make everyone
happy, modulo the detail that it's so complicated that in practice the
older nominally un-open .doc interoperates a lot more reliably.
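(For the Linux folks in the thread: assuming a LibreOffice
installation whose soffice binary is on the PATH and supports
headless conversion -- recent versions do, as far as I know --
something like the following Python sketch turns the posted .doc
into a PDF for reading, though the redline markup may not survive
the conversion.)

# Sketch: convert the posted .doc to PDF with LibreOffice in headless mode.
import subprocess

def doc_to_pdf(path: str, outdir: str = ".") -> None:
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf", "--outdir", outdir, path],
        check=True,
    )

if __name__ == "__main__":
    doc_to_pdf("RPC-Proposed-SoW-2013-final.doc")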

R's,
John


Re: Radical Solution for remote participants

2013-08-13 Thread John Leslie
Dave Cridland  wrote:
> On Tue, Aug 13, 2013 at 2:00 AM, Douglas Otis  wrote:
> 
>> 10) Establish a reasonable fee to facilitate remote participants who
>> receive credit for their participation equal to that of being local.
> >
> 
> I understand the rationale here, but I'm nervous about any movement toward
> a kind of "pay-to-play standardization".

   Alas, that is what we have now. :^(

   There are a certain number of Working Groups where it's standard
operating practice to ignore any single voice who doesn't attend an
IETF week to defend his/her postings.

   I don't always understand what Doug is asking for; but I suspect
he is proposing to define a remote-participation mode where you get full
opportunity to defend your ideas. This simply doesn't happen today.

> I'd be happy to pay for good quality remote participation, but I'd
> be unhappy if this blocked participation in any significant way.

   The fact remains that full remote participation has costs. These
costs only become greater if we pretend that we can expect current
WGCs to do the additional work.

> One option might be to give chairs some heavy influence on remote
> burserships.

   Do you mean
" 
" bursarship: noun
"   a grant or payment made to support a student's education.

   That seems premature at this point: the likely costs aren't neatly
correlated to number of remote participants; so it's not clear there's
any reason to "support" an individual, rather than support the tools.

   Today, requests for IETF-week sessions include checkoffs for
"WebEx required" and "MeetEcho" required. AFAIK these simply generate
requests to cisco and meetecho to subsidize the tool for that session.
Cisco seems to automatically approve using the fully-automated tool,
while meetecho seems to need to allocate staff for setup.

   But of course these checkoffs happen long before the WGC knows of
individuals desiring to participate remotely. :^(

   Conceivably what we need is an automated tool to receive offers
to (partially) subsidize the cost of a tool for a particular session.

--
John Leslie  

