LC comments on draft-cotton-rfc4020bis-01.txt

2013-08-29 Thread Eric Rosen
I think the procedures proposed in this draft could be simplified a bit if
we'd just focus on the actual issue, viz., that the implementation and
deployment of a new specification is completely decoupled from the progress
of that specification through the standards process.  Once any significant
deployment has begun, the codepoints used by that deployment become de facto
unavailable for future use.  The only real issue is whether we want that
codepoint usage to be recorded in a public place, or whether we want to
pretend officially that the codepoints aren't used, and just let the
conflicts arise in the field.

From this perspective, I think the following procedures are problematic:

- Section 2, point d:  early allocation is conditioned upon whether there is
  sufficient interest in early (pre-RFC) implementation and deployment in
  the community as judged by working group chairs or ADs.

  What the WG chairs and ADs really have to judge is whether failure to
  issue an early allocation is likely to result in de facto allocation of
  a codepoint, with consequent conflicts in the future.

  Of course, there have also been many cases where the codepoint is already
  in significant deployment before any official request for it has been
  made; WG chairs and ADs should take notice of this fact.

- Section 3.1, step 6: IANA makes an allocation from the appropriate
  registry, marking it as 'Temporary', valid for a period of one year from
  the date of allocation.  The date of first allocation and the date of
  expiry should also be recorded in the registry and made visible to the
  public.

  What is the point of marking it as temporary?  Once the codepoint is
  placed into use, it is not any more or less temporary than any other
  codepoint; the codepoint is unavailable for future reuse for as long as
  the deployments persist.  Any codepoint that is no longer in use can of
  course be reused, even if its allocation is not marked temporary.  (A
  sketch of the record keeping that actually matters appears after these
  comments.)

  I do not understand the idea of making the allocation expire after just
  one year.  Are the deployments going to disappear after one year?  If not,
  then the codepoint allocation will not expire after one year.

- Section 3.3: If early allocations expire before the document progresses
  to the point where IANA normally makes allocations, the authors and WG
  chairs may repeat the process in section 3.1 to request renewal of the
  code points.  At most, one renewal request may be made; thus, authors
  should choose carefully when the original request is to be made.

  First, it is not up to the authors to choose carefully when the original
  request is to be made.  At a certain point in the implementation process,
  the codepoint is needed.  Failure to get the codepoint soon enough will
  just cause the implementors to make up their own codepoint, which will
  invariably leak out into deployment.  

  Second, there is no reason whatsoever to put a two-year limit on the early
  allocation, unless one expects that the deployments using it will
  magically disappear after two years.

  I've seen a number of IETF standardization efforts lag six or seven years
  beyond the actual deployment of the standard.

  It might be worthwhile for the WG chairs to ask every few years whether
  the proposed use of a codepoint has been abandoned, but without some
  assurance that the codepoint will never be used as specified, there is no
  sense declaring it to be expired.

- Section 3.1: Note that Internet-Drafts should not include a specific
  value of a code point until this value has been formally allocated by
  IANA.

  To me, this seems entirely counter-productive.  Failure to publicize the
  codepoint you're deploying doesn't solve or prevent any problems.  To the
  contrary, including the codepoint values you're using helps to find
  conflicts.

- Section 5: There is a significant concern that the procedures in this
  document could be used as an end-run on the IETF process to achieve code
  point allocation when an RFC will not be published.

  This concern only makes sense when the codepoint is being requested by
  another SDO.  If it's being requested by a vendor (or set of vendors),
  well, no vendor will tell its customers "sorry, you can't have this
  feature because the IETF hasn't allocated a codepoint."

  The real concern is the opposite -- folks will try to delay the allocation
  of a codepoint, in order to prevent their quicker competitors from gaining
  an advantage.  But this only leads to the use of codepoints that are not
  officially allocated.
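
To make the record-keeping point concrete, here is a minimal sketch in
Python of what a registry entry actually needs (the field names and schema
are invented for illustration; this is not IANA's real data model).  Note
that the expiry date can do no useful work:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class RegistryEntry:
        # One row of a hypothetical IANA registry.
        codepoint: int
        reference: str           # the draft or RFC that requested it
        allocated: date          # date of first allocation -- public
        expires: Optional[date]  # recorded, but meaningless once the
                                 # codepoint is in deployed use

        def available_for_reuse(self, deployment_abandoned: bool) -> bool:
            # The only test that matters in practice: has every
            # deployment using the codepoint been abandoned?  An expiry
            # date cannot answer that question, so it plays no role here.
            return deployment_abandoned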
  
My concrete suggestions for improving the draft would be:

- Emphasize that WG chairs and ADs should base their approval of a request
  on the likelihood that an unofficial codepoint will otherwise get
  allocated (or the fact that it is already in use).

- Eliminate the 'Temporary' marking and the automatic expiry, and eliminate
  the need to refresh a request after one year.

- Eliminate the requirement that an i-d not specify a codepoint.

Re: RFC 2119 terms, ALL CAPS vs lower case

2012-05-18 Thread Eric Rosen

 So, I recommend an errata to RFC 2119: "These words MUST NOT appear in a
 document in lower case."

I'm glad you said "I recommend" instead of "I have recommended", as the
latter would violate the recommended (oh dear) rule.

This RECOMMENDED rule would also imply that documents can no longer be
published during the month of May, as otherwise the date line would put the
document out of compliance with this erratum.

Also, it would no longer be possible for the references to list any document
that was published in the month of May.  I suppose we could make it a rule
that the month of May be referred to as "the month between April and June".

Of course, you didn't say that the reserved words muSt nOt appear in mixed
case, so mAybe that will become a workaround.
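
To make the absurdity concrete, here is a minimal sketch in Python of the
kind of naive checker such an erratum would invite (purely hypothetical; it
is not any real tool):

    import re

    RESERVED = ["MUST", "SHOULD", "MAY", "SHALL",
                "REQUIRED", "RECOMMENDED", "OPTIONAL"]

    def lowercase_violations(text):
        # Flag any occurrence of an RFC 2119 keyword that is not
        # written entirely in upper case.
        hits = []
        for word in RESERVED:
            for m in re.finditer(r'\b%s\b' % word, text, re.IGNORECASE):
                if m.group(0) != word:
                    hits.append(m.group(0))
        return hits

    # The date line of any document published in May is a violation:
    print(lowercase_violations("Published: 18 May 2012"))       # ['May']
    # ... though a checker this naive also catches the mixed-case
    # "workaround":
    print(lowercase_violations("these words muSt nOt appear"))  # ['muSt']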

 Seems to me that precision of meaning overrides graceful use of the
 language.

Many IETF specs lack the desirable degree of precision, but the use of the
reserved words in capitalized or uncapitalized form just is not that big a
contributor to the imprecision.  Failure to properly specify all the actions
in all the possible state/event combinations is a much bigger source of
confusion than failure to capitalize "must" (I mean, failure to capitalize
"MUST").  I don't know why so much attention is being given to something
that is NOT going to improve the quality of the specs by any appreciable
amount.


Re: Last Call: draft-ietf-intarea-ipv6-required-01.txt (IPv6 Support Required for all IP-capable nodes) to Proposed Standard

2011-08-23 Thread Eric Rosen
This document really wants to be a BCP that makes deployment and strategy
recommendations.  But for some reason it has been disguised as a standards
track document (i.e., as a protocol specification), and the result is that
RFC 2119 language is being used in a very peculiar way.  I think this is
what is responsible for the impression that the document is bizarre.

Some examples:

  a best effort SHOULD be made to update existing hardware and software
  to enable IPv6 support.

RFC 2119 language is used to distinguish optional from mandatory features in
an implementation.  But "a best effort ... to update existing hardware and
software" does not seem to be a feature of an implementation.  I just don't
understand what this statement requires, or how one would tell if a given
implementation is compliant with it or not.

  Current IP implementations SHOULD support IPv6.

Presumably, current implementations support whatever they support; I don't
really understand what is being required here.

  New IP implementations MUST support IPv6.

Is there some objective difference between a "new" implementation and a
"current" implementation?  How many lines of code have to change before a
current implementation becomes a new one?

But I don't really see why "new" vs. "current" is even relevant.  If there
were a lot of folks writing IP implementations from scratch, using only the
RFCs for guidance, it would be really important to make sure they know about
IPv6.  Does anyone think that that's a real problem?

It may be a problem that new products continue to come out without IPv6
support, but in general, those new products use current implementations.  So
again, there doesn't actually seem to be a useful requirement expressed.

  IPv6 support MUST be equivalent or better in quality and
  functionality when compared to IPv4 support in an IP
  implementation.

So if the v6 support has more bugs than the v4 support (and hence lesser
quality), the implementation would be out of compliance with standards?  I
don't think IETF standards set quality metrics on implementations.

As for functionality, consider the following bit of functionality: the
ability to communicate with an IPv4-only host.  This is a piece of
functionality that v4 support has but v6 support doesn't.  So I guess no
implementation could ever meet the requirements of this document.

Finally:

  MUST NOT require IPv4 for proper and complete function.

Suppose the "proper and complete function" of a box requires the downloading
of updates from a server the box cannot necessarily reach using IPv6.  Then
IPv4 would be required for "proper and complete function" of the box.  Or
perhaps the box is a network monitoring appliance of some sort, and has to
use both IPv4 and IPv6 to do its job.  In this case too, v4 is required for
"proper and complete function", but that shouldn't lead anyone to say that
the box is non-compliant with IETF standards.

This document just does not say what it means.  The RFC 2119 language is not
being used in the usual manner; it's really being used for its rhetorical
value.  This is not appropriate.



Re: Use of unassigned in IANA registries

2011-01-18 Thread Eric Rosen

Phillip But I rather suspect that the reason that this is happening is that
Phillip people know full well that there is a process and choose to ignore
Phillip it because they either can't be bothered to put up with the hassle
Phillip or don't think that the application will be accepted.

Lars Suspect all you want, but it doesn't match my experience.

Phillip's suspicion certainly matches my experience over the past 15 years,
and I've even done my own share of codepoint squatting.  He is also correct
to state that many folks try to use control over the codepoint space as part
of their competitive marketing strategy.  The only way to avoid collisions
due to squatting is to adopt a policy that all codepoint fields be large
enough so that a significant number of codepoints are available for FCFS
allocations.
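
To put some (entirely invented) numbers on that policy, here is a toy
illustration in Python of setting aside part of a codepoint field for FCFS
allocation; the field width and the split are made up for the example:

    # Toy sizing exercise for a codepoint field.
    FIELD_BITS = 16
    TOTAL = 2 ** FIELD_BITS

    standards_action = range(0x0000, 0x8000)  # allocated only via RFCs
    fcfs = range(0x8000, 0x10000)             # registered on request

    print(f"{len(fcfs)} of {TOTAL} codepoints available FCFS")
    # With 32768 codepoints obtainable just by asking, registering is
    # always cheaper than squatting and risking a collision.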




Re: Old transport-layer protocols to Historic?

2011-01-10 Thread Eric Rosen

 RDP is still in use

 was the initial transport used for gateway control (such as SGCP) until
 SCTP was developed.

 Many commercial gateways still support these older pre-standard (to
 MEGACO) control protocols.  Some older devices still provisioned in the
 network only support the older protocols.

 but we had nothing before SCTP that would fit the bill.  Its limitations
 was one of the driving forces behind developing SCTP.

So RDP is a useful and deployed pre-standards protocol that was one of the
driving forces behind a successful standardization effort.  But newer
applications that need that kind of transport instead use the standard,
SCTP.

I don't know how one could possibly make a stronger case for classifying the
RDP spec as Historic!

However, I do agree with John Klensin's remark that reclassification is not
worth the energy it takes.




Re: Gen-ART review of draft-ietf-l3vpn-mvpn-spmsi-joins-01

2010-10-28 Thread Eric Rosen

James perhaps this needs to be stated (that the Type 4 is created by this
James doc for your purpose)?

I think the doc already makes this clear; maybe I'm just not sure what you
are asking.

James You can probably imagine how many SIP and RSVP protocol extensions
James there are (70+ and about 20 respectively off the top of my head), and
James yet every one of them have to state "Session Initiation Protocol
James (SIP)" and "ReSource ReserVation Protocol (version-1) (RSVPv1)" the
James first time they appear, no matter how big the community of interest
James is.

And this makes sense to you?

Okay, I will expand the occurrence of S-PMSI in the abstract.  

On the issue of the maximum UDP packet size, I think that is an
implementation issue and I don't think it is appropriate to raise it in this
document.



Re: Gen-ART review of draft-ietf-l3vpn-mvpn-spmsi-joins-01

2010-10-27 Thread Eric Rosen
 Minor issues:
 - Section 3, 2nd para, second sentence is:

    A Type 4 S-PMSI Join may be used to assign a customer IPv6
    (C-S,C-G) flow to a P-tunnel that is created by PIM/IPv4.

 I'm curious how else might a Type 4 S-PMSI be used? This sentence
 makes it seem unclear as to whether there are any uses of a Type 4.

While there is no other use of the Type 4 S-PMSI Join, there are other
mechanisms that may be used for the same purpose.  If the text read "is
used", someone might interpret that as meaning that there is no other
mechanism for assigning a v6 flow to a v4 P-tunnel.  So I think the wording
should remain as is.

 - Section 3.2, 2nd para, 1st sentence is:

    A single UDP datagram MAY carry multiple S-PMSI Join Messages,
    as many as can fit entirely within it.

 What's the MTU of a UDP packet (or frame)?

 Might there be a problem if the MTU of UDP doesn't match the MTU of
 the link on the PE? This doesn't appear to be covered by the sentence
 above, but should be, IMO (i.e., state that the link MTU is the max
 that can fit new S-PMSI Join Messages, or the max bytes UDP allows
 should be stated here explicitly).

I think it would be an inappropriate layering violation to state the maximum
UDP size in this specification.

If someone were to create a UDP packet that exceeded the link MTU,
presumably it would get fragmented and reassembled.  This might not be a
wise implementation, but it would still work, and I don't see a reason to
prohibit it.
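
For what it's worth, the packing rule being discussed is simple enough to
sketch in a few lines of Python (the message encoding, overhead figures,
and MTU value here are placeholders, not the draft's wire format):

    # Pack whole S-PMSI Join messages into UDP payloads, as many as fit.
    IP_UDP_OVERHEAD = 28   # IPv4 header (20 octets) + UDP header (8)

    def pack_datagrams(messages, link_mtu=1500):
        # Group whole messages into payloads that fit, unfragmented,
        # within the link MTU; an oversized message still goes out in
        # its own datagram and is IP-fragmented -- unwise but workable.
        limit = link_mtu - IP_UDP_OVERHEAD
        datagrams, current = [], b""
        for msg in messages:
            if current and len(current) + len(msg) > limit:
                datagrams.append(current)
                current = b""
            current += msg
        if current:
            datagrams.append(current)
        return datagrams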

 - as someone not familiar with Multicast VPNs, having the acronym 
  S-PMSI Join messages *not* exploded in the abstract is a bit 
  confusing.

I did think about this, but I thought that using the term "Selective
Provider Multicast Service Interface Join message" in the abstract would not
be any less confusing to someone who is not familiar with the MVPN work, and
it would be more confusing to someone who is familiar with the MVPN work.

 I had to look into the first and second paragraphs of the
 intro to determine for myself that it means "Selective Provider
 Multicast Service Interface",

That's a technical term defined in draft-ietf-l3vpn-2547bis-mcast.  I'll bet
you didn't look in that draft to see what this really means!

This is one of those cases where anyone who actually has to use the document
will know what an S-PMSI Join message is, even if they don't know what the
acronym expands to.  I realize that the RFC Editor will probably expand it
anyway ;-)

 - Intro, 3rd para, 1st sentence
  s/specifications/capability (or capabilities)

I think s/specifications/specification would be better.  MVPN implementers
are likely to also be BGP and/or LDP implementers, and in those protocols
the term "capability" is used to refer to something that requires explicit
advertisement.

You had a number of valid complaints about the writing of the first two
paragraphs of the introduction.  What do you think of the following proposed
rewrite:

   The Multicast Virtual Private Networks (MVPN) specification [MVPN]
   defines the notion of a PMSI (Provider Multicast Service
   Interface), and specifies how a PMSI can be instantiated by various
   kinds of tunnels through a service provider's network (P-tunnels).
   It also specifies the procedures for using PIM (Protocol Independent
   Multicast, [RFC4601]) as the control protocol between Provider Edge
   (PE) routers.  When PIM is used as the control protocol, PIM messages
   are sent through a P-tunnel from one PE in an MVPN to others in the
   same MVPN.  These PIM messages carry customer multicast routing
   information.  However, [MVPN] does not cover the case where the
   customer is using IPv6, but the service provider is using P-tunnels
   created by PIM over an IPv4 infrastructure.

   The MVPN specification also specifies S-PMSI (Selective PMSI) Join
   messages, which are optionally used to bind particular customer
   multicast flows to particular P-tunnels.  However, the specification
   does not cover the case where the customer flows are IPv6 flows.

It's true that both paragraphs end with the same "However, ...", but the
first paragraph is about sending customer PIM/IPv6 messages through a
P-tunnel in a v4 infrastructure, while the second paragraph is about using
the S-PMSI Join message to assign specific customer multicast flows to a
particular P-tunnel.


Re: [mpls] Last Call: draft-ietf-mpls-ldp-upstream (MPLS Upstream Label Assignment for LDP) to Proposed Standard

2010-09-30 Thread Eric Rosen

With regard to this draft, I need to reiterate a comment I made during WG
last call, as I think there is a procedural issue that needs to be brought
to the IESG's attention.

The draft has a normative reference to RFC 3472, "Generalized Multi-Protocol
Label Switching (GMPLS) Signaling Constraint-based Routed Label Distribution
Protocol (CR-LDP) Extensions", despite the fact that CR-LDP has been
deprecated by RFC 3468 ("The MPLS Working Group decision on MPLS signaling
protocols", February 2003).  I don't think this is allowable; I interpret
RFC 3468 as requiring that we do not take any action that prevents the
deprecated CR-LDP documents from being reclassified as historic.

The only reason the CR-LDP documents were not classified as historic seven
years ago is that there are other standards organizations that have produced
specs with normative references to CR-LDP.

Section 4.3 of RFC 3468 says:

   standards organizations which reference the document [the CR-LDP specs],
   need to be notified of our decision so that they (at their own pace) can
   change their references to more appropriate documents.  It is also
   expected that they will notify us when they no longer have a need to
   normative reference to CR-LDP.

I think the clear implication of this is that neither the IETF nor any other
SDO should be creating new normative references to CR-LDP documents. Note
that RFC 3468 explicitly calls out RFC 3472 as one of the deprecated
documents.

During WG LC, one of the authors stated that he disagreed with my
interpretation, but WG consensus on this issue was never obtained.

At this point, I believe the only change that is needed to draft-ietf-mpls-
ldp-upstream is to move the reference to RFC 3472 into the Informative
References section.  That is, I think that recent revisions to the draft
have made the normative reference gratuitous, as one does not need to read
RFC 3472 in order to implement any part of this draft.

If the draft is published with this normative reference, it is almost
inevitable that someone will someday say "we don't want to use LDP
upstream-assigned labels, because they depend on CR-LDP, and CR-LDP has
been deprecated."




Re: draft-housley-two-maturity-levels

2010-07-07 Thread Eric Rosen

I don't think folks have appreciated how truly insidious Russ' document is.

First, he proposes to eliminate a set of processes that are frequently used
to portray the IETF in a negative light:

- The use of the label "Standard" for the never-used third level of
  standardization

- The never-used two year limit on Proposed Standards

- The process by which STD numbers are assigned to obsolete documents, but
  not to the documents that obsolete them.

This blatant attempt to improve the public image of the IETF is an obvious
conflict of interest for the IETF chair or any other IESG member!  To the
rest of us, the ability to portray the IETF as a laughing stock is of great
value, especially when we disagree with its results.

But then comes the most insidious proposal of all: the elimination of the
prohibition against downward references.  This clever attempt to remove
one of the disincentives to advancing documents along the standards track
may at first glance seem innocent enough.  However, if the community were to
go along with this attempt to remove disincentives, we might actually end up
with a two level standards process.  Those of us who are happy with the de
facto single level standards process should oppose this change at all costs.
(Fortunately, this tricky proposal is unlikely to succeed in its goal, as
there are so many other disincentives that the document fails to address.)

This document should be sent to a WG where it can be extensively discussed
and analyzed by those members of the community who have the most experience
in failing to achieve consensus on process change.  (Of course, first the
broader community must spend a year or two agreeing on the charter of the
WG.)  The document should then be advanced on the Standards Track, under the
current standards process.  That means that two different Standards
Development Organizations must be created independently from the process
specification in the document, and we must see an implementation report
proving that those two organizations can interoperate.  Only then will it be
appropriate to decide whether the document should become an IETF standard.

Just in case this is an insufficient method of ensuring that there is no
progress, I strongly suggest that the issue of specifying the standards
process be delayed until all issues of document input and output formats,
including internationalization, and representation of diagrams, have been
fully discussed and decided upon.



Re: Last Call: draft-ietf-l3vpn-2547bis-mcast (Multicast in MPLS/BGP IP VPNs) to Proposed Standard

2009-10-19 Thread Eric Rosen

Thank you for your review.  I will try to address your technical comments.
Some of your comments are not really technical objections, but criticisms of
the way the document is constructed, or of the scope of the document, or of
the approach adopted by the WG.  As the construction of the document is the
result of years of difficult negotiation and careful compromise, I will not
address those issues.  Also, I will not attempt to address the
philosophical issues having to do with what is and is not a proper use of
BGP.

Pekka 1) IPv6 support.  The spec apparently aims to support both IPv4 and
Pekka IPv6 because it refers to both in a couple of places.  Yet, there
Pekka is at least one explicit place in the spec (S 7.4.2.2) that's not
Pekka compatible.

Pekka I suspect many of the BGP attributes used, possibly
Pekka also the MCAST-VPN BGP SAFI and others are not IPv6 compatible.

MCAST-VPN BGP SAFI is certainly IPv6 compatible; the details may be found in
draft-ietf-l3vpn-2547bis-mcast-bgp-08.txt.  Section 4 of that draft
discusses the use of AFI 2 (IPv6) with the MCAST-VPN SAFI, and all talk of
VPN-IP routes in that document is meant to indicate either VPN-IPv4 or
VPN-IPv6.

Pekka At the minimum, the status (intent) of the spec should be
Pekka clarified.  Even better would be to improve and include the support
Pekka here.

In general, the procedures specified in the document will enable an IPv4 SP
backbone to support customer use of IPv6 multicast.  You are correct that
section 7.4.2.2 is incomplete in this respect.  

Pekka 2) RP configuration in SP network.  It's not clear if SP network
Pekka needs to know how customer sites have configured their RPs (when
Pekka the customer provides the RP).  At least traditional PIM
Pekka signalling would require SP to know this.  But if auto-rp or BSR
Pekka is not used by the customer, how is this information learned and
Pekka maintained?  Would it require manual configuration?

A PE router does function as a PIM peer of a CE router, and hence needs some
way to get the group-to-RP mappings of the customer.  How this is done is
for the SP and the customer to determine.

Pekka 5) Active source BGP messages.  This is a duplication of a similar
Pekka mechanism in MSDP (RFC3618) which has caused much grief in the
Pekka Internet.  Does this mean that when a host does 'nmap -sU
Pekka 224.0.0.0/4' at a VPN site, this will result in about 268 million
Pekka BGP active source updates being sent (2^28) in the SP backbone?

There are two uses of Source Active routes, described respectively in
sections 9 and 10.  As used in section 9, this problem cannot arise because
the SA routes are not generated as the result of receiving data traffic.
The use described in section 10 does however need to be discussed in the
security considerations.

Pekka This problem is not described in security considerations.

From the Security Considerations section:

   an implementation SHOULD provide mechanisms that allow a SP to place
   limitations on the following:

 - total number of (C-*,C-G) and/or (C-S,C-G) states per VRF

Since SA AD routes are generated only as a result of creating the
corresponding multicast states, limiting the number of multicast states per
VRF results in limiting the number of Source Active routes.

A more specific discussion can be found in the Security Considerations
section of draft-ietf-l3vpn-mcast-bgp:

   In conjunction with the procedures specified in the section
   "Supporting PIM-SM without Inter-Site Shared C-trees", an
   implementation SHOULD provide capabilities to impose an upper bound
   on the number of Source Active A-D routes, as well as on how
   frequently they may be originated.  This SHOULD be provided on a
   per PE, per MVPN granularity.

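The kind of limiter being called for is straightforward to sketch; here is
a rough Python illustration (the per-PE, per-MVPN limit values are invented
for the example):

    import time

    class SourceActiveLimiter:
        # Bound both the count and the origination rate of Source
        # Active A-D routes for one (PE, MVPN) pair.
        def __init__(self, max_routes=1000, max_per_second=10):
            self.max_routes = max_routes
            self.max_per_second = max_per_second
            self.active = set()           # routes currently advertised
            self.window_start = time.monotonic()
            self.in_window = 0

        def may_originate(self, route_key):
            now = time.monotonic()
            if now - self.window_start >= 1.0:
                self.window_start, self.in_window = now, 0
            if route_key in self.active:
                return True   # already advertised; nothing new to send
            if len(self.active) >= self.max_routes:
                return False  # per-VRF state bound exceeded
            if self.in_window >= self.max_per_second:
                return False  # origination rate bound exceeded
            self.active.add(route_key)
            self.in_window += 1
            return True
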

Pekka 6) PIM-BIDIR usage.  May the SP use PIM-BIDIR internally even if the
Pekka customer interface would use PIM-SM?

I'm not sure I understand exactly what you are asking.  The technology for
building the P-tunnels is completely independent of the multicast technology
used by the customer.

Pekka 7) Type 0 Route Distinguisher.  The spec mandates using type 0 RD which
Pekka embeds 16-bit AS-number.

Good catch.  This is actually an error in the spec.  The material in section
8.2 reflects some earlier thinking about procedures that were revised and
worked out in more detail in draft-ietf-l3vpn-2547bis-mcast-bgp.  What we
will do is eliminate the text in section 8.2 and replace it with a reference
to the mcast-bgp draft.


 3.2. P-Multicast Service Interfaces (PMSIs)

  Multicast data packets received by a PE over a PE-CE interface must
  be forwarded to one or more of the other PEs in the same MVPN for
  delivery to one or more other CEs.

Pekka .. is this strictly accurate?  doesn't this depend on where the RP is
Pekka configured to be?  This seems to assume that the RP configuration is
Pekka always provided by the customer, never by SP?  Because if RP is

Re: xml2rfc is OK (was: Re: XML2RFC must die, was: Re: Two different threads - IETF Document Format)

2009-07-06 Thread Eric Rosen

Lars since you asked: I have absolutely no problems with xml2rfc.

I find that xml2rfc takes too much control over the boilerplate and the
references to be a really useful tool.  I dropped it after one attempt.
However, many of my colleagues use it, and as a result I've gotten many
questions of the form "what do I have to do to make xml2rfc produce output
that will pass idnits?".  I can't tell them "just put in the following
boilerplate"; instead I've had to figure out the right value of the "ipr"
variable.  (BTW, no one ever cares what the boilerplate actually says, just
whether it will pass idnits; xml2rfc really encourages folks to ignore the
semantics of the boilerplate.)

Joel One large draft I was working on was originally written using WORD.  I
Joel found it extremely difficult to work with (although I have a current
Joel version of Word available at all times.)  Instead, working with
Joel another author we converted the document to XML for XML2RFC.

Hey, I've converted from both Word and XML to nroff; that way I don't get
any surprises ;-) OTOH, I have to admit that nroff was a bit challenging
when I moved from Solaris to Linux.

Joel I have seen some folks arguing that we should make XML2RFC normative
Joel and mandatory.

Of course, in the IETF it is very common for folks to think that their
personal preferences are objectively superior.




Re: RFC archival format, was: Re: More liberal draft formatting standards required

2009-07-06 Thread Eric Rosen

 huge number of mobile devices that handle HTML effortlessly and IETF
 legacy ASCII not at all

HTML is a good presentation format, but as an archival format it seems to
leave a lot to be desired, as the included links always seem to go stale.

Also, I don't think that the notions of "page numbers" and "table of
contents" have quite reached the status of quaint and archaic.

 large number of standard office printers that print HTML instantly and
 correctly

My experiences printing HTML docs are a bit different, I guess.

Is there no tool that will html-ize an RFC or i-d for presentation?  Folks
who want to read RFCs on their cell phones should be able to do it, but I
really don't see what that has to do with archival formats.


Re: LC summary for draft-ietf-opsawg-operations-and-management

2009-06-24 Thread Eric Rosen
Looking through your summary of the LC comments, it appears that there is
considerable sentiment to publish this document as Informational rather than
as a BCP.  Yet the new revision still says "Intended Status: BCP".

 [dbh: this document went to great lengths to say that it was NOT
 prescribing a Management Considerations requirement. sigh]

If this document were to proceed as a BCP, the following text could be
interpreted by an AD as license to require a Management Considerations
section:

   Any decision to make a Management Considerations section a mandatory
   publication requirement for IETF documents is the responsibility of
   the IESG, or specific area directors, or working groups, and this
   document avoids recommending any mandatory publication requirements.

Since assigning responsibilities to the IESG is presumably out of scope of
this document, why not shorten this sentence to:

   "This document avoids recommending any mandatory publication
   requirements."



Re: Last Call: draft-ietf-opsawg-operations-and-management (Guidelines for Considering Operations and Management of New Protocols and Protocol Extensions) to BCP

2009-06-04 Thread Eric Rosen
Adopting this document as a BCP would be a serious mistake, and I hope it
will be strongly opposed.

There is absolutely no evidence that following the dictates of this document
will improve the quality of the IETF's work, and we certainly know it won't
improve the timeliness.  There is no evidence that ignoring it will harm the
Internet.

I don't see that OPSAWG has any business imposing requirements on work done
by other WGs in other Areas.  There are already an adequate number of
obstacles to getting work done.  The last thing we need is another required
"considerations" section, or another set of excuses for ADs from one Area
to block documents from other Areas.



Re: Last Call: draft-ietf-opsawg-operations-and-management (Guidelines for Considering Operations and Management of New Protocols and Protocol Extensions) to BCP

2009-06-04 Thread Eric Rosen

 This does not mean we have to simply accept what they (OPS) say.  But it
 does mean we should give it a fair review, looking at the details,
 rather than objecting on principle.

This is absolute nonsense.  Most of the people actually doing work in the
various areas do not have the time, interest, or expertise to do a detailed
review of an OPS document.  However, these are the people who are in the
best position to determine whether "OAM Considerations" would help or hinder
the work that they do.

If we are going to talk about adding new hoops for folks to jump through, we
should first discuss whether any such hoops are necessary.  We should not
start the discussion by looking at the details of the particular proposed
hoops.

 the OPS area has as much right to propose their requirements as any other
 area (Transport Congestion, Security, ...) has.  And generally, the
 community has listened to such requests and gone along with them.

Generally, the community (i.e., the folks doing the work in the various
areas) has never even heard about these proposed requirements until after a
BCP appears, at which time they are told that the BCP has "community
consensus".  Perhaps you're familiar with Douglas Adams' "The Hitchhiker's
Guide to the Galaxy".  To paraphrase: "but the plan for the destruction of
earth was clearly posted in the planning department on alpha centauri; it's
not our fault you didn't see it."




Re: Proposed Experiment: More Meeting Time on Friday for IETF 73

2008-07-18 Thread Eric Rosen
I oppose this experiment.

A better experiment would be to eliminate the Friday morning sessions.



Re: Nomcom process realities of confidentiality

2008-03-20 Thread Eric Rosen

 The random selection process coupled with the relatively small number
 of volunteers is an interesting factor in all of this

Nomcom is a group of people randomly selected from among a set of folks
whose only qualifications are that they want to be on nomcom and they like
traveling to meetings.

I've never understood the logic behind the theory that this is supposed to
lead to a good slate of candidates.

 The Giant Reset of 2006

Oh, I remember that.  For fairness, we replaced one randomly selected group
of folks with another randomly selected group of folks from the same
volunteer pool.  We wouldn't want to end up with the "wrong" randomly
selected group, would we?  That's another bit of logic I never understood.

Given that the whole process is whacky, fine tuning of various little
details probably won't make much of a difference.



Re: Experimental makes sense for tls-authz

2007-10-29 Thread Eric Rosen

As a personal political view, I happen to be opposed to the notion of
software patents.  But I still think that the document in question should
be published as Experimental:

- It's quite plain that this political view has never been adopted by IETF
  consensus.  (I also think it plain that it has no chance of being adopted
  by IETF consensus.)

- I don't think the IETF considers "this document offends my political point
  of view" to be a legitimate reason for opposing the document.  The degree
  of passion and/or repetition with which the political view is expressed is
  irrelevant.  (The suppression of a document for political reasons is
  frequently called "censorship", even if other avenues of publication still
  exist.)

- It's really within the province of each WG to determine whether its
  standards are implementable by whoever needs to implement them in order
  for the standard to be successful.  This may or may not include open
  source implementations.

- If a particular proposal is technically sound, but not adopted because the
  WG thinks that its patent encumbrances are a bar to implementation, then
  it is perfectly valid to publish the proposal as a non-standards-track
  RFC.  The only real criterion is that the technical content be interesting
  or otherwise worth preserving.

With regard to the coordinated letter-writing attack being waged on this
list, well, we're all familiar with the situation in which folks try to get
their way by getting lots of non-participants to send scripted messages.
Often you can tell that the message writers don't even know what the issues
are, but at least most of the letter-writing campaigns pretend to be about
technical issues; the current campaign doesn't even bother with the
pretense!




Re: Charging I-Ds

2007-08-01 Thread Eric Rosen
Eric Gray The discussion is essentially inane

I think this is an excellent observation.  It suggests to me though that
perhaps the best way to get more funding for the IETF is to impose a
surcharge on inane messages to the ietf mailing list.  The surcharge can be
based on the degree of inanity of the message.

I suggest the following schedule of charges:

- $10 for a generic message whining about US customs/immigration processes

- $10 for a clueless message suggesting a reorganization of the IETF or a
  change of the fee structure (fortunately not to be imposed retroactively)

- $10 for a message about the value or lack thereof of ASCII art

- $10 for a message about the format of RFCs

- $15 for a message whining about US customs/immigration processes, if the
  whine is backed up only by anecdotes

- $100 for a message suggesting that US customs/immigration processes are
  unfair to white men from western Europe.  I'd raise the fee to $500 if
  sent by someone with an obvious chip on his shoulder.

- $100 for a message suggesting that IETF meetings be held in peculiar
  locations

- $100 for a message suggesting that the cookies at IETF meetings should be
  rationed

- $100 for a message stating that the list is full of inane messages (not to
  be imposed retroactively)

- $200 for a message saying that NAT is evil

- $200 for a message whining about the IETF's lack of sufficient emphasis
  on IPv6

- $500 for a message whining about the fact that IETF meetings do not
  routinely occur in one's home town.  I would raise the fee to $1,000 if
  Barcelona is mentioned.

- $500 for a message saying that the job of the IETF is to prevent the
  marketplace from making technology choices

- $1,000 for a message stating that the poster knows how to solve the spam
  problem once and for all.

Of course, this is not an exhaustive list of inane categories of message,
it's just a start.

If, during the course of a week, a single poster sends multiple inane
messages which say exactly the same thing, I would double the fee for each
subsequent message.

Putting such a schedule of charges into place would either eliminate the
IETF's budget problems or else make its mailing list a lot more useful.



Re: Last Call: draft-klensin-norm-ref (Handling Normative References for Standards Track Documents) to BCP

2007-02-28 Thread Eric Rosen
This document has a reasonable goal, but the implementation is objectionable.

The reasonable goal (from the introduction): 

This document replaces the "hold on normative reference" rule with a
"note downward normative reference and move on" approach for
normative references to standards-track documents and BCPs.

The objectionable implementation: 

  A note is included in the reference text that indicates that the
  reference is to a target document of a lower maturity level, that
  some caution should be used since it may be less stable than the
  document from which it is being referenced

In many cases (probably the vast majority) where a document is advancing
despite a "downward normative reference", the referenced document (and the
technology described therein) is no less stable than the referencing
document, and no caution is required.  It's great to be able to "make a
downward reference and move on", but we should not be required to tell lies
and spread FUD.

An acceptable implementation would be: 

  A note is included in the reference text that indicates that the
  reference is to a target document which is at a prior step in the
  official IETF standardization process.

That statement adequately captures the facts, and does not require the WG to
make dishonest statements or to spread FUD.

Another alternative, probably a better one, is simply to annotate each
reference with its standards status, and make no editorial comment about it
at all.





Re: As Promised, an attempt at 2026bis

2006-09-29 Thread Eric Rosen
On the issue of whether we have a de facto one-step process, the real
question is not whether subsequent steps are ever invoked, but whether the
subsequent steps actually have any practical impact on the Internet.  One
can certainly point to a handful of cases where the subsequent steps are
invoked, but the point is that it makes no difference to the Internet
whether the subsequent steps are invoked or not.  So I think it is quite
accurate to say that we have a de facto one-step process.

It is thus logical for advocates of the one-step process to argue that we in
fact have more or less what we need, and to be skeptical of anything that
might result in giving more credence to (or even calling more attention to)
the subsequent steps.

The real problem with the process is that a protocol can be widely deployed
in multiple interoperable implementations for six or seven years before its
specification even achieves this one step.  This can happen because the WG
gets inundated with idiots, and/or because companies are using the IETF as a
marketing battleground, and/or because the IESG deliberately tries to
obstruct progress, and/or because the security ADs require you to figure out
the insanely complicated endsystem-oriented security architecture so you can
explain why you don't need to adhere to it.  I'm pretty sure there is no
IESG-wide consensus on how to address these issues, but if one has suffered
through any of these multiple-year delays, one is likely to oppose anything
that reeks of "more process".  This can lead one to be suspicious even of
writeups that claim to be merely descriptive, as the writeups may (whether
intended to do so or not) serve to extend the life of various problematical
processes that might wither away faster if they were never written down.

Sometimes it's just better to leave well enough alone.




Re: Last Call: 'A Process Experiment in Normative Reference Handling' to Experimental RFC (draft-klensin-norm-ref)

2006-06-01 Thread Eric Rosen

 The IESG plans to make a decision in the next few weeks, and solicits
 final comments on this action.

If the individual submission is approved as an Experimental RFC, does that
mean that the IETF will adopt the proposed experiment?  If so, I don't
think this draft should be approved.  (Actually, I suspect the fix is in,
but for the record ...)

The proposal seems primarily intended to deal with the following problem.
Sometimes there are cases in which a doc is ready to become a DS, but
cannot, because of the infamous "downref" rule, which states that no DS can
normatively reference a PS.

The proposal leaves the downref rule in place, but allows it to be waived if
the WG is willing to approve derogatory text about the referenced
technology:

   A note is included in the reference text that indicates that the
   reference is to a document of a lower maturity level, that some
   caution should be used since it may be less stable than the
   document from which it is being referenced,

Frankly, I think this waiver procedure is outrageous, and entirely
unacceptable.  The fact that the referenced document has not gone through
some bureaucratic process does not mean that it is any less stable, or that
any more caution is required in its use.  Inserting this derogatory language
about technology which may be well-proven and widely deployed will be
extremely misleading to the industry.

I think that any rule which requires us to insert false and misleading
statements in the documents should be strongly opposed.

Even worse:

 The IESG may, at its discretion, specify the exact text to be used

Great, not only is the WG required to denigrate its own technology, but the
IESG is given free rein to insert whatever derogatory remarks they feel like
putting in.

Of course, we'll be told not to worry, since:

   If members of the community consider either the downward reference or
   the annotation text to be inappropriate, those issues can be raised
   at any time in the document life cycle, just as with any other text
   in the document.

Great.  Another useless thing to argue about in the WG, and another useless
thing to argue about with the IESG.

There are also other reasons why I find this proposed experiment
disheartening.

For one thing, it really misses the point.  We need to simplify our
processes, not make them more complicated.  Either we need the downref rule
or we don't.  If we want to experiment, let's experiment with eliminating
the rule entirely, not with fine tuning it.

The real underlying problem of course is that the multi-stage standards
process is just a relic from another time, and makes no sense at all in the
current environment.  Experiments in fine tuning the process are nothing but
a distraction.




Re: Last Call: 'A Process Experiment in Normative Reference Handling' to Experimental RFC (draft-klensin-norm-ref)

2006-06-01 Thread Eric Rosen

 that text is not derogatory, but a simply statement of fact.

Sorry, but however you may try to talk your way out of it, a statement like
"that technology may be unstable" is derogatory.

 Until and unless the definitions of maturity levels are changed, that text
 is not derogatory, but a simply statement of fact.

I'm afraid that the facts as to whether a technology is stable are in no way
dependent on the IETF's definitions of maturity levels.

 If a WG agrees with you about a particular piece of technology,
 they have three choices:

Well, 4: they can issue the doc as a PS obsoleting the old PS.

 If I were writing a document that needed to reference a specification
 that was as well-defined, mature, and stable as you posit, I'd
 first try to get that specification advanced to the right
 maturity level

That's an interesting fact about yourself, but personally I'd prefer to
spend my time doing something useful.

 But the assertion you are making about a (e.g.) Proposed
 Standard specification being stable, mature, well-defined,
 widely-deployed, etc., is one that presumably should get some
 community review

Sure.  The WG should not advance a doc to DS if it really depends on
something which isn't stable.  The WG needs to be aware of the facts, and
should not be compelled to insert statements which they know to be false.

 you should support this on your theory that it will create more arguments
 and bog things down further.

No, I don't think there's any need to do anything that creates more
arguments and bogs things down further.  I understand that there's no
consensus on how to avoid the iceberg, but that doesn't mean I want to take
the time to run experiments on more complicated ways to arrange the deck
chairs.



Re: IETF Meeting Venue Selection Criteria

2005-10-20 Thread Eric Rosen

 There is no objective way to identify 'primary contributors' other than by
 assuming the regular attendees are also contributors.

This is simply silly.  It's not much of a secret, in any WG, who does the
work and who comes to listen.

 We've tried looking at how many local first-time attendees from (say)
 Korea later became regular attendees but the data are hard to state in any
 meaningful way and the time constants are long (years).

This is a somewhat roundabout way of saying that you have no data to
support your position.

 We certainly know that going a long way from most places,
 as we did in Adelaide, impacts attendance significantly -
 but my recollection is that Adelaide was a very successful
 meeting in terms of WGs making progress.

Obviously recollections differ.

By scattering meetings all over the world, with no consideration of the
average travel time, you encourage the creation of a class of professional
standards-meeting-attenders, which is just the opposite of what is wanted.

 income [from local participants] that we badly need.

Well, this is the first I've heard that we want to maximize the number of
people who come to listen rather than to work.  Everything I've ever heard
in the past suggested the opposite.  If we now want to maximize the number
of passive attendees, I'm sure we can find a way to do it without scattering
the meetings all around the world.



Re: IETF Meeting Venue Selection Criteria

2005-10-18 Thread Eric Rosen

I find this whole discussion to be nothing but an attempt by some to push
their personal political views and preferences down the throats of everyone
else.

I don't think there should be any political preconditions on the IETF venue.
If we are going to start requiring that the host country be governed in
accordance with our own political views, I have quite a few preconditions I
could offer, but they're quite different from the ones Avri offered.

Further, if we're going to select host countries based on how convenient or
inconvenient it is to enter that country, the preference should be weighted
by the number of attendees (or even better, by the number of active
contributors), so that we don't eliminate venues which are convenient to the
many but inconvenient to the few.




Re: Let's make the benches longer.... (Re: draft-klensin-nomcom-term-00.txt)

2005-08-01 Thread Eric Rosen

 the normal process for AD replacement involved choosing which of the
 people who had worked with the AD for a long time could do the job this
 time,

In American vernacular, this procedure is known as "cronyism".

Generally, one doesn't expect to see this advocated in a public forum ;-)




Re: [newtrk] Re: List of Old Standards to be retired

2004-12-17 Thread Eric Rosen

Eliot Even if someone *has* implemented the telnet TACACS user option,
Eliot would a user really want to use it?

Eric I don't know.  Do you?

Eliot Yes, I do.  Many of us do.  And that's the point.

I'm sure you think you know, but I don't know that you know, which means
that a lot of people have to waste a lot of time preventing you from doing
damage.

Are you also the one who knows that OSPF demand circuits are never used?



Re: List of Old Standards to be retired

2004-12-16 Thread Eric Rosen

I see this exercise has already reached the point of absurdity.

How can it possibly be worth anyone's time to look at each telnet option and
determine whether it is deployed?  What possible purpose could be achieved
by changing the standards status of some telnet option?  Is there some
chance that someone is going to implement one of these by mistake somehow?

A similar comment applies to the FDDI MIB.  Are we trying to make sure that
no one implements that MIB by mistake?  Or that no one implements FDDI by
mistake, just because he thinks there's an IETF standard about it?

Let me echo Bob Braden's "if it's not broken, why break it?" query.



Re: [newtrk] Re: List of Old Standards to be retired

2004-12-16 Thread Eric Rosen

 This is a simple way to simply do what we said we were going to do.

The worries about this whole anti-cruft business have always been (a) the
likelihood that it would do more harm than good, and (b) the likelihood that
it would waste enormous amounts of time on issues of no importance.  The
initial foray into this area doesn't do much to relieve these worries.

 Even if someone *has* implemented the telnet TACACS user option, would a
 user really want to use it?

I don't know.  Do you?  How do we go about answering a question like that?
How much time should the IETF spend determining the answer to this question?

It's just better to leave well enough alone.



Re: archives (was The other parts of the report....

2004-09-14 Thread Eric Rosen

I've never thought that the IETF was OBLIGATED to hide old I-Ds; that
seems a rather far-fetched interpretation of anything in RFC 2026.

However, I think there is a real practical problem in making the old i-d's
too readily available.  I frequently get messages asking me questions like
"where is draft-rosen-something-or-other-04.txt, I can't find it", to which
the answer is one of the following:

a. you want draft-rosen-something-or-other-23.txt, or

b. you want draft-ietf-somewg-something-or-other-05.txt, or

c. you want RFC 12345.

What's happened is that they have found some email which references a long
outdated draft, and have no clue how to get to the most up-to-date version,
which is what they really want to see.

If we make it too easy to access the old drafts, a lot of people will just
get the old drafts instead of being forced to look for the more recent work.

Sure, people who really want to see the old drafts should be able to get
them, but people who really want to see the most up-to-date versions
shouldn't get the old drafts just because they only know an old draft name.

In a perfect system, someone would go to the IETF's official I-D page, enter
a draft name, and get a prominent pointer to the most recent version (even
if it is now an RFC or a draft with a different name), along with a less
prominent pointer to the thing they actually asked for.
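
A sketch of what that lookup could be, in Python (the successor table and
names are of course hypothetical):

    # Follow a draft name, however stale, to the latest version.
    SUCCESSOR = {
        "draft-rosen-something-or-other-04":
            "draft-rosen-something-or-other-23",
        "draft-rosen-something-or-other-23":
            "draft-ietf-somewg-something-or-other-05",
        "draft-ietf-somewg-something-or-other-05": "RFC 12345",
    }

    def most_recent(name):
        # Walk the successor chain to its end, guarding against loops.
        seen = set()
        while name in SUCCESSOR and name not in seen:
            seen.add(name)
            name = SUCCESSOR[name]
        return name

    print(most_recent("draft-rosen-something-or-other-04"))  # RFC 12345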

If that can't be done, it might be better to keep the expired drafts
officially "hidden".  Not for the reasons being given by our more
academically inclined colleagues, but for the practical reasons described
above.  Sure, the expired drafts might be obtainable via Google, but getting
something from Google is a bit different than getting it via the IETF's
official web page.



Re: IETF mission boundaries (Re: IESG proposed statement on the IETF mission)

2003-10-17 Thread Eric Rosen

 The gist of this comment is that someone developing a network
 application protocol ought to somehow get a blessing from the IETF. 
 Reality check. Who got the IETF approval to deploy ICQ, Kazaa, or for
 that matter HTTP? 

The fact that someone did something without the IETF's approval does not
imply that what they did is outside the scope of the IETF, or that it is
beyond the IETF's mission.


Re: IETF mission boundaries (Re: IESG proposed statement on the IETF mission)

2003-10-16 Thread Eric Rosen

 That is wrong or at least a gross overstatement. 

If  that's  what  you think,  I  invite  you  to  make  a list  of  all  the
IETF-standardized protocols and explain how  they are all (or even more than
50% of them) needed to make the Internet work.

 There have been many things that the IETF has chosen to step away from but
 that ran and run over the Internet.  Some graphics standards come
 immediately to my mind ... Those graphics standards were kept out of the
 IETF not because the working groups involved didn't think they were
 experts, but because the subject was out of scope for the IETF.

I'm not familiar with this particular case, but I don't see why protocols
for distributing graphics would be thought to fall outside the scope of the
IETF, any more than protocols for distributing voice or video.  Of course,
graphics standards that have nothing to do with distribution of the graphics
over IP would be out of scope.

 No committee is ever able to limit itself on grounds of insufficient
 expertise.  

Now, there  is a  gross overstatement!  For  everyone who  proclaims himself
(rightly or  wrongly) to be  an expert on  some topic, there are  always two
other people who claim  that he is clueless.  It's not uncommon  for a WG to
refuse  to  pick up  a  topic  because the  consensus  is  that the  topic's
proponents are clueless.  











Re: IETF mission boundaries (Re: IESG proposed statement on the IETF mission )

2003-10-16 Thread Eric Rosen

 - "For the Internet" - only the stuff that is directly involved in making
 the Internet work is included in the IETF's scope.

In other words, routing, DNS, and Internet operations/management.  Adopting
this as the IETF's mission would be a very radical change indeed!  While
this particular mission statement does seem to reflect the interests of a
certain notorious IESG member, let's not pretend that this has ever been the
limit of the IETF's mission.  The IETF has always been concerned with things
that make the Internet more useful, and with things that expand the utility
of the IP protocol suite.  There's never been a time when "for the Internet"
was an accurate representation of the IETF's concerns.

You are of course welcome to propose such a radical change to the IETF's
mission.  But if you are going to circulate a document under the subject
line "IESG proposed statement on the IETF mission", you should make it clear
that the IESG is proposing to make a complete change in the IETF mission.
Instead, you give the impression that the IESG thinks that "for the
Internet" is and has always been the IETF's mission.

The formulation I like is "Everything that needs open, documented
interoperability and runs over the Internet is appropriate for IETF
standardization."  This is much truer to the IETF's current and historical
practice.

That doesn't necessarily mean that the IETF has to standardize everything
that falls within its mission.  For instance, a particular area might fall
within the mission, but the IETF might not have the expertise to tackle it.
A WG in that area could then be rejected on the grounds of "insufficient
expertise".  Such decisions would have to be made on a case-by-case basis.
Again, this is the way such decisions have always been made in the IETF.










Re: IESG proposed statement on the IETF mission

2003-10-15 Thread Eric Rosen

 The purpose of  the IETF is to create high  quality, relevant, and timely
 standards for the Internet. 

 It is important that this is "For the Internet", and does not include
 everything that happens to use IP.  IP is being used in a myriad of
 real-world applications, such as controlling street lights, but the
 IETF does not standardize those applications.

Well, let's test this assertion.  Suppose a consortium of electric companies
develops a UDP-based protocol  for monitoring and controlling street lights.
It turns  out that  this protocol generates  an unbounded amount  of traffic
(say,  proportional to  the square  of the  number of  street lights  in the
world), has no  congestion control, and no security, but  is expected to run
over the Internet. 

According to you, this has nothing to do with the IETF.  It might result in
the congestive collapse of the Internet, but who cares, the IETF doesn't "do"
street lights.  I would like to see the criteria that determine that
telephones belong on the Internet but street lights don't!
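Just to make the "unbounded" point concrete, here is the back-of-the-envelope
arithmetic in Python.  The device counts and the 100-byte message size are
invented for illustration; the shape of the growth is the point:

    # Traffic proportional to the square of the number of street lights.
    # Device counts and message size are made up.

    MSG_SIZE_BYTES = 100   # assumed size of one monitoring message

    for lights in (10_000, 1_000_000, 100_000_000):
        msgs = lights ** 2
        terabytes = msgs * MSG_SIZE_BYTES / 1e12
        print(f"{lights:>11,} lights -> {msgs:.0e} messages, {terabytes:.0e} TB")

    #      10,000 lights -> 1e+08 messages, 1e-02 TB
    #   1,000,000 lights -> 1e+12 messages, 1e+02 TB
    # 100,000,000 lights -> 1e+16 messages, 1e+06 TB

At the high end that is a million terabytes per reporting cycle, with no
congestion control to slow it down.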

Another problem with your formulation is that the Internet is a growing,
changing entity, so "for the Internet" often means "for what I think the
Internet should be in a few years", and this is then a completely
subjective criterion.  One would hope instead that the IETF would want to
encourage competition between different views of Internet evolution, as the
competition of ideas is the way to make progress.

I also do not understand whether "for the Internet" means something
different from "for IP networking" or not.

I think it should also be part of the mission to produce standards that
facilitate the migration to IP of applications and infrastructures that use
legacy networking technologies.  Such migration seems to be good for the
Internet, but I don't know if it is "for the Internet" or not.




Re: The requirements cycle

2003-07-07 Thread Eric Rosen

Eric Not sure what you mean; it always takes time to produce a document,
Eric even if the document is just a "rock fetch".

Harald sorry; "rock fetch" is beyond my scope of American idiom.

"Rock fetch": when the boss sends the workers out on useless but
time-consuming tasks.

Harald But version  -01 of the framework  document is dated  July 19, 2001,
Harald and the  first version submitted to  the IESG is  dated February 15,
Harald 2002. 

About six months to get the WG to agree on the framework; that doesn't seem
excessive for a document.  It's a rock fetch, though, because there is no
real need for a framework document.

Harald I took that as a hint  that there might have been controversy in the
Harald working group about it. 

It  was  never a  very  controversial  document.   My recollection  is  that
framework and requirements were ready about  January of 2002, which is why I
said that they were ready about  18 months ago.  So were the protocol specs,
applicability statements, etc.

Eric Well,  each  objection from  the  IESG needs  to  be  discussed and  a
Eric response crafted. 

Harald which should take approximately 3 days of work, IMHO.  Comments that
Harald translate to "you are referencing an obsolete version of LDAP"
Harald should take approximately 2 minutes to fix.

Comments which were  received last fall (I first saw them  a few weeks prior
to  the Atlanta  IETF  meeting)  required a  considerable  reworking of  the
document.  (Sisyphus comes to mind here ;-))

Harald Did the  WG declare consensus on  all those documents  18 months ago
Harald (January 2002)? 

The WG  was told by the  WG chairs that the  IESG would not allow  the WG to
even consider  the solutions documents until the  framework and requirements
documents  were approved  by the  IESG.  Something  is very  wrong  with the
process here.

The L3VPN  protocol specs  themselves haven't changed  in years, which  is a
good thing, given  the large amount of interoperable  deployment by multiple
vendors!










Re: The requirements cycle (Re: WG review: Layer 2 Virtual....)

2003-07-03 Thread Eric Rosen

Harald did  any of  the technologies  change  because of  issues that  were
Harald discovered  in  the discussions  that  were  needed  to clarify  the
Harald requirements and framework? 

No. 

Harald If no - why did it take any time at all to produce them? 

Not sure what you mean; it always takes time to produce a document, even if
the document is just a "rock fetch".

Harald there is  little that  the IESG can  do when  the WG knows  what the
Harald comments are and chooses not to act upon them for 2-5 months. 

This reminds me of Dilbert's pointy-haired boss, who says "your project is
late, so I want you to give me hourly status reports."  When we have
documents which aren't really necessary in the first place, which ultimately
will not have any impact on the technology, but which need to be massaged
and remassaged so as to get them past the IESG, I think it's quite clear
where the responsibility for the delay lies.

Harald And  I don't  understand why  WG updates  to fix  problems  take 2-3
Harald months  per cycle  when  the WG  thinks  that it's  important to  be
Harald finished with the docs. 

Well, each  objection from  the IESG  needs to be  discussed and  a response
crafted.  

Harald is  the IESG  supposed  to care  about  inconsistencies between  the
Harald requirements (which  are what the  *WG* thinks should  be satisfied)
Harald and the technologies that will be proposed for standardization? 

Sure; but the reqs, framework, protocol specs, and applicability statements
were all ready 18 months ago.  They could have been submitted as a group.
But we were told, "first you need to submit the first document, then a year
or so later you can submit the second."  This is a very peculiar way to
encourage progress ;-)  From the WG perspective, the specs have been ready
for review forever, but the IESG has refused to look at them because of
bogus process issues.  And then they turn around and accuse the WG of making
slow progress!









Re: WG review: Layer 2 Virtual Private Networks (l2vpn)

2003-06-30 Thread Eric Rosen

Harald It might have something to do with the fact that the WG has not
Harald requested that the IESG process these drafts; if the WG has not
Harald come to consensus on asking for the drafts to be published, I'm
Harald afraid the IESG cannot do anything.

I consider this answer to be rather disingenuous.

The WG has not requested that the IESG process these drafts because the WG
chairs have told the WG that the ADs have told them that the drafts in
question cannot be submitted to the IESG until numerous other drafts that no
one will ever read (requirements, framework, architecture) have been
approved by the IESG.  Of course, most of those numerous other drafts were
completed about 18 months ago, though a few of them have now come out of the
seemingly endless "IESG reviews, WG makes minor change, IESG reviews, WG
makes minor change" cycle.

So you can't honestly answer Yakov by saying "the WG hasn't asked us to
process these drafts"; the answer to Yakov's question would be an
explanation of (a) why all these prior drafts are really necessary, (b) why
such a long review cycle is reasonable, and (c) why it is reasonable to
delay starting to process the protocol specs until the prior specs are
already in the RFC Editor's queue.











Re: WG review: Layer 2 Virtual Private Networks (l2vpn)

2003-06-18 Thread Eric Rosen

People need to understand that the purpose of the Pseudowire stuff (PWE3) is
to enable service providers to offer existing services over IP networks, so
that they can convert their backbones to IP without first requiring that all
their customers change their access equipment.  Producing the protocols
needed to enable migration from legacy networks to IP networks seems to me
to be quite in the mainstream of the IETF.  The technical issues, involving
creating tunnels, multiplexing sessions through tunnels, and performing
service emulation at the session endpoints, are all issues that the IETF has
taken up in the past; there is nothing radically different going on here.
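For readers who haven't followed PWE3, the layering can be sketched in a few
lines of Python.  This is a toy model of the structure (tunnel, demultiplexer,
emulated payload), not the actual PWE3 encapsulation, and all the names are
made up:

    # Toy model of the pseudowire layering: a tunnel between two provider
    # edges, with many emulated services multiplexed through it, each
    # identified by its own demultiplexer value.

    from dataclasses import dataclass

    @dataclass
    class Packet:
        tunnel_header: str   # gets the packet across the IP backbone
        pw_demux: int        # identifies one pseudowire within the tunnel
        payload: bytes       # the emulated service's native frame

    class Tunnel:
        def __init__(self, ingress, egress):
            self.header = f"{ingress}->{egress}"
            self.pseudowires = {}             # demux value -> service type

        def add_pseudowire(self, demux, service):
            self.pseudowires[demux] = service   # e.g. "ethernet", "frame-relay"

        def encapsulate(self, demux, native_frame):
            return Packet(self.header, demux, native_frame)

        def decapsulate(self, pkt):
            # The egress uses the demux value to hand the payload to the
            # right service-emulation function.
            return self.pseudowires[pkt.pw_demux], pkt.payload

    t = Tunnel("PE1", "PE2")
    t.add_pseudowire(16, "frame-relay")
    t.add_pseudowire(17, "ethernet")
    pkt = t.encapsulate(17, b"customer ethernet frame")
    print(t.decapsulate(pkt))    # ('ethernet', b'customer ethernet frame')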

(To those who think that other standards organizations can do this better:
representatives from those other organizations do feel free to drop in on
the WGs in question, so we are familiar with their level of expertise on
IP.  Let's just say that if we want to aid in the migration of legacy
networks to IP, these other organizations are not what we would want to rely
on.)

One can think of the VPWS work  in L2VPN as taking the PWE3 stuff and adding
some IP-based auto-discovery  mechanisms to facilitate provisioning.  Again,
this isn't out of line with what the IETF typically does. 

The VPLS work is  more difficult to position within the IETF,  as it is hard
to  avoid a lot  of stuff  that overlaps  with IEEE  (a standards  org which
really is  worthy of  respect, unlike some  others), and  extending ethernet
over an IP network  is arguably a bad idea.  On the  other hand, the purpose
is the  same as  indicated above; service  providers can migrate  from their
Q-in-Q  ethernet networks  to  IP networks,  without  first requiring  their
customers to switch from an ethernet service to an IP service.  










Re: v6 support (was Re: Thinking differently about the site local problem (was: RE: site local addresses (was Re: Fw: Welcome to the InterNAT...)))

2003-04-03 Thread Eric Rosen

Steve I  can't get upset  about Microsoft  declining to  ship poorly-tested
Steve code.  Given how many security  holes are due to buggy, poorly-tested
Steve programs, I applaud anyone who takes that seriously. 

Well, suppose they were to ship IPv6 without IPsec, on the grounds that they
didn't have the testing resources  for IPsec.  Would you still be applauding
them?  Or would you be questioning whether they have their priorities right? 

Features always fall off due to the inability to allocate sufficient testing
resources,  but a vendor  does have  some choice  over which  features those
are. 







Re: IETF Sub-IP area: request for input (fwd)

2002-12-10 Thread Eric Rosen

I might as well chime in on the actual question that was asked. 

I guess I disagree with the majority of folks working in the sub-IP area.  I
never thought it made any sense to move all those working groups out of
their original areas into a "sub-IP" area, and I never understood the
sub-IP area "hourglass architecture" that was foisted on us by the IESG.
So I've never thought that it makes much sense to have a separate "sub-IP"
area, and I don't think it makes sense to keep it as a separate area in the
future.

The advantage of maintaining the status quo is that everyone has gotten used
to the current ADs, and most people figure that any change will make things
worse.  When new ADs get involved, they tend to "reinterpret" the charters
and disrupt the work.  But if one ignores the possibility of short-term
personality issues, I think it would be better to choose established areas
for the MPLS, CCAMP, and PPVPN WGs.







Re: IETF Sub-IP area: request for input

2002-12-10 Thread Eric Rosen

Lars An example  is PPVPN, which is  chartered to work  on specification of
Lars requirements, with new protocol work being explicitly out-of-scope. 

Lars However, some  current PPVPN  IDs (and several  more targetted  at it)
Lars read more like solution documents

From the PPVPN charter: 

This  working  group  is  responsible  for  defining  and
specifying  a limited  number  of sets  of solutions  for
supporting provider-provisioned  virtual private networks
(PPVPNs).

It is  somewhat difficult to define  and specify a  solution without writing
something that reads like a solution document. 

Lars for various existing vendor schemes, 

From the PPVPN charter:

The working group is  expected to consider at least three
specific approaches

Various existing vendor schemes are then explicitly mentioned. 

Lars new protocol  work being explicitly out-of-scope  [but PPVPN documents
Lars are] specifying packet headers and MIBs

In some cases the PPVPN docs do have protocol work in them which needs to be
moved to  another working group.  But  I don't think  this is a case  of the
group  going  beyond its  charter,  it's just  a  matter  of the  individual
contributors getting the time to divide up the documents properly.  In other
cases, the PPVPN docs just specify  how existing protocols are to be used to
achieve various functions.  

I don't think defining a MIB counts as new protocol work.




Re: a personal opinion on what to do about the sub-ip area

2002-12-09 Thread Eric Rosen

 The  workings  of  special  interest  groups  can  and  often  do  have  a
 significant effect  on the general  population, but nobody can  afford the
 time and  energy it takes  to keep track  of every special  interest group
 that might affect him.

Often  it  seems as  though  the  WGs reflect  the  broad  consensus of  the
community, and the IESG is the special interest group.




Re: IETF Sub-IP area: request for input (fwd)

2002-12-05 Thread Eric Rosen

Aaron I can easily imagine this is so,  although, as I say above, I have no
Aaron facts to back this up. 

Wouldn't it be nice if people  based their feedback on facts, rather than on
what they imagine!  Well, at least you're honest about it ;-)

Aaron If sub-ip represents technologies that don't, and will never, get
Aaron involvement from "a broad spectrum of the IETF community", we
Aaron shouldn't institutionalize it.

How would you define "a broad spectrum of the IETF community"?





Re: Why IPv6 is a must?

2001-11-29 Thread Eric Rosen

Sure, in  theory one could add  zillions of new  globally routable addresses
without increasing the  size of the routing tables  in the default-free zone
at all. 

The skepticism is about whether there is (or even could be) a realistic plan
to make this happen.
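The "in theory" part is just arithmetic about aggregation, easy to show in
Python.  (2001:db8::/32 is the IPv6 documentation prefix, standing in here
for some provider's aggregate.)

    # One aggregate announced into the default-free zone covers an enormous
    # number of globally routable addresses with a single routing table
    # entry -- provided everything under the aggregate is actually
    # reachable through whoever announces it.

    import ipaddress

    aggregate = ipaddress.ip_network("2001:db8::/32")
    print("DFZ routing entries:", 1)
    print("Addresses covered:  ", format(aggregate.num_addresses, ".3e"))
    # Addresses covered:   7.923e+28

The arithmetic is not the problem; the plan for keeping the aggregates
intact in practice is.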




Re: Why IPv6 is a must?

2001-11-28 Thread Eric Rosen


Brian NAT has simply pushed us back to the pre-1978 situation. 

On the contrary, NAT has  allowed us to maintain global connectivity without
requiring every system  to have a globally unique address.   NAT is what has
prevented us from returning to the pre-1978 situation.  
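In case the mechanism at issue is unclear, here is a toy NAT table in Python
showing how many hosts share one globally unique address.  It assumes the
usual port-translation behavior and ignores timeouts, checksums, and
everything else a real NAT must do:

    # Toy NAPT: private hosts behind one global address keep outbound
    # global connectivity; the translator rewrites (address, port) pairs
    # and remembers the mapping so replies get back in.

    class Nat:
        def __init__(self, global_addr):
            self.global_addr = global_addr
            self.next_port = 50000
            self.mapping = {}          # public port -> (private ip, private port)

        def outbound(self, private_ip, private_port):
            port = self.next_port
            self.next_port += 1
            self.mapping[port] = (private_ip, private_port)
            return self.global_addr, port  # what the rest of the Internet sees

        def inbound(self, public_port):
            return self.mapping[public_port]   # replies find their way back

    nat = Nat("192.0.2.1")                 # one globally unique address...
    print(nat.outbound("10.0.0.5", 4321))  # ('192.0.2.1', 50000)
    print(nat.outbound("10.0.0.6", 4321))  # ('192.0.2.1', 50001)
    print(nat.inbound(50000))              # ('10.0.0.5', 4321)

(The obvious limitation: a mapping exists only after an outbound packet
creates it, which is what the arguments about inbound connections are
about.)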

That's not  to say  it wouldn't be  better to  have a million  more globally
unique addresses.  Sure  it would, unless that would  stress out the routing
system  unduly.  If  adding a  million more  globally unique  addresses will
stress out  the routing system, then  one might argue that  a solution which
provides the  addresses but doesn't  change the routing system  isn't really
deployable, and hence  doesn't really solve the addressing  problem. I think
this is the point  that Noel keeps trying to drive home,  and I'm not sure I
understand what the answer is supposed to be. 




Re: trying to reconcile two threads

2001-11-28 Thread Eric Rosen


Fred I  see a  longish  thread about  the  fact that  some cable  companies
Fred apparently are desperate  to charge per IP address  (something one can
Fred only do if IP addresses are in fact a scarce resource)

I think you miss the point here.  The cable companies want to charge per
computer, and the only way they can do this is to count the number of IP
addresses they see.  If you use NAT, they see only one.  This has nothing
whatsoever to do with scarcity of addresses.

Even  with IPv6, they  would still  want to  charge by  the IP  address, and
people would still use NAT to save money.  




Re: Why IPv6 is a must?

2001-11-28 Thread Eric Rosen


Eric NAT is what has prevented us from returning to the pre-1978 situation.  

Keith this is true only if you believe that [blah blah blah]

The situation today with NAT is that hosts in separate realms can
communicate in only 99% of the desired applications, though perhaps this
falls to 80% if one stubbornly ignores the existence of tunneling and port
redirection.

Pre-1978,  you were  either directly  attached to  the Arpanet  or  you were
pretty much out of luck. 

You have to be very much in  the grip of a theory to regard these situations
as comparable. 

Granted, it's  easier to  talk about the  evils of  NAT than to  explain how
billions of  new routable addresses  are going to  be added to  the existing
routing system. 








Re: filtering of mailing lists and NATs

2001-05-22 Thread Eric Rosen


So, here are the choices:

1. Save thousands of people from having to deal with multiple spams per day,
   at the cost of presenting a minor inconvenience to a few, or

2. Require thousands  of people to receive  and deal with spam  (or to learn
   all about mail filtering), in order to avoid inconveniencing a few.

Easy decision to make.  For every bit of whining by the usual suspects,
there are thousands of folks who are very happy to have the spam kept out
of their mailbox automatically.  (Every mailing list manager knows that the
whining by Keith and Lloyd is nothing compared to the whining by the list
members as they get spammed multiple times per day.)

Indeed, this is a lot like the arguments re NAT.  There are the thousands of
people it helps, vs. the few who are yelling that the sky will fall if it is
not stamped out.





Re: filtering of mailing lists and NATs

2001-05-22 Thread Eric Rosen


Christian I  would   much  rather  receive  and   delete  another  annoying
Christian proposition to get rich quick or see lurid pictures than tolerate
Christian any form of censorship.  

As has been pointed out, the non-member messages can be moderated.  It takes
about one second to look at a message and tell whether it is unsolicited
commercial email or not.  So the downside is that a non-member message may
be delayed for a bit until the moderator gets to it.  I wouldn't call that
censorship.  (I think one has to be very privileged indeed to confuse a
small inconvenience with censorship.)

Unless, of  course, you think that  people have a RIGHT  to send unsolicited
commercial email to IETF mailing lists. 





Re: filtering of mailing lists and NATs

2001-05-22 Thread Eric Rosen

Maurizio but this means 
Maurizio - that there is a person who has the right to decide whether the 
Maurizio message is spam or not
Maurizio - that this person is willing to bear the burden for the sake of the 
Maurizio whole community.

Maurizio I happen  to do this  for some lists,  but it's a nuisance,  I may
Maurizio assure you.

I do this  for the mailing list of  the MPLS working group, so  I'm aware of
what a nuisance it is.  But as far as mailing list management goes, it's not
nearly as big a nuisance as trying to figure out which of the error messages
to owner-mpls  are bogus  and which  are real.  (The  mailing list  has 3000
members and each message to it results in 100 error messages.)

It's not hard to decide whether a particular message is unsolicited
commercial email or not; that's not something that people disagree about.






Re: specific questions about RFC publication and I-Ds

2000-09-28 Thread Eric Rosen

Pete Does this particular entry mean draft-ietf-mpls-arch-07.txt is being
Pete held for normative reference to the  I-Ds listed below it, or that all
Pete those I-Ds are being held for normative references? 

The meaning  of that  queue entry  is that the  first document  mentioned is
being  delayed because  of  normative  references to  the  I-Ds that  appear
beneath it in the list. 

Pete Assuming that the entry above indicates that the first I-D (in this
Pete case, draft-ietf-mpls-arch-07.txt) is being held until the other
Pete I-Ds become RFCs as well, doesn't this mean that it might be years
Pete before an approved I-D would ever be published as an RFC?

In fact, this is the actual situation.
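The rule being described is a dependency wait: an approved document sits in
the queue until everything it normatively references has itself been
published.  A sketch in Python, with made-up reference lists (draft-aaa and
draft-bbb are hypothetical stand-ins for the I-Ds listed below the entry):

    # Hold-for-normative-reference, sketched.  A document is publishable
    # only when all the I-Ds it normatively references are already RFCs;
    # if one reference never advances, everything upstream of it waits
    # indefinitely -- the "might be years" situation.

    normative_refs = {
        "draft-ietf-mpls-arch": ["draft-aaa", "draft-bbb"],   # held for these
        "draft-aaa":            [],
        "draft-bbb":            ["draft-aaa"],
    }
    published = {name: False for name in normative_refs}

    def publishable(doc):
        return all(published[ref] for ref in normative_refs[doc])

    # Publish whatever becomes eligible, until nothing changes.
    progress = True
    while progress:
        progress = False
        for doc in normative_refs:
            if not published[doc] and publishable(doc):
                published[doc] = True
                progress = True

    print(published)   # everything True in this toy run; in practice one
                       # stalled reference keeps the whole chain unpublished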