On Fri, Jun 12, 2020 at 07:45:00PM +0000, Les Ginsberg (ginsberg) wrote:
> Ben -
> 
> Top posting the open issues - as I think there are only two.
> 
> Issue #1
> 
> > > What is the scope over which the user-defined application bits are
> > > defined/allocated?
> > 
> 
> LES: User defined applications are - by definition - outside the scope of 
> this document/standardization.

Yes.

> We have simply defined the syntax of how to advertise associated bit mask.

But if my implementation is built around (say) the meanings of the bits in
the bit mask being local to a single area, and your implementation is built
around them having meaning local to a full domain, when you try to send me
something from out of area it's not going to work so well.  Even without
saying what any given bit means, we can still say that "all systems in the
same <X> need to interpret each bit in the same way."  What is the smallest
<X> such that deployment is safe?

> How, for example, interoperability is achieved is up to implementors.
> Whether they want an application to be area scoped or domain-wide is also up 
> to them.

I think leaving the question of area vs. domain up to the application is a
recipe for interop failures.

(A similar question presumably applies to the OSPF document, though I
didn't get a chance to actually review that one.)

> Issue #2
> 
> > Section 7.4
> > 
> > >    policy for this registry is "Standards Action" [RFC8126].  Bit
> > >    definitions SHOULD be assigned in ascending bit order beginning with
> > >    Bit 0 so as to minimize the number of octets that will need to be
> > >    transmitted.  The following assignments are made by this document:
> > 
> > > I worry a little bit that this will encourage codepoint squatting,
> > > though in theory the user-defined bitmask should avoid the need for
> > > squatting.
> 
> You replied:
> 
> " If everyone expects a sequential allocation policy, then when
> developing/testing, it's natural to look at "what's in the registry now"
> and write code that uses the next value.  If three people do that at the
> same time, we can end up with deployed software that has conflicting
> interpretation of that value.  (This has happened for TLS, yes, with three
> different extensions using the same value.)  My suggestion would be to not
> say "SHOULD be assigned in ascending bit order", and perhaps just note
> (without normative language) that it is advisable to allocate from the
> lowest byte that has bits remaining, to allow for compact encoding.  It's
> not actually necessary to be strictly sequential in order to minimize the
> number of octets transmitted."
> 
> LES: I understand this concern. How about if we change policy to "Expert 
> Review"?

I think that's moving on an orthogonal axis -- the TLS registry where we
had three extensions squatting on the same codepoint is a registry with
expert review.  (My expectation is that Standards Action is actually safer
in this regard than Expert Review would be, since there are enough points of
coordination in the process of advancing a standard that someone is likely
to notice the issue, but I don't have any hard data or proof to support
that expectation.)

To throw another concrete suggestion out there for focusing comments,
perhaps

OLD:

                                                              Bit
   definitions SHOULD be assigned in ascending bit order beginning with
   Bit 0 so as to minimize the number of octets that will need to be
   transmitted.  

NEW:

   Allocating all bits from the first octet before allocating any bits from
   the second octet, etc., provides for the smallest possible encoding when
   transmitted.

(After all, who is supposed to adhere to the original's "SHOULD"?  The IETF,
as it undertakes its Standards Actions?  Our track record on one RFC
constraining what future RFCs do is hardly perfect.)

-Ben

