>> HNCP is a standards track protocol, and there's nobody left who's
>> willing and competent to work on a new revision.

> Yes, of course. We can never change a standards track protocol. That
> would be wrong. :)

My wording was perhaps badly chosen.  Sorry for that.

I meant to say that I don't currently see anyone who would be both willing
and able to (1) change the HNCP spec to add application-layer
fragmentation, (2) update the hnetd implementation to obey the new
protocol, and (3) go through the somewhat time- and energy-consuming
process required to publish a new Standards Track protocol.

(To be clear -- I declare myself incompetent on all three points above.
The most I could conceivably do would be to review a new spec and update
shncpd so that it interoperates with a new revision of hnetd.)

> What I’m trying to understand is how bad a problem this is.

My understanding is that while HNCP should have no trouble scaling to
networks with many nodes, it is not designed to carry large amounts of
data per node.

This could cause trouble in the following cases (a rough size estimate
follows the list):

  - if a single node has hundreds of HNCP neighbours (e.g. because it is
    connected to a large switch or serves as a tunnel server);

  - if a single node announces large numbers (dozens) of external
    connections; or

  - if a protocol extension dumps large binary blobs into HNCP.
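
For a sense of scale, here is a back-of-the-envelope sketch in Python.
The encoding details are my reading of RFCs 7787/7788, and the 64 KiB
ceiling is an assumption (a node's full TLV set has to fit in a single
UDP datagram), so treat the numbers as orders of magnitude, not gospel:

    # Rough estimate of the HNCP per-node data consumed by peers.
    # Assumptions (mine, from a reading of RFCs 7787/7788, not normative):
    #   - a TLV is a 4-byte header plus its value, padded to 4-byte
    #     alignment;
    #   - an HNCP Peer TLV value is 12 bytes (4-byte peer node identifier,
    #     4-byte peer endpoint identifier, 4-byte local endpoint
    #     identifier);
    #   - a node's whole TLV set must fit in one UDP datagram, so roughly
    #     64 KiB at the very most, and much less if you want to avoid
    #     IPv6 fragmentation (1280-byte minimum MTU).

    TLV_HEADER = 4
    PEER_VALUE = 12

    def tlv_size(value_len: int) -> int:
        # Encoded size of one TLV: header plus value, padded to 4 bytes.
        return TLV_HEADER + (value_len + 3) // 4 * 4

    def peer_budget(limit: int) -> int:
        # How many Peer TLVs fit in the given per-node data budget,
        # ignoring everything else the node must also announce.
        return limit // tlv_size(PEER_VALUE)

    for limit in (1280, 65535):
        print(f"{limit:>6}-byte budget -> about {peer_budget(limit)} peers")

This prints about 80 peers for a 1280-byte budget and about 4095 for
64 KiB; so a node with hundreds of neighbours already needs fragmented
datagrams, and once external-connection data or binary blobs share the
same budget, the hard ceiling is not far off.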
  
-- Juliusz
