I told myself I would stay out of this, but I can't help
but point out that if the SEAL-FS header were used instead
of the UDP/LISP headers, there would be only 4 bytes exposed
to corruption instead of 16. And if one of the SEAL-FS
header fields is corrupted, there is no danger of
unpredictable behavior.

The SEAL-FS header is formatted as follows:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|VER|I|                     Identification                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

If the I bit is flipped, the worst case is that the ETR
sends a spurious Information Request message, which should
cause no harm whatsoever to the ITR. If the Identification
field is corrupted, the worst case is that the ETR sends a
control message back to the ITR with an unrecognized
Identification, which the ITR would silently drop. If the
VER field is corrupted, there is a chance that the ETR
could try to process the packet according to a different
SEAL protocol version; but since LISP ITRs would only be
expected to implement VER=0 anyway, the packet would simply
be dropped.
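To make the layout concrete, here is a minimal sketch in Python of packing and unpacking the 4-byte header. The field widths are inferred from the diagram above (VER 2 bits, I 1 bit, Identification 29 bits); the function names and field values are purely illustrative, not from any SEAL-FS implementation:

```python
import struct

VER_SHIFT, VER_MASK = 30, 0x3   # VER: top 2 bits
I_SHIFT, I_MASK = 29, 0x1       # I: next 1 bit
ID_MASK = 0x1FFFFFFF            # Identification: low 29 bits

def pack_seal_fs(ver, i_bit, ident):
    """Pack VER, I, and Identification into the 4-byte header (network order)."""
    word = ((ver & VER_MASK) << VER_SHIFT) \
         | ((i_bit & I_MASK) << I_SHIFT) \
         | (ident & ID_MASK)
    return struct.pack("!I", word)

def unpack_seal_fs(data):
    """Return (ver, i_bit, ident) from the first 4 bytes of data."""
    (word,) = struct.unpack("!I", data[:4])
    return ((word >> VER_SHIFT) & VER_MASK,
            (word >> I_SHIFT) & I_MASK,
            word & ID_MASK)

hdr = pack_seal_fs(0, 1, 12345)
assert len(hdr) == 4                      # 4 bytes vs. 16 for UDP (8) + LISP (8)
assert unpack_seal_fs(hdr) == (0, 1, 12345)
```

The whole header is a single 32-bit word, which is where the 12-byte saving over the 8-byte UDP header plus 8-byte LISP header comes from.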

All this, plus a tidy 12 bytes saved per packet over what
the existing UDP/LISP encapsulation gives.

Fred
fred.l.temp...@boeing.com 
--------------------------------------------------------------------
IETF IPv6 working group mailing list
ipv6@ietf.org
Administrative Requests: https://www.ietf.org/mailman/listinfo/ipv6
--------------------------------------------------------------------