Please see inline.

From: Tony Przygienda <[email protected]>
Sent: Friday, July 18, 2025 2:24 AM
To: Les Ginsberg (ginsberg) <[email protected]>
Cc: [email protected]; lsr <[email protected]>
Subject: Re: [Lsr] Review comments for draft-prz-lsr-hierarchical-snps-00: 
Detailed comments

Inline


On 17 Jul 2025, at 23:15, Les Ginsberg (ginsberg) 
<[email protected]<mailto:[email protected]>>
 wrote:

Here are some more detailed comments – many of which are editorial.

Proceeding in section order through the document.

Introduction

It seems to me that the mechanisms defined here are potentially applicable to 
PSNPs as well as CSNPs - but you do not discuss this. Is this intentional?

The use of the term "level" in regards to the merkle hash is a bit confusing 
given that we have a protocol which supports multiple levels. Perhaps replace 
"level" with "ordinal"?? (Open to other suggestions)


Yeah, I agree, a synonym for level would be better. Ordinal is a bit stilted ;-) 
[though stilted language is normally my specialty ;-)]. What about “rank”, 
“grade”, or “degree”?

[LES:] “degree” works for me



Section 2 Suggested Dynamic Leaf Partitioning

Paragraph 2 has an error:

"LSP ID + Fragment + Seq# + CSUM + Lifetime length which amounts to 8 + 2 + 2 + 
2 = 14 bytes each"

should be

"LSP ID + Fragment + Seq# + CSUM + Lifetime length which amounts to 8 + 2 + 4 + 
2 = 16 bytes each"

i.e., the Seq# is 4 bytes in length
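The corrected arithmetic is easy to sanity-check with a quick sketch (field widths per the ISO 10589 SNP "LSP Entry" layout; this is an illustration, not draft text):

```python
import struct

# IS-IS SNP "LSP Entry" fields (ISO 10589): Remaining Lifetime (2 bytes),
# LSP ID (8 bytes = 6-byte System ID + 1-byte pseudonode ID + 1-byte fragment),
# Sequence Number (4 bytes), Checksum (2 bytes).
LSP_ENTRY = struct.Struct(">H8sIH")  # big-endian / network byte order

print(LSP_ENTRY.size)  # 16 bytes per entry, not 14
```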


Ah, perfect catch

As to the previous note, observe that the CSUM, e.g., is NOT guaranteed to be 
unique either, so there is the corner-corner case in the protocol where some 
node reboots and issues the same LSP with the same sequence# but different 
content _but_ the checksum ends up matching.

If you consider that “fine” then there is the discussion whether the HSNP 
should just hash the CSUMs ;-) but I’d think that would start to be a bit of a 
lax approach ;-(
[LES:] See my response to Tony Li. I think the consequences of duplicate 
checksums are much less severe.



Definition of the hashes

You describe an encoding of:

Start ID (7 bytes)
End ID (7 bytes)
Hash (4 bytes)
Total 18 bytes

This suggests two things:


  *   The ID range is meant to include all LSPs generated by the nodes 
including pseudo-node LSPs
  *   The range is inclusive

But the actual encoding shown in Figure 2 puts "00" into the 7th ID byte 
(pseudo-node ID) which seems very odd. It suggests either that you do not 
intend to include pseudo-node LSPs in your compression or that you are 
deviating from existing representation of the range (in existing CSNPs) by 
having the end ID identify only the last node - not the actual last LSPID that 
is included.

Actually, the intention would be to go to 6 bytes here and include all the 
pseudo-nodes in the range of the node, given that there is so little info in 
pseudo-nodes.

Do you think that’s a reasonable idea?



There is indeed virtually no discussion of pseudo-node LSPs in the document.

My intuition suggests that a more accurate representation of the range would 
use only the 6 bytes of system ID and that there is no need to include the 
pseudo-node ID, as it is being ignored anyway, i.e.,

+--------------------------------------------+
|  Start System ID: 1000.0000.008E.00        |
+--------------------------------------------+
|  End System ID:   1000.0000.00A0.00        |
+--------------------------------------------+

actually is meant to indicate all LSPs (including pseudo-node LSPs) generated
by the system ids in the range of (1000.0000.008E - 1000.0000.00A0) (inclusive) 
- in which case:

+--------------------------------------------+
|  Start System ID: 1000.0000.008E           |
+--------------------------------------------+
|  End System ID:   1000.0000.00A0           |
+--------------------------------------------+

would make more sense.

Am I understanding this correctly?


Ah, I see, reading on you suggest same thing. Yes, I’m all for it
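A rough sketch of the encoding agreed on above, assuming a 6-byte System ID range that implicitly covers every LSP (including pseudo-node LSPs) of the systems in the range; the field layout and function name are illustrative only:

```python
import struct

def pack_hash_entry(start_sysid: bytes, end_sysid: bytes, merkle_hash: int) -> bytes:
    """Pack one hash entry: 6-byte start/end System IDs plus a 4-byte hash.

    The range is inclusive and implicitly covers all LSPs (including
    pseudo-node LSPs) originated by systems in [start, end], so the
    pseudo-node ID byte is simply omitted from the encoding.
    """
    assert len(start_sysid) == len(end_sysid) == 6
    return struct.pack(">6s6sI", start_sysid, end_sysid, merkle_hash)

entry = pack_hash_entry(bytes.fromhex("10000000008e"),
                        bytes.fromhex("1000000000a0"),
                        0xDEADBEEF)
print(len(entry))  # 16 bytes per entry instead of 18
```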



Packing/Re-packing

You state

" it is desirable to produce one packet on a miss on the merkle hash of first 
level leaf and hence such leaves will pack initially 80 LSP fragments"

This seems targeted at an environment where most systems generate fewer than 
80 LSPs per node (including pseudo-node LSPs). This is probably a reasonable 
assumption for many deployments/nodes, but it certainly does not apply to ABRs 
(L1/L2 routers) that are leaking a large number of prefixes, nor to nodes with 
a large number of neighbors. I am not arguing about the number - nor am I 
quibbling about the calculation of how many hashes can fit in an lsp-mtu sized 
HSNP. I just find that the discussion of packing/repacking (Section 2.1) pays 
little attention to the cases which don't fit within the assumptions.


Given that there is an understandable desire to have neighbors choose the same 
ranges of IDs for a given hash ordinal (as this limits the cases in which the 
hash has to be recalculated on receipt of an HSNP) it would seem beneficial if 
the "rules" for packing/repacking were more strict. Otherwise, the likelihood 
that different implementations will realize the benefits of consistent packing 
decreases significantly.

Do I understand this correctly?

Yes. And it bears discussion. I did not go to excruciating levels of 
correctness in this draft because I assumed we would have further discussions 
like this one.

So I hope you are in Madrid because that needs a beer and a napkin

[LES:] Sadly – won’t be in Madrid. But hopefully we can arrange a discussion 
soon.
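As a sanity check on the "80 LSP fragments per first-level leaf" figure discussed above, a minimal sketch, assuming a nominal 1492-byte lsp-mtu, the corrected 16-byte per-fragment summary, and an approximate fixed SNP header overhead (all three figures are assumptions for illustration):

```python
LSP_MTU = 1492     # common IS-IS default originating LSP buffer size (assumed)
ENTRY_LEN = 16     # Lifetime(2) + LSP ID(8) + Seq#(4) + CSUM(2)
PDU_OVERHEAD = 33  # approximate fixed CSNP header size, common header included

fragments_per_leaf = 80
leaf_bytes = fragments_per_leaf * ENTRY_LEN
print(leaf_bytes)                            # 1280
print(leaf_bytes + PDU_OVERHEAD <= LSP_MTU)  # True: a leaf miss resolves in one PDU
```

Under these assumptions the 80-fragment leaf indeed fits a single PDU, but a node originating more than ~80 fragments (an ABR leaking many prefixes, say) spills over, which is where stricter packing rules would matter.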

   Les




Figure 1

This shows an example that includes

"SEQNR: 0"

but LSP sequence numbers begin at 1 - 0 is reserved for SNP entries to indicate 
that the sender does not have the specified LSP.

Good catch again



Figure 2

The example starts with what I assume to be the indication of the range of LSPs 
in the HSNP:

+--------------------------------------------+
|  Start System ID: 0000.0000.0000.00        |
+--------------------------------------------+
|  End System ID:   0000.0000.00A0.00        |
+--------------------------------------------+

But as the intent is to identify the first/last LSPIDs included, it would make 
more sense to omit the pseudo-node ID:

+--------------------------------------------+
|  Start System ID: 0000.0000.0000           |
+--------------------------------------------+
|  End System ID:   0000.0000.00A0           |
+--------------------------------------------+

Subsequent entries in the figure are meant to represent a particular hash
TLV, but for some reason the system IDs are not within the range identified for 
the PDU.


+--------------------------------------------+
|  Start System ID: 1000.0000.0000.00        |
+--------------------------------------------+
|  End System ID:   1000.0000.0002.00        |
+--------------------------------------------+
|              Merkle Hash                   |
+--------------------------------------------+

Is this a typo and the IDs are actually meant to be

+--------------------------------------------+
|  Start System ID: 0000.0000.0000.00        |
+--------------------------------------------+
|  End System ID:   0000.0000.0002.00        |
+--------------------------------------------+

??

The same question applies to Figure 3.

Again, agreed that the pseudo-node ID should be omitted.

The examples were hand-generated, so I'm sure they're full of mistakes.



Section 5

I assume the encoding of the new TLV in IIHs will be specified in a future 
revision - but what is discussed is the advertisement of the maximum level (ordinal) 
of hashes supported. However, it seems it is also necessary for the nodes to 
advertise their choice of the range of IDs covered in a given hash, else each 
node would have to calculate hashes based on both its local choice and the 
choice of each of its neighbors. You do discuss this in Section 5.2 - but you 
don't mention it as something which should be advertised/negotiated in the new 
IIH TLV - though I would expect it to be there.

I do not think we will advertise ranges, since this will start to take up a lot 
of space and because we will never know when things are synchronized without a 
complex ack'ing protocol.


Hence the choice of the hash function. I envision that an implementation will 
keep something like the hash for all LSPs of a node, including the 
pseudo-nodes, and then when there is a range mismatch it can very quickly 
adjust what it has, or hash a couple of things together to check it.






   Les

_______________________________________________
Lsr mailing list -- [email protected]<mailto:[email protected]>
To unsubscribe send an email to [email protected]<mailto:[email protected]>
