I would be hesitant to introduce a situation where a load balancer is forced to use memory, especially memory it doesn’t fully control. It may be fine as one option, but it should not be the only one.
Aside from potential attacks, there is also the hardware cost/complexity. SHA256 and AES are pretty standard in almost anything, but lots of RAM is a cost driver. It is really hard to estimate crypto vs. lookup overhead, but it is far from a given that lookup will be faster once the tables grow large. Less coordination is a good thing, though. I’m afraid that without an out-of-band payload to coordinate, there will have to be a choice between configuration and state.

Mikkel

> On 15 Jan 2021, at 21.04, Martin Duke <[email protected]> wrote:
>
> To muddy this discussion a little further, after a little more thinking I
> believe there's a way to generalize this approach to all three of the
> original algorithms, encrypted or unencrypted, so there is never a need to
> manually allocate server IDs.
>
> Again, the main tradeoff here is simpler configuration vs. more complexity
> and state at the load balancer.
>
> As a document organization matter, rather than have six different algorithms
> I would prefer to specify three with a separate section describing the two
> separate ways to allocate a server ID.
>
> But it is not too late to yell "stop" at this multiplicity of options if
> people feel the tradeoffs are clear-cut in one way or the other.
>
> On Mon, Jan 11, 2021 at 6:50 PM Martin Duke <[email protected]
> <mailto:[email protected]>> wrote:
> Yes. Do you have an alternate suggestion?
>
> On Mon, Jan 11, 2021 at 5:54 PM Christian Huitema <[email protected]
> <mailto:[email protected]>> wrote:
>
> On 1/11/2021 5:22 PM, Martin Duke wrote:
>> Perhaps I should make some edits for clarity!
>>
>> On Mon, Jan 11, 2021, 16:52 Christian Huitema <[email protected]
>> <mailto:[email protected]>> wrote:
>> I am looking at the text of section 4.2, and I am not sure how I would
>> implement that. What should be the value of the config rotation bits in a CID
>> created by the server?
>>
>> Any config includes the corresponding CR bits, and when generating the CID
>> it would use those bits.
>>
>> The confusing part is that, for this algorithm, a usable SID has to be
>> extracted from any CID, hence all the weird stuff about CIDs with undefined
>> configs.
>>
>> Aside from that, it's like PCID: any server-generated CID uses the CR bits
>> in the config, optional length encoding, SID, server-use octets.
>>
>> Should the 6 other bits in the first octet be set to a CID Len or to a
>> random value?
>>
>> It depends on the rest of the config, as with the other algorithms.
>>
>> Is the timer set when the server ID is first added to the table, or is
>> the timer reset each time a packet is received with that CID? In the latter
>> case, is it reset when any packet is received, or only when a "first
>> initial" packet is received?
>>
>> When any packet is received with that SID (not CID), the expiration is
>> refreshed.
> OK. So we can have the following:
>
> 1) Server learns of Server-ID = X.
>
> 2) Server creates a new CID with that server ID, uses it to complete the handshake.
>
> 3) Client maintains a long-running connection with that CID.
>
> 4) Server keeps receiving messages with a CID pointing to Server-ID = X.
>
> 5) Server-ID = X never expires.
>
> Is that by design?
>
> -- Christian Huitema
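For anyone following along, here is a minimal sketch of the two pieces being discussed: the plaintext-style CID layout Martin describes (CR bits, optional length self-encoding, SID, server-use octets) and a load-balancer SID table whose entries are refreshed by any packet carrying that SID, which is the behavior behind step 5 above. The field widths, names, and refresh policy are illustrative assumptions on my part, not text from the draft.

import random
import time

def make_plaintext_cid(cr_bits, cid_len, encode_len, sid, server_use):
    # Assumed layout: 2 config-rotation bits in the top of the first octet,
    # the remaining 6 bits either self-encoding the CID length or random,
    # depending on the config, followed by the SID and server-use octets.
    first = (cr_bits & 0x3) << 6
    if encode_len:
        first |= (cid_len - 1) & 0x3F   # optional length self-encoding
    else:
        first |= random.getrandbits(6)  # otherwise random filler bits
    cid = bytes([first]) + sid + server_use
    assert len(cid) == cid_len
    return cid

class SidTable:
    """Load-balancer side: dynamically learned SID -> server mapping with expiry."""
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.entries = {}   # sid -> [server, expiry]

    def learn(self, sid, server):
        # Step 1: the load balancer learns of Server-ID = X.
        self.entries[sid] = [server, time.monotonic() + self.lifetime]

    def route(self, sid):
        entry = self.entries.get(sid)
        if entry is None:
            return None     # unknown SID: fall back to default routing
        # Steps 4-5: any packet whose CID decodes to this SID refreshes the
        # expiry, so a single long-running connection keeps the entry alive.
        entry[1] = time.monotonic() + self.lifetime
        return entry[0]

    def sweep(self):
        # Periodic cleanup: drop SIDs not seen within the lifetime.
        now = time.monotonic()
        self.entries = {s: e for s, e in self.entries.items() if e[1] > now}

If the refresh rule is as quoted above, step 5 follows directly: the entry for Server-ID = X only ages out once no packet carrying that SID has been seen for the full lifetime.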
