My general thoughts, in no particular order:

* I’d be more excited about using a generic mechanism to agree on SETTINGS pre-handshake (e.g., ALPS) than using a protocol-specific TLS extension for just this feature. Therefore, HTTP WG for this particular use-case; but if there were to be a generic mechanism, that would probably live in TLS WG with HTTP WG as a consumer.
* The value of a TLS extension is that it lets you agree on something before the application-layer protocol gets started. That enables things you can’t do in SETTINGS:
  * Wholesale replacement of the static table, to put different things in the low-bitcount real estate
  * Use of “extended” entries in the first flight, to fit the most requests in the first packet(s); note that this matters more with TCP/TFO than QUIC 0-RTT
  * Reducing the size of the static table for micro clients; see below
* If we’re not doing one of those things, SETTINGS is the right model for this. We decided in H3 that pre-application-data agreement wasn’t required for SETTINGS; do we want to revisit that?
* The value of the static table in the protocol drops as the table gets longer, because the entries get more expensive to reference. For a frequently-used field, the marginal impact may just be shortening the initial reference that puts the header into the dynamic table. How many bytes does this actually save in a real session?
* The difficulty of including the static table in a binary increases as the static table gets bigger, and this proposal could be very helpful with that problem – if a compact implementation could declare it only wants to use the first 32 entries, for example. The current version of QPACK imposes a minimum binary size on any H3 implementation, due to the requirement to keep the full static table accessible.
* Wholesale table replacement probably involves offer-select or a priority ordering, departing from the usual SETTINGS model.
* Agreement is somewhat implicit – if the decoder has advertised support for entry N, the encoder can only reference it if it knows the value of entry N. And if it knows the value of N, it would ordinarily be willing to accept references to N as well.
  * Exception: those micro-clients again. If a small binary contained a sparse selection of “useful values” from the static table, the encoder could use the entries it was aware of, but a decoder could only advertise support for the first contiguous portion of the table.

________________________________
From: TLS <tls-boun...@ietf.org> on behalf of Lucas Pardue <lucaspardue.2...@gmail.com>
Sent: Tuesday, September 26, 2023 9:48 PM
To: Martin Thomson <m...@lowentropy.net>
Cc: Mike Bishop <mbis...@akamai.com>; HTTP Working Group <ietf-http...@w3.org>; TLS List <tls@ietf.org>; Hewitt, Rory <rhewitt=40akamai....@dmarc.ietf.org>
Subject: Re: [TLS] New Internet Draft: The qpack_static_table_version TLS extension

Hi Rory,

I echo Watson and Martin: let's discuss this in the HTTP WG. As for a very brief technical response:
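The reference-cost point above can be made concrete. QPACK encodes a static indexed field line with a 6-bit prefix integer (RFC 9204, Section 4.1.1), so indices 0–62 fit in one byte while indices 63–190 need two; the later entries of the current 99-entry table already pay the two-byte cost. A minimal sketch of that encoding (illustrative code, not taken from any implementation or from the draft):

```python
def encode_prefix_int(value: int, prefix_bits: int, flags: int = 0) -> bytes:
    """Prefix-integer encoding per RFC 9204, Section 4.1.1: values below
    2^N - 1 fit in the first byte; larger values spill into 7-bit
    continuation bytes."""
    max_prefix = (1 << prefix_bits) - 1
    if value < max_prefix:
        return bytes([flags | value])
    out = [flags | max_prefix]
    value -= max_prefix
    while value >= 0x80:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    out.append(value)
    return bytes(out)

# Static indexed field line: first byte is 0b11xxxxxx ('1' bit, T=1),
# leaving a 6-bit prefix for the index.
STATIC_INDEXED = 0xC0
print(len(encode_prefix_int(17, 6, STATIC_INDEXED)))   # index 17: 1 byte
print(len(encode_prefix_int(98, 6, STATIC_INDEXED)))   # index 98: 2 bytes
```

The one-byte/two-byte boundary at index 63 is why the low-index "real estate" is so valuable, and why appending entries has diminishing returns.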
In general I'm supportive of the idea of more agility for the static table, but I think my motivations are different from the ones behind this proposal. For me, I'd like more domain-specific tables to be defined, and to have the possibility of asymmetric tables; but let's stick that on the side for now.

The QPACK static table description states "The order of the entries is optimized to encode the most common header fields with the smallest number of bytes." How does the proposed append-only table gel with this? I.e., each year the new most common fields are added to the end? At what point would it make sense to wipe out the cruft and define a newer table altogether?

I think what might be needed is a good amount of data modelling and simulation, sufficient to decide when there is activation energy to make changes. Perhaps the proposal is a compromise to make it low-effort enough for implementations to update that they don't need tremendous amounts of overwhelming evidence to keep up. IIRC, historically the effort to sample the Internet and propose a table has been quite high, and there have been some criticisms of the outputs. Given the HTTP WG has struggled with this aspect, I think it is decidedly impractical to make IANA or a designated expert solely responsible for deciding the QPACK entries. This is something that has to run through a consensus approach IMO, especially as lower entries are more valuable and could never be reclaimed.

I think the largest activation energy would be convincing endpoints to implement the negotiation mechanism, because pushing it into the TLS layer crosses implementation boundaries.

Watson asks why not SETTINGS. One answer is that it would require clients to wait for the server's settings, adding a delay that many clients don't incur today. Trading latency for a few bytes doesn't sound like a good tradeoff.
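On the data-modelling point, even a rough cost model shows the shape of the decision: compare a candidate field's cost as a full literal field line against its cost as a static-table reference, weighted by per-session frequency. The sketch below is a deliberately crude back-of-envelope model (it ignores Huffman coding and exact prefix-integer sizes; all names and numbers are illustrative, not from any measurement):

```python
def bytes_saved_per_session(occurrences: int, name_len: int, value_len: int,
                            indexed_cost: int = 2) -> int:
    """Rough savings if a field moved from a literal field line (approximated
    as two length prefixes + name + value bytes) to a static indexed
    reference. indexed_cost=2 pessimistically assumes a high (two-byte)
    static index."""
    literal_cost = occurrences * (2 + name_len + value_len)
    return literal_cost - occurrences * indexed_cost

# e.g. a field with a 10-byte name and 20-byte value seen 50 times:
print(bytes_saved_per_session(50, 10, 20))  # 50*32 - 50*2 = 1500 bytes
```

A model like this, fed with sampled frequency data, is one way to quantify whether a proposed table change clears the "activation energy" threshold the thread discusses.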
Indeed, this is why we optimised the static table for the client's first flight, before it knows whether the server supports dynamic QPACK or not.

Putting a client's static QPACK preference in a ClientHello is a fingerprinting vector, so that is a concern. Perhaps a middle ground is to use a SETTING but rope in the old ALPS proposal [1], so that the client learns the server's view as early as required, and the client sends its view in SETTINGS after protection is established.

Changing tack a bit, I don't understand why your proposal requires the client and server to agree on the extension value. It's a declaration of what the endpoint can support, and I don't think there is any issue in, e.g., the client being willing to receive references to entries greater than 99 while the server refuses to. Encoding asymmetry is already part of QPACK DNA.

Cheers,
Lucas

[1] - https://datatracker.ietf.org/doc/html/draft-vvv-tls-alps

On Wed, Sep 27, 2023 at 1:40 AM Martin Thomson <m...@lowentropy.net> wrote:

On Wed, Sep 27, 2023, at 01:32, Hewitt, Rory wrote:
> Apologies if I should respond directly to the mailing list - my old W3C
> profile has disappeared and I'm trying to get it back...

Just on this point: Watson added the HTTP working group, which I think is the right thing to do here. The maintenance of HTTP/3 now formally belongs in that group. The work of defining a TLS extension for that purpose would occur there (if indeed a TLS extension is the right choice, as Watson asks).

As for the W3C involvement, the HTTP working group is an IETF activity that - for historical reasons - uses a W3C-hosted mailing list. You don't need to be a W3C member to sign up for that list; the process is just a little different from other IETF lists. See https://lists.w3.org/Archives/Public/ietf-http-wg/ for details.
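The asymmetry Lucas describes reduces to a simple invariant: each endpoint independently advertises the highest static-table index it can decode, and an encoder may reference an entry only if the peer advertised it and the encoder itself knows the entry's value (Mike's "implicit agreement" point). A hypothetical sketch of that rule (these helper names are invented for illustration; the draft defines no such API):

```python
def may_reference(index: int, peer_max_index: int, known_entries: set) -> bool:
    """Encoder-side check, per the asymmetric model sketched above:
    a static reference to `index` is allowed only if the peer's decoder
    advertised support up to that index AND this encoder knows the entry."""
    return index <= peer_max_index and index in known_entries

# Each direction is governed solely by the receiving decoder's advertisement,
# so a client decoding up to 150 and a server capped at 99 interoperate fine.
encoder_table = set(range(151))
print(may_reference(120, peer_max_index=150, known_entries=encoder_table))  # True
print(may_reference(120, peer_max_index=99, known_entries=encoder_table))   # False
```

Nothing in this model requires the two advertisements to match, which is the substance of the objection to a negotiated common value.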
_______________________________________________
TLS mailing list
TLS@ietf.org
https://www.ietf.org/mailman/listinfo/tls