Inline with [MB].

________________________________
From: Christian Huitema <[email protected]>
Sent: Wednesday, January 28, 2026 6:44 PM
To: Mike Bishop <[email protected]>; The IESG <[email protected]>
Cc: [email protected] <[email protected]>; 
[email protected] <[email protected]>; [email protected] 
<[email protected]>; [email protected] <[email protected]>
Subject: Re: Mike Bishop's Discuss on draft-ietf-quic-multipath-19: (with 
DISCUSS and COMMENT)


On 1/28/2026 12:54 PM, Mike Bishop via Datatracker wrote:
> Mike Bishop has entered the following ballot position for
> draft-ietf-quic-multipath-19: Discuss
>
> When responding, please keep the subject line intact and reply to all
> email addresses included in the To and CC lines. (Feel free to cut this
> introductory paragraph, however.)
>
>
> Please refer to 
> https://www.ietf.org/about/groups/iesg/statements/handling-ballot-positions/
> for more information about how to handle DISCUSS and COMMENT positions.
>
>
> The document, along with other ballot positions, can be found here:
> https://datatracker.ietf.org/doc/draft-ietf-quic-multipath/
>
>
>
> ----------------------------------------------------------------------
> DISCUSS:
> ----------------------------------------------------------------------
>
> # IESG review of draft-ietf-quic-multipath-19
>
> CC @MikeBishop
>
> I have previously reviewed this draft as a working group member, and 
> appreciate
> the work that has been put into it so far. I have a few comments from my most
> recent review, but in general am quite pleased with how the draft has
> progressed since I last read it.
>
> ## Discuss
>
> ### Section 2.2, paragraph 1
> ```
>       When the QUIC multipath extension is used, the
>       active_connection_id_limit transport parameter [QUIC-TRANSPORT]
>       limits the maximum number of active connection IDs per path.  As
>       defined in Section 5.1.1 of [QUIC-TRANSPORT] connection IDs that are
>       issued and not retired are considered active.
> ```
> This seems to present a conundrum to clients trying to manage their memory
> consumption. active_connection_id_limit can't be changed after the handshake. 
> So
> if a constrained implementation wants to manage no more than N CIDs total, but
> also supports multipath, it cannot advertise N for this value, because its 
> total
> memory commitment is active_connection_id_limit x the current maximum number 
> of
> paths.

The `active_connection_id_limit` is set to enable migrations of paths,
and the limit constrains how many potential 4-tuples can be tested in
parallel during a migration attempt. You argue that if multipath is
negotiated, endpoints would prefer creating new paths over migrating
existing paths, and thus could use a lower limit when multipath is
negotiated. However, we do not have any data about that. We have at
least one example, "preferred address" migration, of a scenario that
requires path migration even when multipath is supported. Another
example would be NAT rebinding of an existing path.

So yes, we could speculate that negotiating the two limits separately
would result in lower resource consumption. However, this is
speculation, and it is hard to quantify how much resource it would
actually save. The protocol is complex enough already; we tried not to
introduce further complexity when the requirement is fuzzy, which is
why we did not define a separate parameter.

> But if it takes the conservative approach and advertises N / M for
> active_connection_id_limit, then when it establishes a non-multipath QUIC
> connection, it will be understating its willingness to handle CIDs and 
> therefore
> hampering its ability to rotate/migrate.
>
> Did the WG discuss this and reach consensus on reusing the transport parameter
> despite this challenge? I would have expected either a transport parameter 
> that
> supersedes active_connection_id_limit when multipath is negotiated, or some
> post-handshake way to adjust the limit.

This point was not discussed, and did not appear to be an actual problem
in any of the interop tests.

I suspect that most memory-limited implementations will converge on a
small limit, between 2 and 4 per path, satisfying both multipath and
most unipath scenarios. The one exception may be P2P extensions, but
again we do not have a lot of experience there. If the "ICE in QUIC"
scenarios turn out to require many CIDs, then perhaps we can think of
extensions that allow a larger number of parallel tests as part of
those designs.

[MB] I agree it's speculative, and it's encouraging that this hasn't been an 
issue in existing constrained tests. I worry about relying on extensions as an 
escape hatch, though, because a constrained implementation that would only do 
multipath with some hypothetical extension can't know whether it's supported 
before negotiating multipath. That said, it could always advertise a maximum 
path ID of 0 and only increase from there after seeing what else the 
negotiation agrees on.
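To make the accounting concrete, the worst-case CID commitment being discussed could be sketched as follows (the helper name and parameters are illustrative, not taken from the draft):

```python
def worst_case_active_cids(active_connection_id_limit: int,
                           initial_max_path_id: int) -> int:
    """Upper bound on connection IDs an endpoint must be ready to track.

    With the multipath extension, active_connection_id_limit applies per
    path, so the total commitment scales with the number of paths.
    Paths are numbered 0..initial_max_path_id, hence the "+ 1".
    """
    return active_connection_id_limit * (initial_max_path_id + 1)

# A constrained endpoint advertising a per-path limit of 2 and a maximum
# path ID of 0 commits to only 2 CIDs up front, and can raise the path
# limit later once it sees what the negotiation agrees on.
```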

>
> ### Section 3.1, paragraph 2
> ```
>       A client that wants to use a new path MUST validate the peer's
>       address before sending any data as described in Section 8.2 of
>       [QUIC-TRANSPORT], unless it has previously validated the 4-tuple used
>       for that path.
> ```
> Can you point me to the text in Section 8.2 of RFC 9000 you're referencing for
> this prohibition on sending data? What I find there is:
>
>> An endpoint MAY include other frames with the PATH_CHALLENGE and 
>> PATH_RESPONSE
>> frames used for path validation.
> ...and more explicitly in Section 9.3:
>
>> An endpoint MAY send data to an unvalidated peer address, but it MUST protect
>> against potential attacks as described in Sections 9.3.1 and 9.3.2.
> In fact, in Section 3.1.2 of this document, "any frame can be sent on a new 
> path
> with a new path ID at any time...."

The intent is definitely to follow RFC 9000 here. Do you have a
suggestion for improving the text?

[MB] This is effectively analogous to the anti-amplification limit — we don't 
want a malicious server to tell the client to start sending to some other 
address, then generate a large amount of attack traffic. The cap on such 
traffic in the initial handshake is the initial congestion window (ICW); given 
that each new path gets a fresh congestion controller, the same would apply 
here. I think there's also an implied preference for existing paths if they're 
available, and the document already has terms for that concept.

Perhaps: "A client that wants to use a new path MUST validate the peer's 
address. Until the peer address is validated, the client SHOULD treat the new 
path as "backup," i.e. less preferred than any available and usable paths."

That might also suggest being very explicit in Section 5.3 that each path still 
has a distinct congestion controller even if the paths share a 4-tuple.
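As a sketch of that suggestion (purely illustrative structures, not the draft's API): per-path state keyed by path ID, each path starting with a fresh congestion window even when the 4-tuple is shared, and unvalidated paths sorting behind validated ones as "backup".

```python
from dataclasses import dataclass

INITIAL_CWND = 10 * 1200  # ICW: roughly 10 packets of 1200 bytes (RFC 9002)

@dataclass
class PathState:
    """Per-path send state; every path gets its own congestion window."""
    four_tuple: tuple
    validated: bool = False
    cwnd: int = INITIAL_CWND

class MultipathSender:
    def __init__(self):
        self.paths: dict[int, PathState] = {}

    def open_path(self, path_id: int, four_tuple: tuple) -> PathState:
        # A fresh controller per path ID, even if the 4-tuple is shared
        # with an existing path.
        self.paths[path_id] = PathState(four_tuple)
        return self.paths[path_id]

    def preferred_paths(self) -> list:
        # Unvalidated paths are "backup": validated paths sort first.
        return sorted(self.paths, key=lambda pid: not self.paths[pid].validated)
```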

>
> ### Section 5.8, paragraph 1
> ```
>       [QUIC-TRANSPORT] the DPLPMTUD Maximum Packet Size (MPS) is maintained
>       for each combination of local and remote IP addresses.  Note that
>       with the multipath extension multiple paths could use the same
>       4-tuple but might have different MPS.  One simple option, if the
> ```
> How would two paths with "the same 4-tuple" ever have a different "combination
> of local and remote IP address"? Isn't that a subset of the 4-tuple by
> definition?

If they have different DiffServ classes of service, for example.

[MB] If different DiffServ classes could have different maximum packet sizes, 
let's be more explicit that we're saying Section 14.3's 2-tuple logic is 
insufficiently granular. Right now, it reads like a contradiction, hence my 
confusion. Perhaps something like:

"An implementation should take care to handle different PMTU sizes across 
multiple paths. In Section 14.3 of [QUIC-TRANSPORT] the DPLPMTUD Maximum Packet 
Size (MPS) is maintained for each combination of local and remote IP addresses. 
However, with the multipath extension multiple paths could use the same 4-tuple 
but might have a different MPS due to other factors (see Section 5.2). Each 
path's PMTU can (SHOULD?) be probed and tracked separately, even when the path 
shares a 4-tuple with an existing path.

If the PMTUs are similar, an implementation could apply the minimum PMTU of all 
paths to each path, which might simplify retransmission processing."
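The "minimum of all paths" simplification in that suggested text could be sketched as (hypothetical helper, tracking MPS by path ID rather than by 2-tuple):

```python
def effective_mps(path_mps: dict[int, int], use_minimum: bool = True) -> dict:
    """Per-path DPLPMTUD Maximum Packet Size, keyed by path ID.

    Each path's PMTU is probed and tracked separately, even when paths
    share a 4-tuple. When the per-path values are similar, applying the
    minimum across all paths to every path simplifies retransmission,
    since any packet then fits on any path.
    """
    if use_minimum:
        floor = min(path_mps.values())
        return {pid: floor for pid in path_mps}
    return dict(path_mps)
```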

> ### Section 7.2, paragraph 2
> ```
>       Further, multiple paths could be initialized simultaneously.  The
>       anti-amplification limits as specified in Section 8 of
>       [QUIC-TRANSPORT] limit the amplification risk for a given path, but
>       multiple paths could be used to further amplify an attack.
> ```
> Why then is the anti-amplification limit per-path rather than per-address?

We have never had the concept of a per-address limit. Asking
implementations to perform special treatment per IP address would be
rather error-prone, given NATs. We could investigate other limits, such
as limiting the number of concurrent path establishments, but I think
we really need implementation experience before writing more rules.

[MB] I'm not sure that's how I read RFC 9000. It frames the anti-amplification 
limit as being with regard to an unvalidated *address*: "Therefore, after 
receiving packets from an address that is not yet validated, an endpoint MUST 
limit the amount of data it sends to the unvalidated address to three times the 
amount of data received from that address. This limit on the size of responses 
is known as the anti-amplification limit."
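On that reading, RFC 9000's limit is per-address bookkeeping, something like the following (an illustrative sketch of the 3x rule, not an implementation):

```python
class AmplificationLimiter:
    """Anti-amplification as RFC 9000 Section 8 frames it: per address.

    Until an address is validated, an endpoint may send at most three
    times the number of bytes it has received from that address.
    """
    FACTOR = 3

    def __init__(self):
        self.received = {}        # address -> bytes received
        self.sent = {}            # address -> bytes sent
        self.validated = set()    # addresses that passed path validation

    def on_receive(self, addr, nbytes: int):
        self.received[addr] = self.received.get(addr, 0) + nbytes

    def on_send(self, addr, nbytes: int):
        self.sent[addr] = self.sent.get(addr, 0) + nbytes

    def sendable(self, addr):
        # No limit once the address is validated.
        if addr in self.validated:
            return float("inf")
        budget = self.FACTOR * self.received.get(addr, 0)
        return max(0, budget - self.sent.get(addr, 0))
```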
