Hello QUICWG,

In lieu of taking up agenda time in Philadelphia, I thought I'd update the
WG on the status of the QUIC-LB document via email, on behalf of my fellow
authors, Nick and Christian.

*TL;DR: editorially, the document is ready for WGLC, but the working group
might want to wait for more running code.*

Draft-14 is out
<https://datatracker.ietf.org/doc/draft-ietf-quic-load-balancers/> and
closes all the remaining open GitHub issues:

   - We responded to the crypto review. There are some tweaks to the 4-pass
   algorithm, and we address the concern that there should be more than 4
   passes in Section 8.7
   <https://datatracker.ietf.org/doc/html/draft-ietf-quic-load-balancers#section-8.7>.
   - I added further normative requirements around the 0b11 'fourtuple'
   codepoint in Section 2.2
   <https://datatracker.ietf.org/doc/html/draft-ietf-quic-load-balancers#section-2.2>.
   - A new non-normative section, Section 6.2
   <https://datatracker.ietf.org/doc/html/draft-ietf-quic-load-balancers#section-6.2>,
   covers issues relating to demultiplexing between processes on a single
   server.
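To illustrate the codepoint in question, here is a minimal Python sketch of
how a load balancer might dispatch on the two config-rotation bits in the
first octet of a Connection ID, where 0b11 marks the CID as unroutable.
This is not the API of any implementation mentioned below; the function
name and return values are invented for illustration.

```python
def dispatch(cid: bytes) -> str:
    """Hypothetical routing decision based on a QUIC-LB Connection ID."""
    if not cid:
        return "four-tuple"        # zero-length CID: nothing to decode
    config_rotation = cid[0] >> 6  # top two bits of the first CID octet
    if config_rotation == 0b11:
        # 0b11 is the 'fourtuple' codepoint: the CID is not routable,
        # so the LB falls back to routing on the UDP four-tuple.
        return "four-tuple"
    # 0b00..0b10 select one of up to three active configs; a real LB
    # would decode the server ID using that config's parameters.
    return f"decode-with-config-{config_rotation}"

print(dispatch(b"\xc5\x01\x02"))  # 0b11 prefix -> "four-tuple"
print(dispatch(b"\x40\xaa\xbb"))  # 0b01 prefix -> "decode-with-config-1"
```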

There are no remaining open issues, and we believe that, from an editorial
perspective, consensus is strong enough for WGLC. There have been extensive
WG contributions, although interest has waned as we've gotten deeper into
the weeds. So IMO there's rough consensus; what about running code?

1. I implemented an open-source encoder/decoder library
<https://github.com/f5networks/quic-lb> and a load balancer based on the
NGINX UDP proxy <https://github.com/martinduke/nginx-quic-lb>, both in C
and last updated for draft-10. There is (very slow) work in progress to
bring these up to draft-14.

2. I implemented an open-source C++ encoder/decoder library in Google
QUICHE
<https://github.com/google/quiche/tree/main/quiche/quic/load_balancer>
(currently at draft-12, with -14 support imminent). Work over the next
several months will deploy this into various server and L4 load balancer
platforms that use Google QUICHE, both proprietary and open source.

3. In non-Martin news, Alipay has a long-standing NGINX fork
<https://github.com/alipay/quic-lb> that also implements the QUIC-LB
load-balancer side, though it apparently dates to draft-08.

In other words, we're in pretty good shape on the load balancer side and
light on the server side; given the number of QUIC server implementations
out there, anyone willing to contribute code to one of them would be
greatly appreciated. I will commit to interop testing, given a couple of
weeks' notice, between my (updated) load balancer and anyone who produces
a server implementation.

So the WG has a few choices here:

a) Proceed immediately to WGLC

b) Wait for Google's deployment to start, to see if that generates any
further insights relevant to the spec; this is bounded on the order of
months

c) Wait for more implementations not by me, and/or interop between them;
this has no bound on how long it might take

I'm interested in the group's perspectives on the path forward -- and as
always, for you to read and review the document. I'll be at IETF 114 if
people would like to discuss privately in person, and if there's strong
support to add it to the agenda, I can ask the chairs for time.

Thanks for reading!
Martin
