On Tue, Feb 10, 2026 at 05:56:15AM +0300, Maxim Dounin wrote:
> Hello!
>
> On Mon, Feb 09, 2026 at 04:58:34PM +0300, Vladimir Homutov via nginx-devel
> wrote:
>
> > On Sun, Feb 08, 2026 at 11:23:13PM +0300, Maxim Dounin wrote:
> > > # HG changeset patch
> > > # User Maxim Dounin <[email protected]>
> > > # Date 1770581815 -10800
> > > #      Sun Feb 08 23:16:55 2026 +0300
> > > # Node ID bce5954f74a115503fd22d955d0fcea18f6c77cb
> > > # Parent  b44cd6f3bc983859f3a213fbff19b4761ac59ec8
> > > Upstream: fixed SSL initialization on read events.
> > >
> > > Previously, SSL initialization only happened on write events, which are
> > > reported once a TCP connection is established.  However, if some data
> > > are received before the write event is reported, the read event might be
> > > reported first, potentially resulting in a plain text response being
> > > accepted when it shouldn't (CVE-2026-1642).
> > >
> > > Normally SSL servers do not send anything before ClientHello from the
> > > client, though formally they are allowed to (for example, HelloRequest
> > > messages may be sent at any time).  As such, the fix is to do proper SSL
> > > initialization on read events as well.  This ensures correct and
> > > identical behaviour regardless of the order of events being reported.
> >
> > Hello,
> >
> > while the fix does the job, I think that the root cause is that we set
> > u->read_event_handler to ngx_http_upstream_process_header
> > in ngx_http_upstream_connect(), when we haven't yet sent the request.
> >
> > And we have no idea what the request will be (maybe plaintext, maybe
> > SSL, maybe some other protocol).
> >
> > It looks like a reasonable default for u->read_event_handler would be
> > something like a 'reject_premature_response' handler.
> >
> > And when we later connect and send the request, we may override it with
> > appropriate handlers according to the desired protocol.
>
> I don't think that rejecting premature responses is a correct
> approach, since at least in the case of SSL the server can legitimately
> send data first: as mentioned in the commit log, HelloRequest
> messages in TLSv1.2 may be sent at any time, see here:
>
> https://datatracker.ietf.org/doc/html/rfc5246#section-7.4.1.1
>
> While it says that "Servers SHOULD NOT send a HelloRequest
> immediately upon the client's initial connection", servers still
> can send it, either immediately or after some timeout, and this is
> not expected to break things (but it will if we reject any
> data before the write handler is triggered).
>
> Also I tend to think that in HTTP returning a response without an
> actual request on the wire is also valid - e.g., 408 might be
> returned due to a timeout before the request is received, or 503
> might be sent due to some server issue without trying to read an
> actual request.
>
> What probably can be done here is a block-reading handler, so
> reading will actually happen, but only after the write event
> handler is triggered (and we decide what to do with the
> connection).  It will indeed simplify things though, including
> fixing some other issues (notably proper reinitialization when
> switching to the next upstream after a premature response, and
> $upstream_connect_time calculation).
>
> A downside of this approach is that it doesn't allow optimized
> handling of premature error responses.  On the other hand, it does
> not look like a big issue, as such responses aren't common (if
> they exist at all).
>
> The patch below implements this approach.  What do you think?
>
> # HG changeset patch
> # User Maxim Dounin <[email protected]>
> # Date 1770691814 -10800
> #      Tue Feb 10 05:50:14 2026 +0300
> # Node ID 36c9c3fae57f47a5a6710cc754e52f79ae8dc0fe
> # Parent  b44cd6f3bc983859f3a213fbff19b4761ac59ec8
> Upstream: blocked read events till sending request.
>
> Previously, SSL initialization only happened on write events, which are
> reported once a TCP connection is established.  However, if some data
> are received before the write event is reported, the read event might be
> reported first, potentially resulting in a plain text response being
> accepted when it shouldn't (CVE-2026-1642).
>
> Similarly, if a response was received before the write event was reported
> (and hence before the request was sent to the upstream server), switching
> to the next upstream server failed to do proper reinitialization, which
> used to check the u->request_sent flag.  This resulted in various issues,
> most notably previously parsed headers were not cleared.
>
> Normally SSL servers do not send anything before ClientHello from the
> client, though formally they are allowed to (for example, HelloRequest
> messages may be sent at any time).  Further, returning an immediate error
> response might also be seen as valid, for example, in HTTP.  As such,
> the fix is to block read events until the write event is reported,
> ensuring correct and identical behaviour regardless of the order of
> events being reported.
>
> diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
> --- a/src/http/ngx_http_upstream.c
> +++ b/src/http/ngx_http_upstream.c
> @@ -52,6 +52,8 @@ static ngx_int_t ngx_http_upstream_send_
>  static void ngx_http_upstream_send_request_handler(ngx_http_request_t *r,
>      ngx_http_upstream_t *u);
>  static void ngx_http_upstream_read_request_handler(ngx_http_request_t *r);
> +static void ngx_http_upstream_block_reading(ngx_http_request_t *r,
> +    ngx_http_upstream_t *u);
>  static void ngx_http_upstream_process_header(ngx_http_request_t *r,
>      ngx_http_upstream_t *u);
>  static ngx_int_t ngx_http_upstream_test_next(ngx_http_request_t *r,
> @@ -1595,7 +1597,7 @@ ngx_http_upstream_connect(ngx_http_reque
>      c->read->handler = ngx_http_upstream_handler;
>
>      u->write_event_handler = ngx_http_upstream_send_request_handler;
> -    u->read_event_handler = ngx_http_upstream_process_header;
> +    u->read_event_handler = ngx_http_upstream_block_reading;
>
>      c->sendfile &= r->connection->sendfile;
>      u->output.sendfile = c->sendfile;
> @@ -2138,6 +2140,10 @@ ngx_http_upstream_send_request(ngx_http_
>          return;
>      }
>
> +    if (!u->request_sent) {
> +        u->read_event_handler = ngx_http_upstream_process_header;
> +    }
> +
>      sent = c->sent;
>
>      if (!u->request_sent) {
> @@ -2419,6 +2425,24 @@ ngx_http_upstream_read_request_handler(n
>
>
>  static void
> +ngx_http_upstream_block_reading(ngx_http_request_t *r, ngx_http_upstream_t *u)
> +{
> +    ngx_connection_t  *c;
> +
> +    c = u->peer.connection;
> +
> +    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0,
> +                   "http upstream block reading");
> +
> +    if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
> +        ngx_http_upstream_finalize_request(r, u,
> +                                           NGX_HTTP_INTERNAL_SERVER_ERROR);
> +        return;
> +    }
> +}
> +
> +
> +static void
>  ngx_http_upstream_process_header(ngx_http_request_t *r, ngx_http_upstream_t *u)
>  {
>      ssize_t            n;
> @@ -2437,11 +2461,6 @@ ngx_http_upstream_process_header(ngx_htt
>          return;
>      }
>
> -    if (!u->request_sent && ngx_http_upstream_test_connect(c) != NGX_OK) {
> -        ngx_http_upstream_next(r, u, NGX_HTTP_UPSTREAM_FT_ERROR);
> -        return;
> -    }
> -
>      if (u->buffer.start == NULL) {
>          u->buffer.start = ngx_palloc(r->pool, u->conf->buffer_size);
>          if (u->buffer.start == NULL) {
>
Yes, this makes sense. Looks good to me.
