Note that if you can control the client, then you can use the Expect:
100-Continue header to avoid sending the body until the headers are
checked... but that doesn't work if the client is just a browser.
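
For example, Java's built-in java.net.http client can request this
behaviour via HttpRequest.Builder.expectContinue(true). A minimal sketch
(the endpoint URL, payload and limit are placeholders, not anything from
this thread):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ExpectContinueClient
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder endpoint; substitute the real API URL.
        URI uri = URI.create("https://api.example.com/upload");

        // A body deliberately larger than a hypothetical 1 MB server-side limit.
        byte[] body = new byte[2 * 1024 * 1024];

        HttpRequest request = HttpRequest.newBuilder(uri)
            // Ask the server to vet the headers (e.g. Content-Length)
            // and answer before the body is transmitted.
            .expectContinue(true)
            .header("Content-Type", "application/octet-stream")
            .POST(HttpRequest.BodyPublishers.ofByteArray(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        // A 400 here means the server rejected the request on its headers
        // alone, so the large body never had to cross the wire.
        System.out.println(response.statusCode());
    }
}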

It does seem like Nginx could do better if Jetty has already sent a 400
with Connection: close and then closed the connection.  It is wrong for nginx
to report Bad Gateway, as that is a perfectly legal thing to do in HTTP.
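
For reference, a minimal sketch of the fail-fast check being discussed,
assuming a plain servlet endpoint (the class name, limit and wording are
placeholders, not code from this thread):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UploadServlet extends HttpServlet
{
    // Hypothetical limit from the discussion: reject bodies over 1 MB.
    private static final long MAX_BODY_BYTES = 1024 * 1024;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        long declared = request.getContentLengthLong(); // -1 if no Content-Length header
        if (declared > MAX_BODY_BYTES)
        {
            // Fail fast without reading the body.  Advertise the close
            // explicitly; Jetty may then close the connection, which is the
            // 400 + Connection: close case described above.
            response.setHeader("Connection", "close");
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Request body too large");
            return;
        }

        // ... normal processing: read the body and handle the upload ...
    }
}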

On Wed, 31 Mar 2021 at 18:00, Simone Bordet <sbor...@webtide.com> wrote:

> Hi,
>
> On Wed, Mar 31, 2021 at 6:45 AM Daniel Gredler <djgred...@gmail.com>
> wrote:
> >
> > Hi,
> >
> > I'm playing around with a Jetty-based API service deployed to AWS
> Elastic Beanstalk in a Docker container. The setup is basically: EC2 load
> balancer -> nginx reverse proxy -> Docker container running the Jetty
> service.
> >
> > One of the API endpoints accepts large POST requests. As a safeguard, I
> wanted to add a maximum request size (e.g. any request body larger than 1
> MB is rejected). I thought I'd be clever and check the Content-Length
> header, if present. If the header indicates that the body is too large, I'd
> reject the request immediately (HTTP 400 error), without even wasting time
> reading the request body. I can imagine similar fail-fast checks on the
> security side, using the Authorization HTTP request header.
> >
> > This Content-Length check works correctly most of the time, but
> occasionally nginx reports "writev() failed (32: Broken pipe) while sending
> request to upstream" and sends an HTTP 502 error upstream to the load
> balancer, which duly informs the client that there was an HTTP 502 Bad
> Gateway error somewhere along the line.
> >
> > It appears that in these instances Jetty is closing the connection after
> sending back the HTTP 400 error, nginx doesn't notice and continues to try
> to send the request body content to Jetty, sees at that point that the
> connection is closed, and reports a less-than-friendly HTTP 502 error to
> the client.
> >
> > So I'm wondering... is this fail-fast Content-Length header check too
> clever? Is it best practice to actually always read the full request body,
> and only fail once the body has been fully read, even if we have enough
> information to reject the request much earlier? Or would most people just
> accept the occasional 502 error? I've seen some mentions of SO_LINGER /
> setSoLingerTime and setAcceptQueueSize as possible workarounds, but
> SO_LINGER especially always seems to be surrounded with "here be dragons"
> warnings...
> >
> > What's the best practice here? Should I just accept that I need to read
> these useless bytes?
>
> Don't use SO_LINGER.
> Your best option is to read all the bytes; it would be best if you can do
> this asynchronously.
>
> The problem is that by the time you close the connection from Jetty,
> Nginx may not have received the whole content from the client.
> So Jetty closes, then Nginx receives some more content from the
> client, tries to write to Jetty, but finds the connection closed, and
> reports back the 502.
>
> Not reading the content will just cause the TCP connection to congest
> with the same results (502), so if you really want to send a clean 400
> you have to read the whole content.
>
> --
> Simone Bordet
> ----
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.
> _______________________________________________
> jetty-users mailing list
> jetty-users@eclipse.org
> To unsubscribe from this list, visit
> https://www.eclipse.org/mailman/listinfo/jetty-users
>
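
Following up on Simone's suggestion to read all the bytes asynchronously:
a rough sketch using the Servlet 3.1 non-blocking read API (class name and
limit are placeholders, error handling kept minimal):

import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DrainingUploadServlet extends HttpServlet
{
    private static final long MAX_BODY_BYTES = 1024 * 1024;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException
    {
        // Bodies within the limit (or with no Content-Length at all) are
        // handled normally; only oversized requests take the drain path.
        if (request.getContentLengthLong() <= MAX_BODY_BYTES)
        {
            // ... normal processing ...
            return;
        }

        // Too large: consume the whole body without blocking a thread,
        // then send a clean 400 once everything has been read.
        AsyncContext async = request.startAsync();
        ServletInputStream input = request.getInputStream();
        input.setReadListener(new ReadListener()
        {
            private final byte[] buffer = new byte[8192];

            @Override
            public void onDataAvailable() throws IOException
            {
                // Discard whatever has arrived so far; return as soon as
                // isReady() says a read would block.
                while (input.isReady() && input.read(buffer) >= 0)
                {
                    // drop the bytes
                }
            }

            @Override
            public void onAllDataRead() throws IOException
            {
                response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Request body too large");
                async.complete();
            }

            @Override
            public void onError(Throwable failure)
            {
                async.complete();
            }
        });
    }
}

Once onAllDataRead() fires, the 400 goes out on a connection whose input
has been fully consumed, so nginx has nowhere left to hit the broken pipe.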


-- 
Greg Wilkins <gr...@webtide.com> CTO http://webtide.com
_______________________________________________
jetty-users mailing list
jetty-users@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/jetty-users
