Keep-alive for a public HTTP server

2019-05-18 Thread Vladimir Mihailenco
Hi,

I've been trying to investigate why we are getting lots (tens per second) of
the following lines in our log:

May 15 11:33:11 localhost haproxy[17618]: XX.XX.179.70:28366
[15/May/2019:11:33:11.430] fe_http~ be_/s2 0/0/0/-1/3 -1 505 - - SD--
120/120/2/2/0 0/0 {api.xx.io|Ruby/2.6.3|10853} {} "POST /api/v3/
HTTP/1.1"

which basically means that our Go backend immediately closes the connection
without a response.

After some investigation it turns out that haproxy proxies "Connection:
close" from the client to the Go server, and Go closes the connection after
serving the request. But it looks like haproxy keeps reusing that connection
for the next request.

Adding `http-request set-header Connection keep-alive` or `http-request
del-header Connection` fixes the problem: Go does not see `Connection:
close`, so it does not close the connection, and haproxy does not get an
error writing to a closed connection.
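
For reference, a minimal sketch of the workaround, assuming it lives in the
fe_http frontend shown in the log line (the exact placement in the real
config may differ):

frontend fe_http
    option http-keep-alive
    # strip the client's Connection header so the Go backend never sees
    # "Connection: close" and therefore keeps the connection open
    http-request del-header Connection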

All that is true for Haproxy 1.8.20 and Haproxy 1.9.8 with option
http-keep-alive (which is the default, I know). Does that make sense?

TBH I would expect haproxy to either remove `Connection: close` (preferred)
or not reuse the connection. The way it works now is confusing, especially
since docs/tutorials say that all you need is option http-keep-alive (which
is the default).


IP rate limiting on EC2 and 100% CPU usage

2018-03-27 Thread Vladimir Mihailenco
Hi,

I am using the latest haproxy with an EC2 Elastic Load Balancer configured
to proxy TCP:443 <-> TCP:443 to support HTTP/2. The PROXY protocol is
enabled to get the original client IP address.

IP rate limiting is done with the following config:

frontend fe_http
    bind *:443 accept-proxy ssl crt ... no-sslv3 alpn h2,http/1.1

    stick-table type ip size 256k expire 10s store http_req_rate(10s)
    tcp-request inspect-delay 5s
    # Must use "content" because of PROXY protocol.
    tcp-request content track-sc0 src

    acl check_http_req_rate sc0_http_req_rate ge 256
    tcp-request content reject if check_http_req_rate
    use_backend be_429_slow_down if check_http_req_rate

backend be_429_slow_down
    errorfile 503 /etc/haproxy/errors/429.http
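
For completeness, errorfile expects the file to contain a complete raw HTTP
response; the contents of /etc/haproxy/errors/429.http are not shown here,
but a minimal sketch of such a file (the body text is only a placeholder)
could look like:

HTTP/1.0 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>429 Too Many Requests</h1>Please slow down.</body></html>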

It works and is helpful, up to the point where haproxy consumes 100% CPU on
1 of the 4 available cores and requests start failing. It may be that I need
better/more hardware, but I wonder if there is anything I can improve in my
config to lower CPU usage? Thanks in advance.
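
For reference, and not taken from the config above: haproxy 1.8 added thread
support, which lets a single process use more than one core. A minimal
sketch of the global-section knob, assuming haproxy 1.8+:

global
    # run one process with 4 threads so the work is not pinned to one core
    nbthread 4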


Re: Gzip compression and transfer: chunked

2017-01-26 Thread Vladimir Mihailenco
>Just to be sure, there is a typo here. You meant "HTML is
compressed", right?

No, it was not compressed with Haproxy 1.6. AFAIK compression is
automatically disabled for chunked responses in Haproxy <1.7 -
https://bogomips.org/rainbows-public/cd8781d3-288b-4b61-85ed-16b8b15a9...@gmail.com/.
At least that matches the behavior I saw... Also, the Haproxy 1.7 changelog
says "MAJOR: http: re-enable compression on chunked encoding".

>if possible, it could be helpful to have a tcpdump of the data exchanged
between HAProxy and your backends

I can't do it on the existing staging server, because it constantly receives
requests and I am not good enough at tcpdump to filter out a single request.

On Wed, Jan 25, 2017 at 11:41 AM, Christopher Faulet <
christopher.fau...@capflam.org> wrote:

> On 24/01/2017 at 10:55, Vladimir Mihailenco wrote:
>
>> This is the config -
>> https://gist.github.com/vmihailenco/9010ad37f5aeb800095a6b18909ae7d5.
>> Backends don't have any options. I already tried to remove `http-reuse
>> safe`, but it does not make any difference.
>>
>> Haproxy 1.7 with compression (HTML not fully loaded) -
>> https://gist.github.com/vmihailenco/05bda6e7a49b6f78cd2f749abb0cf5b3
>> Haproxy 1.7 without compression (HTML fully loaded) -
>> https://gist.github.com/vmihailenco/d8732e53acac3769a85b59afd7336bab
>> Haproxy 1.7 with compression and Rails configured to set Content-Length
>> via config.middleware.use Rack::ContentLength (HTML fully loaded) -
>> https://gist.github.com/vmihailenco/13a809f486c4e1833ef813a019549180
>>
>>
> Hi,
>
> Thanks for the details. There are some things that puzzle me.
>
> I guess that when you disable the compression, it means that you comment
> out the "compression" lines in the frontend section. In that case, we can
> see the response is chunked. Since it is untouched by HAProxy, this comes
> from your backend. There is no problem here.
>
> But when compression is enabled, there is a Content-Length header and no
> Transfer-Encoding header. That's really strange, because HAProxy never
> adds a Content-Length header anywhere. So I'm tempted to think that it
> comes from your backend. But that contradicts my previous remark. And
> there is no Content-Encoding header, which means that HAProxy didn't
> compress the response.
>
> So, to be sure, if possible it could be helpful to have a tcpdump of the
> data exchanged between HAProxy and your backends (or something similar, as
> you prefer). Send it to me in private so as not to flood the ML. In the
> meantime, I will try to investigate.
>
>> With Haproxy 1.6 and compression enabled:
>> - I can load the full HTML (200kb)
>> - HTML is not compressed
>> - Transfer-Encoding: "chunked"
>> - no Content-Length header
>>
>
> Just to be sure, there is a typo here. You meant "HTML is compressed",
> right?
>
> --
> Christopher
>


Re: Gzip compression and transfer: chunked

2017-01-24 Thread Vladimir Mihailenco
This is the config -
https://gist.github.com/vmihailenco/9010ad37f5aeb800095a6b18909ae7d5.
Backends don't have any options. I already tried to remove `http-reuse
safe`, but it does not make any difference.

Haproxy 1.7 with compression (HTML not fully loaded) -
https://gist.github.com/vmihailenco/05bda6e7a49b6f78cd2f749abb0cf5b3
Haproxy 1.7 without compression (HTML fully loaded) -
https://gist.github.com/vmihailenco/d8732e53acac3769a85b59afd7336bab
Haproxy 1.7 with compression and Rails configured to set
Content-Length via config.middleware.use
Rack::ContentLength (HTML fully loaded) -
https://gist.github.com/vmihailenco/13a809f486c4e1833ef813a019549180

On Mon, Jan 23, 2017 at 1:06 PM, Christopher Faulet <
christopher.fau...@capflam.org> wrote:

> On 23/01/2017 at 11:54, Vladimir Mihailenco wrote:
>
>> Hi,
>>
>> I am using haproxy as a load balancer / reverse proxy for a Rails/Go
>> application. I am upgrading from a working Haproxy 1.6 config to 1.7.2,
>> and it looks like I need to change my existing config, because Haproxy
>> 1.7 truncates responses from the Rails/Rack application.
>>
>> With Haproxy 1.6 and compression enabled:
>> - I can load the full HTML (200kb)
>> - HTML is not compressed
>> - Transfer-Encoding: "chunked"
>> - no Content-Length header
>>
>> With the same config and Haproxy 1.7:
>> - only the first 14kb are available
>> - no Transfer-Encoding
>> - Content-Length: 14359
>>
>> With Haproxy 1.7 and compression disabled:
>> - the full HTML is available
>> - HTML is not compressed
>> - Transfer-Encoding: "chunked"
>> - no Content-Length header
>>
>> Any recommendations? Should I disable compression in the Rails/Rack app?
>>
>
> Hi,
>
> Could you share both of your configurations, please? And if possible, the
> request/response headers for all scenarios. The compression code was
> rewritten in 1.7, so it is possible that something was broken.
>
> The headers returned by your backend could be useful too.
>
> --
> Christopher
>
>


Gzip compression and transfer: chunked

2017-01-23 Thread Vladimir Mihailenco
Hi,

I am using haproxy as a load balancer / reverse proxy for a Rails/Go
application. I am upgrading from a working Haproxy 1.6 config to 1.7.2, and
it looks like I need to change my existing config, because Haproxy 1.7
truncates responses from the Rails/Rack application.

With Haproxy 1.6 and compression enabled:
- I can load the full HTML (200kb)
- HTML is not compressed
- Transfer-Encoding: "chunked"
- no Content-Length header

With the same config and Haproxy 1.7:
- only the first 14kb are available
- no Transfer-Encoding
- Content-Length: 14359

With Haproxy 1.7 and compression disabled:
- the full HTML is available
- HTML is not compressed
- Transfer-Encoding: "chunked"
- no Content-Length header

Any recommendations? Should I disable compression in the Rails/Rack app?
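
For context, if the intent is to let only haproxy compress, the "compression
offload" directive makes haproxy strip Accept-Encoding from the request so
the backend never compresses the response itself. A minimal sketch, with the
frontend name and MIME type list as placeholders rather than values from the
real config:

frontend fe_main
    compression algo gzip
    compression type text/html text/plain application/json
    # remove Accept-Encoding from the request so the backend sends plain,
    # uncompressed responses and haproxy does the gzip work
    compression offload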


Re: Migration from nginx

2015-09-05 Thread Vladimir Mihailenco
Thanks for the advice. It turns out that Go silently (without any reply or
log message) closes the connection when it can't fully read the request
headers. Which is kind of strange, because I thought that haproxy fully
reads the request headers in order to route the request to the proper
backend...


On Sat, Sep 5, 2015 at 4:11 AM, <thierry.fourn...@arpalert.org> wrote:

> On Wed, 2 Sep 2015 09:26:25 +0300
> Vladimir Mihailenco <vladimir.web...@gmail.com> wrote:
>
> > Hi,
> >
> > I am trying to migrate an existing app written in Go from nginx to
> > HA-Proxy version 1.5.14 2015/07/02 on Ubuntu 12.04. nginx/haproxy runs
> > behind an F5 load balancer. My config:
> > https://gist.github.com/vmihailenco/9b41016b05cdea821687 . The app
> > mainly serves POST requests with body sizes of 10-64kb.
> >
> > The first thing that I noticed after stopping nginx and starting haproxy
> > is that the app spends more time processing requests (same server, same
> > amount of requests). E.g. with nginx Go responds within 1-2ms, but with
> > haproxy the response time is in the range of 100-400ms. I guess the
> > reason is that nginx buffers the incoming request until it is fully
> > read, but haproxy does not. What can I do to enable request buffering
> > in haproxy?
> >
> > From the logs I also see that sometimes Go does not send response
> > headers, e.g.
> >
> > haproxy[6607]: 149.210.205.54:54598 [01/Sep/2015:17:15:01.931] http-in
> > goab/s1 0/0/0/-1/1 -1 381 - - SD-- 128/128/6/6/0 0/0 {myhost} "POST /url
> > HTTP/1.1"
> > haproxy[6607]: 192.243.237.46:34628 [01/Sep/2015:17:15:12.851] http-in~
> > goab/s1 224/0/0/1/674 413 381 - - SD-- 128/128/15/15/0 0/0 {myhost} "POST
> > /url HTTP/1.1"
>
> Hi,
>
> You can look at the documentation about the logs here:
>
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#8.2.3
>
> The termination flags are SD--, so the documentation says:
>
> S : the TCP session was unexpectedly aborted by the server, or
>     the server explicitly refused it.
>
> D : the session was in the DATA phase.
>
> I suppose that you have some keepalive errors. Try to activate
> keepalive between the browser and haproxy, and deactivate it between
> haproxy and your Go server.
>
> Look for the directive "option httpclose".
>
>
> > So these are 2 identical requests with the same response body, but the
> > 2nd request has status code = -1. I don't understand how that is
> > possible, because if the app does not set a status code, Go uses the
> > 200 OK status code. And the app does not crash.
> >
> > Thanks in advance for any help/advice.
>