Hello Willy,

Thanks for your suggestions.
>> There are two things you can try :
>> 1) please issue "show errors" on your CLI when this happens
>> 2) you can enable the HTX mode
I did both, though it was not really a success... What could I do next?
Everything - the browser, docker/haproxy and the Jetty application - is running on
the same machine, so there is no wire in between... (ok, except the database, but
the request does not even reach the servlet anyway).
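
For reference, enabling HTX was just the single line from your mail; a minimal
sketch of where it went (defaults section shown as an example, the rest of the
config omitted):

    defaults
        mode http
        # process requests natively instead of translating H2 to H1 first
        option http-use-htx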

172.17.0.1:47340 [19/Mar/2019:15:03:49.755] [FRONTEND] [BACKEND]/web850 
0/0/0/-1/13 400 191 - - CH-- 1/1/0/0/0 0/0 "POST [URL] HTTP/2.0"

root@ad23bb7939a9:/# echo "show errors" | socat 
unix-connect:/var/run/haproxy.stat stdio
Total events captured on [19/Mar/2019:15:04:05.665] : 0

root@ad23bb7939a9:/# echo "show sess" | socat 
unix-connect:/var/run/haproxy.stat stdio
0x5604be62deb0: proto=unix_stream src=unix:1 fe=GLOBAL be=<NONE> srv=<none> ts 
age=0s calls=1 cpu=0 lat=0 rq[f=c0c220h,i=0,an=00h,rx=,wx=,ax=] 
rp[f=80008000h,i=0,an=00h,rx=,wx=,ax=] s0=[7,80008h,fd,ex=] 
s1=[7,204018h,fd=-1,ex=] exp
root@ad23bb7939a9:/# echo "show stat" | socat 
unix-connect:/var/run/haproxy.stat stdio
# 
pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,wrew,connect,reuse,cache_lookups,cache_hits,
frontend_https-sni,FRONTEND,,,1,1,2000,1,119357,1935662,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,1,,,,0,147,0,1,0,0,,0,28,148,,,0,0,0,0,,,,,,,,,,,,,,,,,,,,,http,,0,1,1,0,0,0,0,,,0,0,
[BACKEND],[XXX],0,0,0,11,,148,119357,1935662,,0,,0,0,0,0,no 
check,1,1,0,,,492,,,1,3,1,,148,,2,0,,28,,,,0,147,0,0,0,0,,,,,1,0,,,,,8,,,0,1,16,784,,,,,,,,,,,,172.18.11.102:80,,http,,,,,,,,0,27,121,,,
[BACKEND],BACKEND,0,0,0,11,200,148,119357,1935662,0,0,,0,0,0,0,UP,1,1,0,,0,492,0,,1,3,0,,148,,1,0,,28,,,,0,147,0,1,0,0,,,,148,1,0,0,0,0,0,8,,,0,1,16,784,,,,,,,,,,,,,,http,roundrobin,,,,,,,0,27,121,0,0,

Which basically says there is one "hrsp_4xx".
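
As a convenience for anyone digging through that CSV (not part of the original
session, just a small awk sketch): the hrsp_4xx counter can be picked out by
header name like this:

    # print proxy name, service name and the hrsp_4xx column from the CSV
    echo "show stat" | socat unix-connect:/var/run/haproxy.stat stdio | \
      awk -F, 'NR==1 {for (i=1; i<=NF; i++) if ($i == "hrsp_4xx") col = i}
               NR>1  {print $1, $2, $col}'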

root@ad23bb7939a9:/# echo "show info" | socat 
unix-connect:/var/run/haproxy.stat stdio
Name: HAProxy
Version: 1.9.4
Release_date: 2019/02/06
Nbthread: 1
Nbproc: 1
Process_num: 1
Pid: 250
Uptime: 0d 0h12m56s
Uptime_sec: 776
Memmax_MB: 0
PoolAlloc_MB: 0
PoolUsed_MB: 0
PoolFailed: 0
Ulimit-n: 4031
Maxsock: 4041
Maxconn: 2000
Hard_maxconn: 2000
CurrConns: 0
CumConns: 8
CumReq: 157
MaxSslConns: 0
CurrSslConns: 0
CumSslConns: 1
Maxpipes: 0
PipesUsed: 0
PipesFree: 0
ConnRate: 0
ConnRateLimit: 0
MaxConnRate: 1
SessRate: 0
SessRateLimit: 0
MaxSessRate: 1
SslRate: 0
SslRateLimit: 0
MaxSslRate: 1
SslFrontendKeyRate: 0
SslFrontendMaxKeyRate: 1
SslFrontendSessionReuse_pct: 0
SslBackendKeyRate: 0
SslBackendMaxKeyRate: 0
SslCacheLookups: 0
SslCacheMisses: 0
CompressBpsIn: 0
CompressBpsOut: 0
CompressBpsRateLim: 0
ZlibMemUsage: 0
MaxZlibMemUsage: 0
Tasks: 7
Run_queue: 1
Idle_pct: 100
node: ad23bb7939a9
Stopping: 0
Jobs: 4
Unstoppable Jobs: 0
Listeners: 3
ActivePeers: 0
ConnectedPeers: 0
DroppedLogs: 0
BusyPolling: 0


-----Original Message-----
From: Willy Tarreau <w...@1wt.eu>
Sent: Tuesday, March 19, 2019 15:48
To: Maximilian Böhm <maximilian.bo...@auctores.de>
Cc: haproxy@formilux.org
Subject: Re: 400 SC on h2 xhr post

Hi Maximilian,

On Tue, Mar 19, 2019 at 01:17:52PM +0000, Maximilian Böhm wrote:
> 172.17.0.1:46372 [19/Mar/2019:12:10:43.465] [fntnd] [bknd] 0/0/0/-1/8 400 187 
> - - CH-- 1/1/0/0/0 0/0 "POST [URL] HTTP/1.1"

This one could indicate a client close while uploading the contents, but it could 
also be a side effect of the ambiguity we currently have at certain stages between 
an end of request and a close; the short timings make me think of the latter.

> Any ideas? Maybe a similar bug is known?

Not directly known but could be related to something we're currently working on 
as indicated above.

> What shall/can I do next? Setting up
> Wireshark with MITM and comparing the requests? Right now, I can't
> imagine the error is on the client side or on the backend (the
> backend has not changed).

There are two things you can try :

1) please issue "show errors" on your CLI when this happens, to see if
   you find a real client error captured there. It could be possible that
   something is invalid in the request and that it was blocked during the
   parsing (though quite unlikely).

2) you can enable the HTX mode which avoids the translation from H2 to H1
   before processing the request. For this, please add "option http-use-htx"
   to your configuration and see if the problem goes away or becomes more
   frequent (or doesn't change). In all cases please report your observation
   here :-)

> The bug happens on our production cluster as well as in a docker
> container ("haproxy:1.9.4"). I was able to reproduce the same bug
> within that container with a minimal configuration (see below).

At first glance your config is perfectly fine, so it could very well be related 
to the issue above.

Thanks!
Willy
