force close connection for HTTP/2

2022-11-08 Thread Benedikt Fraunhofer
Hello Haproxy-List,

I need a way to forcefully close an HTTP/2 connection with a
haproxy-internally generated response ("http-request redirect" or
"http-request return").

Basically what "Connection: close" ("option httpclose" or "no option
http-keep-alive") did for HTTP/1.1.

I know the HTTP/2 spec provides GOAWAY frames for this,
and haproxy already sends them on shutdown [1].

Is there a way to manually trigger these?

After lots of trying, crying and cursing I was finally able to abuse
"timeout client 100", but this seems ugly, even to me.
Not enabling HTTP/2 and using "option httpclose" or "no option
http-keep-alive" is, of course, another "workaround".

I also found [2], which suggests using a 421 response with an errorfile
for the content (today one should be able to use "http-request return"
instead), but that is about retrying _the same_ request over a new
connection, not following a redirect?
[3] is about another 421 case for yet another SSL problem, as was [2];
an answer there cites the RFC, which says the client MAY retry, not SHOULD
or MUST, and notes that Chrome had a (now fixed) bug in 2021 that broke even that.

I know use cases for this are rare. The authors in [2] needed it for
client certificates and [3] for some SNI stuff; I need it for some
NAT/conntrack foo I'd rather not solve using raw/mangle iptables.

Hopefully the "timeout client " workaround at least
makes it into the docs so others running in this problem might find a
low-impact workaround. Or search engines scrape the mailinglist :)
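In case it helps anyone, the shape of that workaround is roughly the following; a dedicated frontend keeps the aggressive timeout away from normal traffic, and the bind and redirect details are made up:

 frontend fe_force_close
     bind :8443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
     # 100 ms: the connection is torn down shortly after the response is sent
     timeout client 100
     http-request redirect location https://other.example.net/ code 302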

Thx in Advance

  Benedikt

[1] https://github.com/haproxy/haproxy/issues/13

[2] https://haproxy.formilux.narkive.com/fyNOpSGz/force-response-to-send-http-2-goaway

[3] https://serverfault.com/questions/916724/421-misdirected-request



Low performance when using mode http for Exchange-Outlook-Anywhere-RPC

2012-05-08 Thread Benedikt Fraunhofer
Hello List,

I placed haproxy in front of our Exchange cluster for Outlook Anywhere
clients (that's just RPC over HTTP, port 443). SSL is terminated by
Pound, which forwards the traffic over loopback to haproxy.

Everything works, but it's awfully slow when I use mode http;
requests look like this:

RPC_IN_DATA /rpc/rpcproxy.dll?[...] HTTP/1.1

HTTP/1.1 200 Success
Content-Type: application/rpc
Content-Length: 1073741824

RPC_OUT_DATA /rpc/rpcproxy.dll?[..] HTTP/1.1

HTTP/1.1 200 Success
Content-Type: application/rpc
Content-Length: 1073741824

(This is the nature of Microsoft RPC, I've been told: it uses two
channels to make the exchange duplex.) The connections are held open in
both cases (mode tcp and mode http) due to long configured timeouts
(and no option httpclose in http mode).
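For context, the timeouts in question look along these lines; the values are just an assumption of what "long" means here:

 defaults
     mode http
     timeout connect 5s
     # generous timeouts so the long-lived RPC channels are not cut
     timeout client  1h
     timeout server  1h
     # note: no "option httpclose", so connections stay open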

I can't see a big difference in how the packets look; there's an awful lot
of nearly empty packets with SYN and PSH set, but that's the case in both
modes. Packets reach 16k (that's the MTU of the loopback device).

The only difference you can see in the Outlook connection info window
is the response time: with mode tcp it's around 16-200 ms, while in
http mode it's above 800 ms.

Any hint? Or is mode http of no use here because I'll be unable to inject
anything into the session cookie at all?

Thx in advance
  Beni.



Re: Low performance when using mode http for Exchange-Outlook-Anywhere-RPC

2012-05-08 Thread Benedikt Fraunhofer
Hello Willy,

2012/5/8 Willy Tarreau w...@1wt.eu:

 For such border-line uses, you need to enable option http-no-delay. By

Great! That did it.
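For the archives, the fix boils down to one extra line in the proxy section that handles the RPC traffic. A minimal sketch, with names, addresses and timeouts made up:

 listen outlook-anywhere
     bind 127.0.0.1:8443
     mode http
     # forward data as soon as it arrives instead of waiting to merge segments
     option http-no-delay
     timeout connect 5s
     timeout client  1h
     timeout server  1h
     server exch1 192.168.0.11:80 check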

 default, haproxy tries to merge as many TCP segments as possible. But in
 your case, the application is abusing the HTTP protocol by expecting that

Does haproxy even discard the PSH flag on TCP packets? Or is
Microsoft simply not setting it?

 wrong but it's not the first time microsoft does crappy things with HTTP,
 see NTLM).

HTTP is such a versatile protocol, and, as the song already says, some
of them want to be abused :)

 Please note that such protocols will generally not work across
 caches or anti-virus proxies.

Well, in this case all proxies on the client side will only see HTTPS
traffic; they should not be able to inspect it.

 With option http-no-delay, haproxy refrains from merging consecutive
 segments and forwards data as fast as they enter. This obviously leads
 to higher CPU and network usage due to the increase of small packets,
 but at least it will work as expected.

I'm following the mailing list and saw that you did something
different for WebSockets:
[...] because haproxy switches to tunnel mode when it sees the WS
handshake and it keeps the connection open for as long as there is
traffic. [...]
Or is tunnel mode something different, one that keeps the usual
assembling and merging of packets from http mode?

I don't know if that's important, but maybe one should do the same for
Content-Type: application/rpc, too. Anyhow, it's easy to throw in
the option, and I'm more than happy that I can stay with my setup and
keep client stickiness for dryout purposes.

And congrats to your new president :)

Thx again and again

 Beni.



Re: Matching URLs at layer 7

2010-04-28 Thread Benedikt Fraunhofer
Hi *,

2010/4/28 Andrew Commons andrew.comm...@bigpond.com:
        acl xxx_url      url_beg        -i http://xxx.example.com
        acl xxx_url      url_sub        -i xxx.example.com
        acl xxx_url      url_dom        -i xxx.example.com

The URL is the part of the URI without the host :)
An HTTP request looks like

 GET /index.html HTTP/1.0
 Host: www.example.com

so you can't use url_beg to match on the host unless you somehow
construct your URLs to look like
 http://www.example.com/www.example.com/
but don't do that :)

So what you want is something like chaining
 acl xxx_host   hdr(Host)  -i xxx.example.com
 acl xxx_urlbe1 url_beg    /toBE1/
 use_backend BE1 if xxx_host xxx_urlbe1
?
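In a complete config these lines would sit in the frontend roughly like this (backend names are just placeholders):

 frontend fe_http
     bind :80
     mode http
     acl xxx_host   hdr(Host)  -i xxx.example.com
     acl xxx_urlbe1 url_beg    /toBE1/
     use_backend BE1 if xxx_host xxx_urlbe1
     default_backend BE_default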

Cheers

  Beni.



Re: Matching URLs at layer 7

2010-04-28 Thread Benedikt Fraunhofer
Hi *,

 (2) Host header is www.example.com
 (3) All is good! Pass request on to server.
 (2) Host header is www.whatever.com
 (3) All is NOT good! Flick request somewhere harmless.

If that's all you want, you should be able to go with

 acl xxx_host hdr(Host)  -i xxx.example.com
 block if !xxx_host

in your listen (...) section. But everything comes with a downside:
IMHO HTTP/1.0 doesn't require the Host header to be set, so you'd
effectively lock out all HTTP/1.0 users unless you make another
rule checking for an undefined Host header (and allowing that), or
checking for HTTP/1.0 (there should be a predefined ACL for that).
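For illustration, a sketch of that exception; the ACL names are mine, and "block" matches the old syntax used above (newer versions would use "http-request deny"):

 acl xxx_host hdr(Host)     -i xxx.example.com
 acl has_host hdr_cnt(Host) gt 0
 # only reject requests that carry a Host header which isn't ours;
 # HTTP/1.0 requests without a Host header pass through
 block if has_host !xxx_host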

Just my 2cent
  Beni.



Re: issue with using digest with jetty backends

2010-04-06 Thread Benedikt Fraunhofer
Hi,

2010/4/6 Matt mattmora...@gmail.com:

  HTTP/1.1 100 Continue
  HTTP/1.1 200 OK

Somehow this looks very odd to me :)
Dunno if that helps, but we had problems with curl and digest
authentication some time ago and solved it using

  curl --digest -H "Expect:" [...]

(an empty Expect header keeps curl from sending "Expect: 100-continue"),
but we might have used a very old (buggy) version of curl.
Please let me know if that helps in your case.

Just my 2 cent
  Beni.



Re: [PATCH] [MINOR] CSS HTML fun

2009-10-13 Thread Benedikt Fraunhofer
Hello,

2009/10/13 Dmitry Sivachenko mi...@cavia.pp.ru:

 End tag for ul is optional according to

Really? That's new to me :)

 http://www.w3.org/TR/html401/struct/lists.html#edef-UL

Hmm. </li> is optional (implied by the next <li> or the closing </{u,o}l>),
but </{u,o}l> is not?

<!ELEMENT UL - - (LI)+  -- unordered list -->
<!ATTLIST UL
  %attrs;  -- %coreattrs, %i18n, %events --
  >
<!ELEMENT OL - - (LI)+  -- ordered list -->
<!ATTLIST OL
  %attrs;  -- %coreattrs, %i18n, %events --
  >

Start tag: required, End tag: required

The line stating "Start tag: required, End tag: optional"
is for the <li> element.

Just my 2cent
  Beni.