Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Dave Cottlehuber
On Wed, 12 Jun 2024, at 13:04, Aleksandar Lazic wrote:
> Hi.
>
> Attached a new version with updated upstream-proxy.cfg.
>
> This patch also has the feature `upstream-proxy-target`, to get rid of the
> dependency on srv->hostname.
>
> ```
> tcp-request content upstream-proxy-target www.test1.com
> ```
>
> I have now tested the setup with `0.0.0.0` as the server.
>
> ```
> server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128 init-addr 127.0.0.1
> ```
>
> @Dave: Can you use a name for the upstream-proxy-tunnel instead of IP?

Yes, it does the DNS lookup happily, and I can pass the secret via env. Nice!

--- 8< ---
frontend stream_fe
  bind :::443 v4v6
  mode tcp
  option tcplog
  default_backend stream_be

backend stream_be
  mode tcp
  tcp-request content upstream-proxy-header Host www.httpbin.org
  tcp-request content upstream-proxy-header "$AUTH" "$TOKEN"
  tcp-request content upstream-proxy-header Proxy-Connection Keep-Alive
  tcp-request content upstream-proxy-target www.httpbin.org
  server stream www.httpbin.org:443 upstream-proxy-tunnel "$PROXY":1
--- 8< ---

So this looks good, we send the right headers now, thank-you!

Upstream proxy replies "HTTP/1.1 200 OK" which seems legit.

But then haproxy sends RST, instead of the buffered proxy data.

After a bit more tcpdump & code reading, I made a small
modification in conn_recv_upstream_proxy_tunnel_response/2

struct ist upstream_proxy_successful = ist("HTTP/1.1 200 OK");

and then I get actual data back through the proxy - great!

This seems ok according to 
https://datatracker.ietf.org/doc/html/rfc9110#name-connect

"Any 2xx (Successful) response indicates that the sender (and all inbound 
proxies) will switch to tunnel mode immediately after the response header 
section ..."

Is it possible to read up to "HTTP/1.1 200" and then ignore everything
up to 0x0d0a? That should cover the RFC and both our examples.
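
Here's a minimal standalone sketch of that idea (my own illustration, not
haproxy code): accept any "HTTP/1.x 2" status-line prefix and skip the
reason phrase up to CRLF.

```c
/* Illustrative only -- not haproxy's parser. Accept any 2xx CONNECT
 * response per RFC 9110, ignoring the reason phrase up to CRLF. */
#include <stdio.h>
#include <string.h>

static int is_connect_success(const char *buf, size_t len)
{
    /* shortest acceptable prefix is "HTTP/1.x 2" (10 bytes) */
    if (len < 10 || strncmp(buf, "HTTP/1.", 7) != 0)
        return 0;
    if (buf[8] != ' ' || buf[9] != '2')   /* any 2xx status code */
        return 0;
    /* the reason phrase ("OK", "Connection established", or empty)
     * is ignored; just require a complete status line */
    return memchr(buf, '\n', len) != NULL;
}

int main(void)
{
    const char *ok  = "HTTP/1.1 200 Connection established\r\n";
    const char *nok = "HTTP/1.1 407 Proxy Authentication Required\r\n";
    printf("%d %d\n", is_connect_success(ok, strlen(ok)),
           is_connect_success(nok, strlen(nok))); /* prints: 1 0 */
    return 0;
}
```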

For me, there are still 2 things I'm not clear on:

- I don't follow yet what upstream-proxy-target provides, or is this just
  plumbing for later when we have requests?

- In `server https_Via_Proxy1 0.0.0.0:0 upstream-proxy-tunnel 127.0.0.1:3128`
  from your config, what is 0.0.0.0:0 used for here? This binds to all IPv4
  but on a random free port?

A+
Dave



Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-12 Thread Dave Cottlehuber
On Tue, 11 Jun 2024, at 22:57, Aleksandar Lazic wrote:
> Hi Dave.
>
> Thank you for your test and feedback.
>
> When you put this line into backend, will this be better?
>
> ```
> tcp-request connection upstream-proxy-header HOST www.httpbun.com
> ```
>
> Regards
> Alex

Hi Alex,

Sorry, I forgot to mention that. This is not allowed in a backend:

[ALERT](76213) : config : parsing [/usr/local/etc/haproxy/haproxy.conf:228] 
: tcp-request connection is not allowed because backend stream_be is not a 
frontend

So there is likely a simple solution to allow these at either end.

A+
Dave



Re: [PATCH] FEATURE/MAJOR: Add upstream-proxy-tunnel feature

2024-06-11 Thread Dave Cottlehuber
On Mon, 10 Jun 2024, at 22:09, Aleksandar Lazic wrote:
> It is now possible to set headers for the upstream proxy via
> "tcp-request connection upstream-proxy-header":
>
> ```
> tcp-request connection upstream-proxy-header Host www.test1.com
> tcp-request connection upstream-proxy-header Proxy-Authorization "basic base64-value"
> ```

Thanks Alex!

## sending CONNECT & headers

A simple `listen` server works, but a split frontend/backend one doesn't,
no headers are present in tcpdump/ngrep nor in debug.

I read the header iteration function and I'm not sure what the difference
is, I guess the backend doesn't see the frontend header structure?

### works

listen stream_fe
  bind :::443 v4v6
  mode tcp
  option tcplog
  tcp-request connection upstream-proxy-header HOST www.httpbun.com
  server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000

## headers missing when split frontend/backend

frontend stream_fe
  bind :::443 v4v6
  mode tcp
  option tcplog
  tcp-request connection upstream-proxy-header HOST www.httpbun.com
  default_backend stream_be

backend stream_be
  server stream www.httpbun.com:443 upstream-proxy-tunnel 123.45.67.89:8000

In the failing case, `mtrash->orig` shows it as empty when I uncomment
your DPRINTF line. Looking at the startup log, it captures the header from
the config correctly:

### debug
... config phase ...

Header name :HOST:
Header value :www.httpbun.com:
name  :HOST:
value :www.httpbun.com:

... so far so good ...

... proxy phase ...

HTTP TUNNEL SEND start
proxy->id :stream_be:
hostname: www.httpbun.com
trash->data :38:
connect_length :39:
trash->data :40:
trash->orig :CONNECT www.httpbun.com:443 HTTP/1.1

... there should be more in orig here ...



The working single `listen` version shows iteration over the headers:

list each name  :HOST:
list each value :www.httpbin.org:

Built with: 
$ gmake -j32 USE_ZLIB=1 USE_OPENSSL=1 USE_THREAD=1 USE_STATIC_PCRE2=1 
USE_PCRE2_JIT=1 TARGET=freebsd DEFINE='-DFREEBSD_PORTS -DDEBUG_FULL'

Run with:
$ ./haproxy -d -db -V -f /usr/local/etc/haproxy/haproxy.conf

Either way, I didn't get to make a TCP connection through; this might need
some more tcpdump work tomorrow.

A+
Dave



Re: Now a Working Patchset (was: Re: Patch proposal for FEATURE/MAJOR: Add upstream-proxy-tunnel feature)

2024-06-07 Thread Dave Cottlehuber
On Thu, 6 Jun 2024, at 22:57, Aleksandar Lazic wrote:
> Hi.
>
> I was able to create a working setup with the attached patches. I'm pretty
> sure that the patch will need some adaptations until it's ready to commit
> to the dev branch.
>
> It would be nice to get some feedback.

Hi Alex,

This is pretty exciting, thanks! I rebased Brent's old patches last year
for IIRC 2.5, but couldn't figure out how to inject some headers for
TCP mode. Your C is better than mine already.

Patches compiled fine against 3.0.0. Minor nits:

- examples/upstream-proxy-squid.conf needs the ^M line endings removed.
- a few trailing whitespace characters and stray tabs in the diff should go,
  in upstream-proxy.cfg, include/haproxy/server-t.h and src/connection.c

I couldn't quite understand how to use the upstream-proxy.cfg example:

   server https_Via_Proxy1 www.test1.com:4433 upstream-proxy-tunnel 127.0.0.1:3128 upstream-proxy-header-host "www.test1.com:4433" sni str(www.test1.com) init-addr 127.0.0.1

but what is the purpose of each of the fields here?

   server https_Via_Proxy1
- name as usual
   www.test1.com:4433
- is this the URL we are requesting to proxy?
- not sure why it's needed here
   upstream-proxy-tunnel 127.0.0.1:3128
- ok, this is the upstream proxy we are connecting to
   upstream-proxy-header-host "www.test1.com:4433"
- not sure why it's needed here
   sni str(www.test1.com)
- I assume I can add this from a fetch
- i.e. dynamic for each connection?
   init-addr 127.0.0.1
- I assume this is only needed for test

We have the requested URL 3x here, and I'm not clear why that's required.
Aren't they always the same?

Is it possible to have that URL set from, say, an SNI sniffer fetch, similar to
https://www.haproxy.com/blog/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension ?

My scenario:

I have a very similar setup (done outside haproxy), where I sniff the SNI
header, compare it to a dynamic allow list, and then forward traffic through
the firewall with CONNECT. To track usage, a custom header is prepended on
CONNECT. We're not decrypting the TLS session, to preserve the privacy of the
message, though not of the destination.

Here's your setup, with a slight amendment to match what I'm doing:
 
> Just for my clarification is the following setup now possible with HAProxy 
> with all the new shiny features  :-)

$ curl https://httpbun.com/
...
client => frontend
  sniff SNI request, check in ACL: are you allowed to go here?
  big brother is watching.
  |
  \-> backend server dest1 IP:port
        |
        \-> call "CONNECT httpbun.com:443" on upstream proxy
              |
              send additional headers, like
                host: httpbun.com:443
                authentication: bearer abc123
              |
              upstream replies HTTP/1.1 200 OK
              |
              now we switch to tunneling, and fwd the original TLS traffic
              |
              \-> TCP FLOW to destination IP


In my case I would have to vary the httpbun.com in both the CONNECT and Host:
headers, for each allowed domain.

In practice I could create lots of backends, one per SNI name, if it's not
possible to use the inspected SNI name directly.
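
Something like this is what I have in mind, as a minimal sketch (map path
and backend names invented, not from your patch):

frontend sni_sniffer
  bind :443
  mode tcp
  tcp-request inspect-delay 5s
  tcp-request content accept if { req.ssl_hello_type 1 }
  # look the sniffed SNI up in an allow-list map of "name backend"
  # pairs, falling back to a rejecting backend for unknown names
  use_backend %[req.ssl_sni,lower,map(/usr/local/etc/haproxy/sni.map,be_reject)]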

A+
Dave
———
O for a muse of fire, that would ascend the brightest heaven of invention!



proxy CONNECT + custom headers

2023-12-02 Thread Dave Cottlehuber
hi,

Can haproxy support the following backend scenario?

- use HTTP CONNECT to establish a proxy connection
- send custom HTTP header with the CONNECT method
- then switch to tunnel mode to allow custom TLS protocol through

I've not found anything really useful in RFC 7231 about whether this is a
common scenario, and while trying to implement it with haproxy, I land on
either:

- add headers in `mode http`, but can't handle TLS protocol
- use `mode tcp`, but then can't add custom header

https://www.rfc-editor.org/rfc/rfc7231#section-4.3.6 
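
For reference, the wire exchange I'm after looks like this (values
illustrative, reusing the token from the snippet below):

CONNECT remote.ip:12345 HTTP/1.1
Host: remote.ip:12345
Authorization: Bearer mytoken123

HTTP/1.1 200 OK

... after which opaque TLS bytes flow in both directions ...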


## haproxy.cfg snippet

frontend tunnel80_fe
  bind 10.0.0.1:80
  mode http
  ## strictly speaking the headers aren't part of CONNECT
  http-request set-header Authorization "Bearer mytoken123"
  default_backend tunnel80_be

backend tunnel80_be
  mode http
  server tunnel80 remote.ip:12345

frontend tunnel443_fe
  bind 10.0.0.1:443
  mode tcp
  ## ignored because we're in tcp mode
  http-request set-header Authorization "Bearer mytoken123"
  default_backend tunnel443_be

backend tunnel443_be
  mode tcp
  server tunnel443 remote.ip:12346

A+
Dave



Re: lua workers and peer stick tables

2022-09-07 Thread Dave Cottlehuber
> On Wed, Sep 07, 2022 at 09:04:44PM +0000, Dave Cottlehuber wrote:
>> hi,
>> 
>> I'm working towards dumping a list of top N http requesters via a
>> lua-driven HTTP response, from a peer synced table.
>> 
>> The first stage is to dump without peers. I have found the stick table
>> object, but can't call any of the info, dump, or lookup methods on it.

The first part was trivial! See gist for details.

** Use : not . for methods **

https://gist.github.com/dch/63dd70f626b4203c2769298c9c371958

which produces this:
{
  ":::127.0.0.1": 397,
  ":::172.16.2.21": 103,
  "::1": 5732
}
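
For anyone who lands here, a minimal sketch of the fix (the iteration over
the dump result is my own illustration):

-- in Lua, st:dump() is sugar for st.dump(st), passing the table as
-- `self`; plain st.dump() passes nothing, hence the earlier
-- "bad argument #1 to 'dump' ((null))" error
local st = core.backends.st_src_global.stktable
local entries = st:dump()       -- works
for key, _ in pairs(entries) do
  core.Info(tostring(key))
end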

The second part: is it possible to access peer stick tables?

I don't see them in the objects listed by Thierry, nor when recursively
dumping the core object.

https://www.arpalert.org/src/haproxy-lua-api/2.6/#external-lua-libraries

> just a quick response to confirm it reached the list. Thanks for your
> patience and sorry for the inconvenience.

not at all! Thank-you.

A+
Dave



lua workers and peer stick tables

2022-09-07 Thread Dave Cottlehuber
hi,

I'm working towards dumping a list of top N http requesters via a
lua-driven HTTP response, from a peer synced table.

The first stage is to dump without peers. I have found the stick table
object, but can't call any of the info, dump, or lookup methods on it.

Using this example[0] from the blog[1], I end up with this lua snippet:

NB s/dump/info/ below to work around possible url spam filtering...


-- stick.lua
core.Alert("lua: stick loaded");

local function list(applet)
require("print_r")
-- unused -- local filter = {{"data.http_req_rate", "gt", 100}}
local st = core.backends.st_src_global.stktable

-- yes we have a sticky table
print_r(st, true, function(msg) io.write(msg) end)

-- this crashes oh but why
local dump = st.dump()
print_r(info, true, function(msg) io.write(msg) end)

-- gotta return something stringy for the moment
local body = '{}'
applet:set_status(200)
applet:add_header("content-length", string.len(body))
applet:add_header("content-type", "application/json")
applet:start_response()
applet:send(body)
end

core.register_service("stick", "http", list)
---

After hitting the URL a few times, the table has some entries:

$ echo 'show table st_src_global data.http_req_rate gt 0' \
| socat stdio /tmp/haproxy.sock

# table: st_src_global, type: ip, size:1048576, used:2
0x82e8aa490: key=127.0.0.1 use=0 exp=591419 http_req_rate(60)=4
0x94d38a3d0: key=127.0.0.2 use=0 exp=589307 http_req_rate(60)=3

and the debug log shows in beautiful colour the first print_r,
but the second fails:

(table) HAProxy class StickTable [
METATABLE: (table) table: 0x82a611980 [
"__tostring": (function) function: 0x82a614470
"__index": (table) table: 0x82a6119c0 [
"info": (function) function: 0x307690
"dump": (function) function: 0x307b60
"lookup": (function) function: 0x307880
]
]
0: (userdata) userdata: 0x82a75ef00
]

[ALERT](47889) : Lua applet http '': [state-id 0]
  runtime error: stick.lua:14: bad argument #1 to 'dump' ((null))
  from [C]: in field 'info', stick.lua:14: in function line 9.

How do I use the info/dump/lookup methods on it? 

Finally, how do I walk the lua object tree to find a peer stick
table? Is it exposed somehow?

A+
Dave

[0]: https://gist.githubusercontent.com/haproxytechblog/af7f4678e0457b147ec487c52ed01be6/raw/33e9d66b32207492cb16a78f5eed131daa695d2b/blog20180921-03.cfg
[1]: https://www.haproxy.com/blog/introduction-to-haproxy-stick-tables/
[2]: https://www.arpalert.org/src/haproxy-lua-api/2.6/



spoe - capturing & mirroring traffic

2022-08-11 Thread Dave Cottlehuber
I'd like to capture & mirror HTTP traffic for a few days, to collect
some truly representative load-generating traffic.

https://www.haproxy.com/blog/haproxy-traffic-mirroring-for-real-world-testing/

There seem to be a few general options:

- use spoe & spoa-mirror
- some bpf/tcpdump powered capture & replay tool

spoa-mirror is in general more interesting, as we can take
care of TLS termination and get all the haproxy goodness,
but it seems to be unmaintained.

Is anybody aware of alternatives using the SPOE engine?

BTW I have found goreplay which looks very nice for BPF
style capture, but having the full filtering & acl power
of haproxy is definitely a better option.

https://github.com/haproxytech/spoa-mirror
https://goreplay.org/

A+
Dave



Re: testing and validating complex haproxy.conf rules

2020-03-31 Thread Dave Cottlehuber
On Tue, 31 Mar 2020, at 07:53, Aleksandar Lazic wrote:
> Hi Dave.
> 
> On 31.03.20 09:24, Dave Cottlehuber wrote:
> > hi all,
> > 
> > Our main haproxy.conf has practically become sentient... it's reached the
> > point where the number of url redirects and similar incantations is very
> > hard to reason about, and certainly not test or validate, until it's
> > shipped. In fact I deploy to a "B" cluster node, and verify most changes
> > on a spare production node. This is not always possible to ensure that
> > existing acls and url redirects aren't broken by the changes.
> > 
> > For example:
> > 
> > https://%[hdr(host)]%[url,regsub(/$,)] ...
> > 
> > didn't do what the person who deployed it thought it did - easy enough to
> > fix. How could we have tested this locally before committing it?
> > 
> > Is there any easy-ish way to try out these rules, almost like you
> > could in a REPL?
> > 
> > Once we've written them, and committed them to our ansible repos, is there
> > any way to unit test the whole config, to avoid regressions?
> > 
> > 90% of these commits relate to remapping and redirecting URLs from patterns.
> 
> Please can you tell us which version of HAProxy and some more details 
> from the config.
> Maybe you can split the redirects, for example can you use a map for 
> the host part.

thanks Aleks,

In this case it's haproxy 2.1, and the config is complex. 

This is a generic problem, not one for a single rule -- I need to find a way
to let other people "unit test" their changes before committing and, once
committed, to validate that the most recent change doesn't break existing
functionality (more unit tests, but over the whole config), so we avoid
breaking production. I can spin up a full staging environment if necessary,
but I'm hoping somebody has a clever hack to avoid this.

Our newer stuff looks a bit like this with a map file:

  http-request redirect code 301 location %[capture.req.uri,map(/usr/local/etc/haproxy/redirects.map)] if { capture.req.uri,map(/usr/local/etc/haproxy/redirects.map) -m found }
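
For illustration, the map file is just one key/value pair per line (these
entries are invented):

# /usr/local/etc/haproxy/redirects.map
/old-page      https://example.com/new-page
/legacy/api    https://api.example.com/v2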

but there are hundreds of ACLs that can overlap with, or even override, the
straightforward logic of the map. That's what I need to find a way to deal with.

A+
Dave



testing and validating complex haproxy.conf rules

2020-03-31 Thread Dave Cottlehuber
hi all,

Our main haproxy.conf has practically become sentient... it's reached the
point where the number of url redirects and similar incantations is very
hard to reason about, and certainly not test or validate, until it's
shipped. In fact I deploy to a "B" cluster node, and verify most changes
on a spare production node. This is not always possible to ensure that
existing acls and url redirects aren't broken by the changes.

For example:

https://%[hdr(host)]%[url,regsub(/$,)] ...

didn't do what the person who deployed it thought it did - easy enough to
fix. How could we have tested this locally before committing it?

Is there any easy-ish way to try out these rules, almost like you
could in a REPL?

Once we've written them, and committed them to our ansible repos, is there
any way to unit test the whole config, to avoid regressions?

90% of these commits relate to remapping and redirecting URLs from patterns.

A+
Dave



Re: 1.9b6 301 redirect anomaly

2018-11-15 Thread Dave Cottlehuber
On Thu, 15 Nov 2018, at 14:49, Christopher Faulet wrote:
> Le 15/11/2018 à 11:14, Dave Cottlehuber a écrit :
> > bonjour list,
> > 
> > In comparison to 1.8 (and probably 1.9b5 but I can't verify that at 
> > present) the 301 redirect seems to be handled differently. Initially I 
> > thought this was an HTTP2 issue but it 's reproducible on HTTP/1.1 as well.
> > 
> Hi Dave,
> 
> A bug was introduced in commit 6b952c810 in the way the request's
> uri is captured, so it has existed since 1.9-dev2. Could you test the
> attached patch to confirm the fix?

Merci beaucoup Christopher, it works!

> GET /api HTTP/1.1
> Host: logs.example.com
> User-Agent: curl/7.62.0
> Accept: */*
> 
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: https://logs.example.com/api

A+
Dave



1.9b6 301 redirect anomaly

2018-11-15 Thread Dave Cottlehuber
bonjour list,

In comparison to 1.8 (and probably 1.9b5, but I can't verify that at present)
the 301 redirect seems to be handled differently. Initially I thought this was
an HTTP/2 issue but it's reproducible on HTTP/1.1 as well.

curl --http1.1 -4vsSLo /dev/null https://logs.example.com/ (> h11_18.log or
h11_19b6.log); the full diff of the logs is at the end of the email.

Interestingly this is handled differently in browsers - Firefox complies
strictly with the redirect and eventually exceeds its acceptable URL length,
appending the %20HTTP/1.1 each time. Chrome and its forks seem to ignore it.

the curl output is identical until:

 |< HTTP/1.1 301 Moved Permanently
 |< Content-length: 0
-|< Location: https://example.com/
+|< Location: https://example.com/ HTTP/1.1

where we can see that under 1.9b6 the HTTP/1.1 sneaks in.

And as we are using the -L follow option in curl we see the incorrect URL being 
propagated back:

 |* Connection #0 to host logs.example.com left intact
-|* Issue another request to this URL: 'https://example.com/'
+|* Issue another request to this URL: 'https://example.com/ HTTP/1.1'
 |*   Trying 95.216.20.215...

I've tried fiddling with my 301 settings in haproxy.conf to no avail.

environment:

FreeBSD 11.2Rp4 amd64

internet -> haproxy :443 (ipv4 or ipv6) -> h2o 2.3.0b1 backend for serving 
actual files.

# /usr/local/etc/haproxy/haproxy.conf
... blah
  bind ipv4@:443 ssl alpn h2, crt ...
  bind ipv6@:443 ssl alpn h2, crt ...
  # redirect anything that doesn't match our ACLs or isn't TLS
  http-request redirect code 301 location https://example.com%[capture.req.uri] unless www or api or beta
  http-request redirect scheme https code 301 if !{ ssl_fc }

I can provide full actual logs / configs off-list if needed, and it's quick to
switch in versions of haproxy for validation or to get pcap traces. Just bug me
(dch) on IRC in the #haproxy channel.

haproxy -vvv
HA-Proxy version 1.9-dev6 2018/11/11 
*** or HA-Proxy version 1.8.14-52e4d43 2018/09/20
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv 
-Wno-address-of-packed-member -Wno-unused-label -Wno-sign-compare 
-Wno-unused-parameter -Wno-ignored-qualifiers -Wno-missing-field-initializers 
-Wno-implicit-fallthrough -Wtype-limits -Wshift-negative-value 
-Wnull-dereference -DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 
USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.42 2018-03-20
Running on PCRE version : 8.42 2018-03-20
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with Lua version : Lua 5.3.5
Built with OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available multiplexer protocols :
(protocols marked as <default> cannot be specified using 'proto' keyword)
<default> : mode=TCP|HTTP   side=FE|BE
  h2 : mode=HTTP   side=FE

Available filters :
[TRACE] trace
[COMP] compression
[SPOE] spoe

patdiff of curl output between http/1.1 and 1.8 vs http/1.1 and 1.9b6:

-- h11_18.log
++ h11_19b6.log
@|-39,16 +39,16 
 |> User-Agent: curl/7.62.0
 |> Accept: */*
 |> 
 |{ [5 bytes data]
 |< HTTP/1.1 301 Moved Permanently
 |< Content-length: 0
!|< Location: https://example.com/ HTTP/1.1
 |< 
 |* Connection #0 to host logs.example.com left intact
-|* Issue another request to this URL: 'https://example.com/'
+|* Issue another request to this URL: 'https://example.com/ HTTP/1.1'
 |*   Trying 95.216.20.215...
 |* TCP_NODELAY set
 |* Connected to example.com (95.216.20.215) port 443 (#1)
 |* ALPN, offering http/1.1
 |* successfully set certificate verify locations:
 |*   CAfile: /usr/local/share/certs/ca-root-nss.crt
@|-79,36 +79,25 
 |*  start date: Sep  6 19:45:18 2018 GMT
 |*  expire date: Dec  5 19:45:18 2018 GMT
 |*  subjectAltName: host "example.com" matched cert's "example.com"
 |* 

Re: H2O - an optimized HTTP server

2018-09-28 Thread Dave Cottlehuber
On Sat, 29 Sep 2018, at 00:31, Aleksandar Lazic wrote:
> Hi.
>
> Have anyone used this server in production setup behind haproxy?
>
> https://h2o.examp1e.net/

Yes, for the last 2 years at least, but from a pure speed and http2
perspective you're best off running them beside each other. It's a solid
web server and the embedded mruby is very useful, but its proxy support is
still primitive. I use its OCSP script to handle things for haproxy though.
A+
Dave



Re: HA Proxy Source IP Issue

2018-09-17 Thread Dave Cottlehuber
On Mon, 17 Sep 2018, at 13:04, Damen Barker wrote:
> Hi There
>
> We are running 1.6, the issue we are facing is that my backend servers
> are seeing the incoming IP address of the HAProxy server and not the
> client IP address and our application needs to see this. Please see
> below our configuration and if you can offer any advice that would be
> greatly received.

Welcome Damen.

See 
https://cbonte.github.io/haproxy-dconv/1.7/configuration.html#4-option%20forwardfor

option forwardfor

and adjust your application accordingly. Sometimes x-real-ip is used, or the
application can support the PROXY protocol; you'll need to check what's
possible -- https://www.haproxy.com/blog/haproxy/proxy-protocol/ . The PROXY
protocol was invented IIRC by Willy for haproxy, but it's really widespread
now in other applications, as a generic non-HTTP-specific way of providing
the inbound IP address to proxied applications.
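
A minimal sketch of the forwardfor approach (backend name and address
invented):

backend app_be
  mode http
  # append the client IP as an X-Forwarded-For header on each request
  option forwardfor
  server app1 10.0.0.10:8080 check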

A+
Dave



HTTP/2 frames with websocket permessage-deflate option

2018-04-11 Thread Dave Cottlehuber
I've been taking HTTP/2 for a spin, using a phoenix[1] app with websockets. The 
basic "does it connect" works very well already (thank-you!) but I'm not sure 
if it's possible to enable per-frame compression within websockets or not -- or 
even intended?

My use case is to reduce the size of JSON blobs traversing a websocket 
connection, where a reasonable portion of frames contain almost-identical JSON  
from one to the next:

http/1.1 backend connection upgraded to websockets
   |
   | JSON blobs...
   |
haproxy
   |
   | JSON blobs...
   |
http/2 frontend to browser (using TLS obviously) 

I can see that my endpoints are requesting the permessage-deflate option, but
haproxy is not returning that header to indicate its support for it.
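
For reference, the negotiation I mean looks like this on the wire
(illustrative, per RFC 7692):

GET /socket HTTP/1.1
Host: app.example
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Extensions: permessage-deflate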

While haproxy has no way of knowing whether a particular stream would benefit
from compression, the application developer *does* know, and I could ensure
that compressible websocket requests use a different endpoint, or some form
of header + ACL, to enable that, for example.

Some thoughts:

- in general, I prefer to keep away from compression over TLS because of BREACH 
and CRIME vulnerability classes
- this long-running websockets connection is particularly interesting for
compression, however, as the compression tables are apparently maintained
across sequential frames on the client

Is this something that might come in future releases, or do you feel it's
better left out due to compression overhead and vulnerability risks?

[1]: http://phoenixframework.org/

$ haproxy -vv
HA-Proxy version 1.8.6 2018/04/05
Copyright 2000-2018 Willy Tarreau 

Build options :
  TARGET  = freebsd
  CPU = generic
  CC  = cc
  CFLAGS  = -O2 -pipe -fstack-protector -fno-strict-aliasing 
-fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow 
-Wno-address-of-packed-member -Wno-null-dereference -Wno-unused-label 
-DFREEBSD_PORTS
  OPTIONS = USE_GETADDRINFO=1 USE_ZLIB=1 USE_CPU_AFFINITY=1 USE_ACCEPT4=1 
USE_REGPARM=1 USE_OPENSSL=1 USE_STATIC_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with network namespace support.
Built with zlib version : 1.2.11
Running on zlib version : 1.2.11
Compression algorithms supported : identity("identity"), deflate("deflate"), 
raw-deflate("deflate"), gzip("gzip")
Built with PCRE version : 8.40 2017-01-11
Running on PCRE version : 8.40 2017-01-11
PCRE library supports JIT : yes
Built with multi-threading support.
Encrypted password support via crypt(3): yes
Built with transparent proxy support using: IP_BINDANY IPV6_BINDANY
Built with OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
Running on OpenSSL version : OpenSSL 1.0.2o-freebsd  27 Mar 2018
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2

Available polling systems :
 kqueue : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use kqueue.

Available filters :
[TRACE] trace
[COMP] compression
[SPOE] spoe



skip logging some query parameters during GET request

2018-03-13 Thread Dave Cottlehuber
Hi,

I'm using haproxy to handle TLS termination to a 3rd party API that requires 
authentication (username/password) to be passed as query parameters to a GET 
call.

I want to log the request as usual, just not all the query parameters.
Obviously for a POST the parameters would not be logged at all, but is it
possible to teach haproxy to exclude one specific query parameter on a GET
request?

the request:

GET /api?username=seriously&password=ohnoes&command=locate&item=chocolat

desired log something like:

GET /api?username=seriously&command=locate&item=chocolat

I can do this downstream in rsyslog but I'd prefer to cleanse the URLs up front.
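
One approach I'm considering, sketched from the docs (the variable name and
log-format are my own, untested):

frontend api_fe
  mode http
  # keep a scrubbed copy of the URL, for logging only
  http-request set-var(txn.scrubbed) url,regsub(password=[^&]*,password=REDACTED)
  log-format "%ci:%cp [%tr] %ft %b/%s %ST %B %{+Q}[var(txn.scrubbed)]"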

A+
Dave



Re: TLS termination with 2 certs on same IP

2018-03-02 Thread Dave Cottlehuber
On Fri, 2 Mar 2018, at 01:40, Lukas Tribus wrote:
> On 2 March 2018 at 01:09, Dave Cottlehuber  wrote:
> > I have 2 TLS cert bundles that I'd like to serve off haproxy, using a 
> > single IP. Both certs have multiple SANs in them.
> 
> Yes. You don't need TCP mode and manual SNI matching at all. Haproxy
> will do all those things for your automatically. The article is
> specifically about content switching TCP payload based on SNI, but
> that's not you usecase (not of you want a simple and build-in
> solution).
> 
> The point is: you can specify multiple certificate or even directories
> with the "crt" keyword.

Thanks Lukas

This indeed works and is much simpler.
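
For the archives, the simpler setup looks roughly like this (cert paths as
in my config, the rest illustrative):

frontend example_https
  bind 1.2.3.4:443 ssl crt /usr/local/etc/ssl/keys/example.com.pem crt /usr/local/etc/ssl/keys/letsencrypt.example.com.pem
  mode http
  # haproxy selects the matching certificate per SNI automatically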

FWIW I had this config previously and it wasn't working; I'd assumed my
haproxy config was incorrect, but in fact one of the TLS certs had an
incorrect intermediate certificate. Once that was fixed I could revert to
the expected setup.

A+
Dave



TLS termination with 2 certs on same IP

2018-03-01 Thread Dave Cottlehuber
I have 2 TLS cert bundles that I'd like to serve off haproxy, using a single 
IP. Both certs have multiple SANs in them.

- our main production site: api,beta,www.example.com using EV cert
- a lets-encrypt cert bundle for old DNS names that we only need to redirect 
https: back to the main site
 
I've followed
https://www.haproxy.com/blog/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/
and updated it a bit. Does this look sensible? Is there a simpler way to do
this?

#
frontend example_sniffer
  bind 1.2.3.4:443
  bind [1:2:3::4]:443
  mode tcp
  tcp-request inspect-delay 5s
  tcp-request content accept if { req.ssl_hello_type 1 }
  acl redirect req.ssl_sni -i www.example.com.au blog.example.com
  use_backend example_tls_forwarder if redirect
  default_backend example_http_https_be

backend example_http_https_be
  mode tcp
  server example_fe [::1]:10443

backend example_tls_forwarder
  mode tcp
  server example_fe [::1]:10444

frontend example_http_https
  bind [::1]:80
  bind [::1]:10443 ssl crt /usr/local/etc/ssl/keys/example.com.pem
  bind [::1]:10444 ssl crt /usr/local/etc/ssl/keys/letsencrypt.example.com.pem
  # redirect letsencrypt requests
  acl url_acme path_beg /.well-known/acme-challenge/
  use_backend acme_backend if url_acme
  # redirect traffic to beta or prod jail as required
  acl iwmn_prod hdr(host) example.com api.example.com
  acl iwmn_beta hdr(host) beta.example.com
  # redirect main site urls
  acl valid_host hdr(host) example.com api.example.com beta.example.com
  http-request redirect code 301 location https://example.com%[capture.req.uri] unless valid_host
  use_backend prod_backend if iwmn_prod
  default_backend imsorry_backend
  # ... backends

thanks
Dave



Re: HaProxy Hang

2017-06-07 Thread Dave Cottlehuber
On Wed, 7 Jun 2017, at 10:42, David King wrote:
> Just to close the loop on this, last night was the time at which we were
> expecting the next hang. All of the servers we updated haproxy to the
> patched versions did not hang. The test servers which were running the
> older version hung as expected
> 
> Thanks so much to everyone who fixed the issue!

Same here, although as we patched everything we had no issues at all :D
Merci beaucoup!

A+
Dave



Re: HaProxy Hang

2017-04-04 Thread Dave Cottlehuber
On Wed, 5 Apr 2017, at 01:34, Lukas Tribus wrote:
> Hello,
> 
> 
> Am 05.04.2017 um 00:27 schrieb David King:
> > Hi Dave
> >
> > Thanks for the info, So interestingly we had the crash at exactly the 
> > same time, so we are 3 for 3 on that
> >
> > The setups sounds very similar, but given we all saw issue at the same 
> > time, it really points to something more global.
> >
> > We are using NTP from our firewalls, which in turn get it from our 
> > ISP, so i doubt that is the cause, so it could be external port 
> > scanning which is the cause as you suggest. or maybe a leap second of 
> > some sort?
> >
> > Willy any thoughts on the time co-incidence?
> 
> Can we be absolutely positive that those hangs are not directly or 
> indirectly caused by the bugs Willy already fixed in 1.7.4 and 1.7.5, 
> for example from the ML thread "Problems with haproxy 1.7.3 on FreeBSD 
> 11.0-p8"?
>
> There maybe multiple and different symptoms of those bugs, so even if 
> the descriptions in those threads don't match your case 100%, it may 
> still caused by the same underlying bug.

I'll update from 1.7.3 to 1.7.5 with those goodies tomorrow and see how
that goes.

A+
Dave



Re: HaProxy Hang

2017-04-03 Thread Dave Cottlehuber
On Mon, 13 Mar 2017, at 13:31, David King wrote:
> Hi All
> 
> Apologies for the delay in response, i've been out of the country for the
> last week
> 
> Mark, my gut feeling is that is network related in someway, so thought we
> could compare the networking setup of our systems
> 
> You mentioned you see the hang across geo locations, so i assume there
> isn't layer 2 connectivity between all of the hosts? is there any back
> end
> connectivity between the haproxy hosts?

Following up on this, some interesting points but nothing useful.

- Mark & I see the hang at almost exactly the same time on the same day:
2017-02-27T14:36Z give or take a minute either way

- I see the hang in 3 different regions using 2 different hosting
providers on both clustered and non-clustered services, but all on
FreeBSD 11.0R amd64. There is some dependency between these systems but
nothing unusual (logging backends, reverse proxied services etc).

- our servers don't have a specific workload that would allow them all
to run out of some internal resource at the same time, as their reboot
and patch cycles are reasonably different - typically a few days elapse
between first patches and last reboots unless it's deemed high risk

- our networking setup is not complex but typical FreeBSD:
- LACP bonded Gbit igb(4) NICs
- CARP failover for both ipv4 & ipv6 addresses
- either direct to haproxy for http & TLS traffic, or via spiped to
decrypt intra-server traffic 
- haproxy directs traffic into jailed services
- our overall load and throughput is low but consistent
- pf firewall
- rsyslog for logging, along with riemann and graphite for metrics
- all our db traffic (couchdb, kyoto tycoon) and rabbitmq go via haproxy
- haproxy 1.6.10 + libressl at the time

As I'm not one for conspiracy theories or weird coincidences, somebody
port scanning the internet with an Unexpectedly Evil Packet Combo seems
the most plausible explanation.  I cannot find an alternative that would
fit the scenario of 3 different organisations with geographically
distributed equipment and unconnected services reporting an unusual
interruption on the same day and almost the same time.

Since then, I've moved to FreeBSD 11.0p8, haproxy 1.7.3 and latest
libressl and seen no recurrence, just like the last 8+ months or so
since first deploying haproxy on FreeBSD instead of debian & nginx.

If the issue recurs I plan to run a small cyclic traffic capture with
tcpdump and wait for a repeat; see
https://superuser.com/questions/286062/practical-tcpdump-examples
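
Something like this ring-buffer capture (interface and sizes illustrative):

# rotate through 10 files of ~100 MB each; -C is millions of bytes per
# file, -W caps the file count before overwriting the oldest
$ tcpdump -i igb0 -C 100 -W 10 -w /var/tmp/lb-hang.pcap port 443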

Let me know if I can help or clarify further.

A+
Dave



Re: Force connection close after a haproxy reload

2017-03-15 Thread Dave Cottlehuber
On Wed, 15 Mar 2017, at 12:02, Willy Tarreau wrote:
> Hi Cyril!
> 
> On Wed, Mar 15, 2017 at 11:48:01AM +0100, Cyril Bonté wrote:
> > As a reminder (to me), I sent a patch in december (just before the 1.7.0
> > release), which immediately closes the HTTP keep-alived connections.
> > Currently, during the soft stop, HTTP connections are only closed when a
> > request is processed, it doesn't do anything on connections already in an
> > idle state.
> 
> Ah yes I vaguely remember about this discussion now.
> 
> > I didn't spend more time on it but having a quick look at it, it may be 
> > ready
> > to merge soon.
> 
> Cool!
> 
> > About TCP connections, while I wrote the patch, I was thinking about a 
> > global
> > "grace timeout", which will enforce haproxy exit if the soft stop takes too
> > long (for example when tcp connections don't expire). Something like :
> > 
> > global
> >   grace 30s

Yes please. I have a reasonable number of websocket connections that run for
hours or days. I'd much prefer having an operational guarantee that a
restart/reload will take no longer than 5 minutes, by which time all of the
transactional HTTP-only (non-upgraded) connections will have long since
closed.

A+
Dave 



Re: HAProxy stops handling or accepting connections

2017-02-28 Thread Dave Cottlehuber
On Tue, 28 Feb 2017, at 06:24, Mark S wrote:
> Hi Folks,
> 
> This is a strange one and I haven't yet been able to duplicate.  But I  
> wanted to report the description of what did happen in case it was either 
> a known issue or one that would seem likely based on the code.
> 
> The servers in question are running HAProxy 1.7.1 on FreeBSD-11.

I am working through similar symptoms from yesterday  (FreeBSD 11p2
kernel + p7 userland, haproxy 1.6.10) where all 4 load balancers in
different regions locked up, around the same time. I'm also struggling
to identify anything that we did that might have triggered this. The
only correlation I've found so far is a temporary loss of network on our
frontend web servers, and we seem not to have had anybody deploying
stuff around that time.

- stats page is inaccessible
- front end & back end seem to be disconnected
- there was only a single haproxy instance running
- dtruss showed only kqueue(0,0,0) = 22 (EINVAL) continuously
- system logs are blank, although shortly afterwards the box panicked
completely

A+
Dave




Re: [PATCHES] Add support for LibreSSL 2.5.1

2017-02-10 Thread Dave Cottlehuber
On Fri, 10 Feb 2017, at 16:21, Piotr Kubaj wrote:
> Please try the corrected patches. Before Haproxy was kind of unstable.
> Now it seems to work fine. I also changed tests for defined
> LIBRESSL_VERSION_NUMBER to testing LibreSSL version to keep the older
> versions working.
> 
> On 17-02-10 13:48:20, Piotr Kubaj wrote:
> > I'm attaching two patches:
> > a) patch-src_ssl__sock.c - it makes possible to build Haproxy against 
> > LibreSSL 2.5.1 at all,
> > b) patch-include_proto_openssl-compat.h - since "auto" ECDHE API doesn't 
> > work OOTB, this patch is also needed
> > 
> > They are against the latest 20170209 snapshot. Please consider merging a) 
> > to stable branches.

Piotr's got a FreeBSD bug in Bugzilla
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=216763 for this
already - thanks!

A+
Dave



Re: Start From Zero concept

2017-02-03 Thread Dave Cottlehuber
This is exactly like Zerg (http://erlangonxen.org/zerg); the requirements
are that haproxy:

- triggers launching a new backend immediately on accepting the tcp
  handshake or ssl initiation
- holds the frontend tcp connection open until the new backend is spun up
- triggers closing the backend on close of that same connection

Maintaining a small pool of pre-initialized backends might be the
simplest way to handle this in practice.

I'm definitely curious if this is possible.

BTW we have come full circle to preforking Apache httpd… PHP style...

A+
Dave

On Wed, 1 Feb 2017, at 05:22, Thilina Manamgoda wrote:
> Hi,
>
> I am Thilina Manamgoda, an undergraduate of the Faculty of Engineering,
> University of Peradeniya, Sri Lanka. What I meant by "Start from Zero" is
> to start a server from stopped state to running state when the first
> request comes. This functionality is needed in the server-less
> architecture concept, where requests are served in that way.
>
> Currently I am working with a *Kubernetes Cluster* where the servers are
> deployed as pods. What I am trying to do is: when the first request comes
> for a server which is in stopped state, a REST call should be made to a
> service which will start the server.
>
> Maybe this is not a functionality that is relevant at the moment for the
> project, but I am trying to implement it and all suggestions are welcome.
>
> regards,
> Thilina Manamgoda

—
  Dave Cottlehuber
  +43 67 67 22 44 78
  Managing Director
  Skunkwerks, GmbH
  http://skunkwerks.at/
  ATU70126204
  Firmenbuch 410811i






Re: 1.7-dev6 build failure on FreeBSD 11.0 amd64 & libressl

2016-11-23 Thread Dave Cottlehuber
> > Am 16.11.2016 um 15:39 schrieb Willy Tarreau:
> > > 
> > > Same here. What is annoying is that every time it appears, it's protected
> > > by a #if OPENSSL_VERSION_NUMBER >= 1.1.0 so that means that LibreSSL is
> > > spoofing OpenSSL version numbers without providing compatibility. If so,
> > > that will become quite painful to support.

I can see how over time this would become quite unsupportable.

> > Something like this (which is already in the code twice) should permit the
> > build:
> > #if (OPENSSL_VERSION_NUMBER >= 1.1.0 && !defined LIBRESSL_VERSION_NUMBER)
> >
> > It will be a mess, and it will unconditionally disable new features for all
> > LibreSSL releases, but I don't see any other easy way out of this.
> 
> I think for the mid-term what we can do is to check what highest openssl
> version LibreSSL is compatible with, and redefine it accordingly. For
> example
> (not correct values) :
> 
> #if LIBRESSL_VERSION_NUMBER >= 1.2.3.4
> #undef OPENSSL_VERSION_NUMBER
> #define OPENSSL_VERSION_NUMBER 1.0.2
> #endif

Would this happen in haproxy itself, or in the FreeBSD port carrying
patches?


Thanks everybody for the suggestions & feedback. 

At present I can safely build all other production ports using LibreSSL,
so I will try to get the FreeBSD port to build against ports
security/OpenSSL statically. It should be possible to have our cake and
eat it too / avoir le beurre et l'argent du beurre...

I'll post back when I get this sorted.

A+
Dave



1.7-dev6 build failure on FreeBSD 11.0 amd64 & libressl

2016-11-15 Thread Dave Cottlehuber
Hi there

I'm running into a build failure for 1.7-dev6 with LibreSSL on FreeBSD
11.0-RELEASE-p3 amd64.  I've no idea if this is a supported combo or not
but it does work with 1.6.9 very nicely already.

cc -Iinclude -Iebtree -Wall -O2 -pipe -fno-omit-frame-pointer 
-fstack-protector -fno-strict-aliasing -DFREEBSD_PORTS -DTPROXY
-DCONFIG_HAP_CRYPT -DUSE_GETADDRINFO -DUSE_ZLIB  -DENABLE_POLL
-DENABLE_KQUEUE -DUSE_CPU_AFFINITY -DUSE_OPENSSL  -DUSE_LUA
-I/usr/local/include/lua53 -DUSE_PCRE -I/usr/local/include
-DUSE_PCRE_JIT  -DCONFIG_HAPROXY_VERSION=\"1.7-dev6-d5d890b\"
-DCONFIG_HAPROXY_DATE=\"2016/11/09\" -c -o ebtree/ebistree.o
ebtree/ebistree.c
src/ssl_sock.c:1966:8: warning: implicit declaration of function
'SSL_CTX_add1_chain_cert' is invalid in C99
[-Wimplicit-function-declaration]
if (!SSL_CTX_add1_chain_cert(ctx, ckch->chain_certs[i]))
{
 ^
src/ssl_sock.c:2270:12: warning: incompatible integer to pointer
conversion assigning to 'pem_password_cb *' (aka 'int (*)(char *, int,
int, void *)') from 'int' [-Wint-conversion]
passwd_cb = SSL_CTX_get_default_passwd_cb(ctx);
  ^ ~~
src/ssl_sock.c:2271:21: warning: incompatible integer to pointer
conversion assigning to 'void *' from 'int' [-Wint-conversion]
passwd_cb_userdata =
SSL_CTX_get_default_passwd_cb_userdata(ctx);
   ^ ~~~
src/ssl_sock.c:3521:6: error: use of undeclared identifier
'OSSL_HANDSHAKE_STATE'
OSSL_HANDSHAKE_STATE state =
SSL_get_state((SSL
*)conn->xprt_ctx);
^
4 warnings generated.
src/ssl_sock.c:3522:24: error: use of undeclared identifier 'state'; did
you mean 'stat'?
empty_handshake = state ==
TLS_ST_BEFORE;
  ^
  stat

full log is here:
https://gist.github.com/dch/929c09cb48fc5dec5e1a99bda2f7d5d5

There's a partial patch here
https://github.com/HardenedBSD/hardenedbsd-ports/commit/b0c5e0fd15cdf9b6059e5c66e66f9e81b4e7f252
via HardenedBSD project but I can't tell if this would break other SSL
library combinations. 

Any suggestions?

Thanks
Dave



Re: Getting JSON encoded data from the stats socket.

2016-11-10 Thread Dave Cottlehuber
On Thu, 10 Nov 2016, at 13:53, Malcolm Turnbull wrote:
> Georg,
> 
> That's a timely reminder thanks:
> I just had another chat with Simon Horman who has kindly offered to
> take a look at this again.

Sounds great!

I'm very interested in logging this continually via a chrooted unix socket,
into both riemann & rsyslog and into graylog/splunk. I'm happy to help test
and contribute documentation as well.

I was planning to use riemann-tools with the CSV format:
https://github.com/riemann/riemann-tools/blob/master/bin/riemann-haproxy
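
In the meantime the raw CSV is already available over the admin socket,
e.g. (socket path assumed):

$ echo 'show stat' | socat stdio /var/run/haproxy.sock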

A+
Dave