HAProxy with CRL from Vault...

2020-03-11 Thread Mehdi Ahmadi
Dear community,

I'm prototyping an example of HAProxy working with (HashiCorp) Vault acting
as the PKI / CA, generating end-user certificates as well as providing CRL +
OCSP (repo below).

To explain briefly: using openssl I generate the root CA and intermediate
certificates on an initial HAProxy host. I then use the intermediate
certificate on a 2nd host to enable Vault and generate the subsequent
end-user certificates from it.

The issue I'm facing is that my previously working end-to-end (user) tests
via curl break once I fetch the CRL file from Vault and set it in
haproxy.conf with `bind ... crl-file mycrl.pem`. HAProxy raises no complaints
and the config is valid (as is the CRL), yet I am no longer able to get any
valid responses:

```
# after: wget http://__VAULT___:8200/v1/pki/crl/pem & setting crl-file in conf + restart

curl -v --cacert allowed1.tld.com.local_cachain.pem \
  --cert allowed1.tld.com.local_bundle.pem https://subdomain.tld.com.local/ ;
# *   Trying 192.168.10.200...
# * TCP_NODELAY set
# * Connected to subdomain.tld.com.local (192.168.10.200) port 443 (#0)
# * ALPN, offering h2
# * ALPN, offering http/1.1
# * successfully set certificate verify locations:
# *   CAfile: allowed1.tld.com.local_cachain.pem
#   CApath: none
# * TLSv1.2 (OUT), TLS handshake, Client hello (1):
# * TLSv1.2 (IN), TLS handshake, Server hello (2):
# * TLSv1.2 (IN), TLS handshake, Certificate (11):
# * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
# * TLSv1.2 (IN), TLS handshake, Request CERT (13):
# * TLSv1.2 (IN), TLS handshake, Server finished (14):
# * TLSv1.2 (OUT), TLS handshake, Certificate (11):
# * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
# * TLSv1.2 (OUT), TLS handshake, CERT verify (15):
# * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
# * TLSv1.2 (OUT), TLS handshake, Finished (20):
# * TLSv1.2 (IN), TLS alert, unknown CA (560):
# * error:1401E418:SSL routines:CONNECT_CR_FINISHED:tlsv1 alert unknown ca
# * Closing connection 0
# curl: (35) error:1401E418:SSL routines:CONNECT_CR_FINISHED:tlsv1 alert unknown ca

# take the crl-file out of the haproxy config and all works ok again :-(
```
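For reference, the relevant bind line is shaped roughly like the sketch below
(names and paths are placeholders; the exact config is in the repo linked at
the end of this mail):

```
# rough sketch only - placeholder names/paths, see 2.install_haproxy.sh for the real thing
frontend fe_https
 bind *:443 ssl crt /etc/haproxy/certs/subdomain.tld.com.local.pem ca-file /etc/haproxy/certs/cachain.pem verify required crl-file /etc/haproxy/certs/mycrl.pem
 default_backend be_app
```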

I've tried a few permutations of the chain files to ensure that my
certificates include both the root & intermediate - however I'm out of ideas
and would be thankful for any guidance, tips or pointers towards anything
obvious I may have missed.
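(In case it helps anyone reproducing this: an offline check along these lines
should show whether the CRL and the client certificate agree with the chain
HAProxy is given - file names as in the curl test above, and the -CRLfile
option assumes a reasonably recent openssl.)

```
# does the CRL verify against the CA chain?
openssl crl -in mycrl.pem -CAfile allowed1.tld.com.local_cachain.pem -noout

# does the client certificate still verify once CRL checking is enabled?
openssl verify -crl_check -CAfile allowed1.tld.com.local_cachain.pem \
  -CRLfile mycrl.pem allowed1.tld.com.local_bundle.pem
```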

Many thanks in advance.

https://github.com/aphorise/hashicorp.vagrant_vault-pki_haproxy
HAProxy conf:
https://github.com/aphorise/hashicorp.vagrant_vault-pki_haproxy/blob/master/2.install_haproxy.sh#L78


httpchk with: http-send-name-header & related...

2016-05-14 Thread Mehdi Ahmadi
When specifying:
```
option httpchk
```
either as a default or specific to a backend, other properties are not passed
or set as part of the health-check request.

For example:
- http-send-name-header
- forwardfor
are not preserved or included in the health-check request.

At present it is possible to simply include the required headers manually,
such as:
```
backend TEST
 option httpchk HEAD / HTTP/1.1\r\nHost:\ fqdn.tld.org
 http-check expect status 200
```
However, where each server has a different identity that should correspond to
the Host value sent with its check, one would need to resort to multiple
backends with varying checks - or to multiple checks per server in each
backend, which may not be possible? In other words, something like the sketch
below.
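(Hostnames, addresses and backend names here are placeholders, purely to
illustrate the duplication implied:)

```
# one near-identical backend per expected Host value, only to vary the check
backend TEST_A
 option httpchk HEAD / HTTP/1.1\r\nHost:\ a.fqdn.tld.org
 http-check expect status 200
 server srv-a 10.0.0.1:8080 check

backend TEST_B
 option httpchk HEAD / HTTP/1.1\r\nHost:\ b.fqdn.tld.org
 http-check expect status 200
 server srv-b 10.0.0.2:8080 check
```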

IMO it would be beneficial to include all related headers and options in
health checks by default, with an added flag or property to disable this and
revert to the current behavior.

Is this a reasonable request & can it be anticipated in a future release?

Thanks to everyone for their continued input & contributions.

Cheers,
Mehdi


Re: Adding backend server name as request header

2016-05-12 Thread Mehdi Ahmadi
It may be that you're after:
```
http-send-name-header X-CustomHeader
```
which sets the name of the selected server in the `X-CustomHeader` request
header sent to that server.
See the documentation for further details:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html
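A minimal sketch of how that would fit your example below (the backend name
is a placeholder, the server line is taken from your mail):

```
backend app
 http-send-name-header X-CustomHeader
 server back1 10.1.0.10:8080 check inter 2
```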



On Thu, May 12, 2016 at 6:29 PM, Dennis Jacobfeuerborn <
denni...@conversis.de> wrote:

> Hi,
> I'm wondering if there is a way to add the name of the server chosen for
> the request as a request header i.e. if the following server is chosen
> for the request:
>
> server back1 10.1.0.10:8080 check inter 2
>
> then I'd like to receive this header on the 10.1.0.10 system:
>
> X-CustomHeader: back1
>
> Is this possible?
>
> Regards,
>   Dennis
>
>


Re: Stale UNIX sockets after reload

2016-05-10 Thread Mehdi Ahmadi
In the case of `stop` - I imagine that the stale / former sockets can be
listed using something like:
`sudo lsof | grep SOMETHING` ?

I'm wondering whether an additional shell-level check / cleanup could be done
in such cases, where the related PIDs have been killed - or perhaps there's a
core part of how the sockets are created and managed that I'm missing. A
rough sketch of what I mean is below.
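(Assuming lsof / ss are available and the sockets live under a known
directory - the path is only a placeholder:)

```
# unix sockets currently held open by running haproxy processes
sudo lsof -U -a -c haproxy

# listening unix sockets with their owning process (alternative view)
sudo ss -xlp | grep haproxy

# socket files still on disk that nothing is listening on any more
for s in /var/run/haproxy/*.sock; do
  sudo ss -xl | grep -qF "$s" || echo "stale: $s"
done
```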

Thanks for the input Willy! :-)


On Tue, May 10, 2016 at 2:15 PM, Willy Tarreau  wrote:

> On Mon, May 09, 2016 at 04:12:32PM +0200, Pavlos Parissis wrote:
> > On 09/05/2016 02:26 , Christian Ruppert wrote:
> > > Hi,
> > >
> > > it seems that HAProxy does not remove the UNIX sockets after reloading
> > > (also restarting?) even though they have been removed from the
> > > configuration and thus are stale afterwards.
> > > At least 1.6.4 seems to be affected. Can anybody else confirm that?
> It's
> > > a multi-process setup in this case but it also happens with binds bound
> > > to just one process.
> > >
> >
> > I can confirm this behavior. I don't think it is easy for haproxy to
> > clean up stale UNIX socket files as their names can change or stored in
> > a directory which is shared with other services.
>
> In fact it's not exact, it does its best for this. The thing is that upon
> a reload it's the new process which takes care of removing the old socket
> and it does so pretty well. But if you perform a stop there's no way to
> do it if the process is chrooted. In practice many daemons have the same
> issue, that's how you end up with /dev/log even when syslogd is not running
> or with /tmp/.X11-unix/X0 just to give a few examples.
>
> Hoping this helps,
> Willy
>
>
>


Re: config changes on the fly -- dynamically adding/removing backend servers

2016-02-18 Thread Mehdi Ahmadi
A `restart` (reload) is naturally an unavoidable step in some cases -
especially when adding new backends, or new servers within them, which are
not trivial changes. Network loss / disturbance - if any - is negligible:
existing connections keep being served by the old process until they finish,
and there is only a minute window (relative to the host) in which a new
connection may not get a response - realistically, on most modern x64 hosts
and environments we're talking about < 10ms (at least in my own case).

What may be a better option for you to consider is having backup server
instances / routes predefined, which you can then adjust or enable
dynamically - and, perhaps better still, control via `acl`-related clauses,
which can also be changed dynamically without a restart.

This approach can also work for load elasticity - for example where you'd
have at least 100% extra / a duplicate set of server instances predefined in
each backend; a similar approach may also serve for fail-safes that have not
yet been spawned or started but at least have a predefined route, which can
then be adjusted with appropriate checks and server weighting without any
further restart. A rough sketch follows.
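(Names, addresses and the admin-level stats socket below are assumptions for
illustration, not taken from a real setup:)

```
# haproxy.cfg: spare capacity predefined but administratively down
global
 stats socket /var/run/haproxy.sock level admin

backend app
 balance leastconn
 server web1 10.0.0.11:8080 check
 server web2 10.0.0.12:8080 check
 server spare1 10.0.0.21:8080 check weight 100 disabled
```

Then later, without any reload, via the stats socket:

```
echo "enable server app/spare1"  | socat stdio /var/run/haproxy.sock
echo "set weight app/spare1 50"  | socat stdio /var/run/haproxy.sock
echo "disable server app/spare1" | socat stdio /var/run/haproxy.sock
```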

I hope I've not misinformed you in any way.


On Thu, Feb 18, 2016 at 3:07 PM,  wrote:

> Hello,
>
> What is the best way to dynamically add a new backend server (cluster
> node) without causing traffic disruptions? In other words, after adding a
> new 'server' line to backend section. Merely saving the configuration file
> does not seem to cause HAProxy to re-read the configuration. My HAProxy
> runs as a service.
>
> Thank you
> Alex
>


Re: Sticky-tables by 2x query-strings...

2016-02-17 Thread Mehdi Ahmadi
This is a follow-up to my earlier question about doing `src`-based connection
persistence / stick-tables, subject to an additional rule for two (2x)
query-strings where present:

1 - a targeted server instance (overrules all else)
2 - a user-id (used where present, or where it is the only one included)

E.g. (incoming) GETs:
http://sub-domain.tld.org/
http://sub-domain.tld.org/?SID=6
http://sub-domain.tld.org/?SID=6&UID=123
http://sub-domain.tld.org/?UID=123

What I was striving to accomplish was the conventional `stick on src`
IP-based stick-table where neither query-string is included.

I believe that I've been able to achieve what was required using two (2x)
separate backends, each with its own stick-table.

I'm including the configuration for future reference and in case anyone has
any suggestions or alternative recommendations.

```
# # SID == server-id,
# # UID == user-id,
# # TLD == namespace / domain scope
#//~~~
frontend inwebs_https
 bind *:80
 bind *:443 #/*...*/
#//...
 acl url_TLD hdr(host) -i sub-domain.tld.org
 acl url_SID urlp(SID) -m found
 acl url_UID urlp(UID) -m found
 use_backend STICKY2 if url_TLD url_SID || url_TLD url_UID
 use_backend STICKY1 if url_TLD !url_SID !url_UID
#//===

#//...
backend STICKY1
 stick-table type ip size 1m
 stick on src
 balance leastconn
 server server1.tld.org 127.0.0.1:58810 check
 server server2.tld.org 127.0.0.1:58811 check
 server server3.tld.org 127.0.0.1:58812 check
 server server4.tld.org 127.0.0.1:58813 check
#//

backend STICKY2
 stick-table type string size 1m
 stick on src table STICKY1
 stick on urlp(UID)
 balance leastconn
 use-server server1.tld.org if { urlp(SID) 1 }
 use-server server2.tld.org if { urlp(SID) 2 }
 use-server server3.tld.org if { urlp(SID) 3 }
 use-server server4.tld.org if { urlp(SID) 4 }
 server server1.tld.org 127.0.0.1:58810 check
 server server2.tld.org 127.0.0.1:58811 check
 server server3.tld.org 127.0.0.1:58812 check
 server server4.tld.org 127.0.0.1:58813 check
#//
```
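(To check where entries end up, the tables can be inspected over the stats
socket - assuming one is configured, e.g. at /var/run/haproxy.sock:)

```
echo "show table STICKY1" | socat stdio /var/run/haproxy.sock
echo "show table STICKY2" | socat stdio /var/run/haproxy.sock
```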

Thanks very much & a big shout-out to the guys (bjozet, dlloyd, double-p,
PiBa-NL, meineerde) on IRC: #haproxy @ freenode.



On Tue, Feb 16, 2016 at 12:38 PM, Mehdi Ahmadi  wrote:

> Hey guys - a first post from a late joiner to the list.
>
> I'd like to have a src / IP-based sticky rule that uses one or two
> combinations of query-string / GET parameters to determine stickiness.
>
> One (1x) query-string is specifically for session/user IDs (UID) and the
> second (2x) allows specific targeting of a server that's in the concerned
> backend (SID).
>
> By default a leastconn balance algorithm should stick users to a server in
> a backend group where no UID or SID is specified. Where both UID & SID are
> specified, the existing stick entry should be overwritten with precedence
> toward SID for the given UID; where only `UID` is specified, any existing /
> prior entries should be used, or the leastconn-chosen server designated for
> the request.
>
> In short, the idea is to stick and keep a user on the first assigned
> server unless UID and/or SID are supplied, in which case they should be
> respected.
>
> I have the following two configs which I was experimenting with, toward a
> near solution:
>
> #//
> #//- EXAMPLE 1 - leastconn
> #//
> backend STICKY_HAP
>  stick-table type ip size 20k
>  stick on src
>  stick on urlp(UID)
>  balance leastconn
>  use-server S1.my.io if { urlp(SID) 1 }
>  use-server S2.my.io if { urlp(SID) 2 }
>  use-server S3.my.io if { urlp(SID) 3 }
>  server S1.my.io 127.0.0.1:58810 check
>  server S2.my.io 127.0.0.1:58811 check
>  server S3.my.io 127.0.0.1:58812 check
> #//
>
> #//
> #//- EXAMPLE 2 - url_param
> #//
> backend STICKY_HAP
>  stick-table type string len 64 size 1m
>  stick on src
>  stick on urlp(UID)
>  balance url_param UID
>  use-server S1.my.io if { urlp(SID) 1 }
>  use-server S2.my.io if { urlp(SID) 2 }
>  use-server S3.my.io if { urlp(SID) 3 }
>  server S1.my.io 127.0.0.1:58810 check
>  server S2.my.io 127.0.0.1:58811 check
>  server S3.my.io 127.0.0.1:58812 check
> #//
>
> I've also tried other settings such as:
> #//
> appsession UID len 64 timeout 3h request-learn query-string
> appsession SID len 64 timeout 3h request-learn query-string
> #//
> But without much luck.
>
> Thanks very much in advance.
>


Sticky-tables by 2x query-strings...

2016-02-16 Thread Mehdi Ahmadi
Hey guys - a first post from a late joiner to the list.

I'd like to have a src / IP-based sticky rule that uses one or two
combinations of query-string / GET parameters to determine stickiness.

One (1x) query-string is specifically for session/user IDs (UID) and the
second (2x) allows specific targeting of a server that's in the concerned
backend (SID).

By default a leastconn balance algorithm should stick users to a server in a
backend group where no UID or SID is specified. Where both UID & SID are
specified, the existing stick entry should be overwritten with precedence
toward SID for the given UID; where only `UID` is specified, any existing /
prior entries should be used, or the leastconn-chosen server designated for
the request.

In short, the idea is to stick and keep a user on the first assigned server
unless UID and/or SID are supplied, in which case they should be respected.

I have the following two configs which I was experimenting with, toward a
near solution:

#//
#//- EXAMPLE 1 - leastconn
#//
backend STICKY_HAP
 stick-table type ip size 20k
 stick on src
 stick on urlp(UID)
 balance leastconn
 use-server S1.my.io if { urlp(SID) 1 }
 use-server S2.my.io if { urlp(SID) 2 }
 use-server S3.my.io if { urlp(SID) 3 }
 server S1.my.io 127.0.0.1:58810 check
 server S2.my.io 127.0.0.1:58811 check
 server S3.my.io 127.0.0.1:58812 check
#//

#//
#//- EXAMPLE 2 - url_param
#//
backend STICKY_HAP
 stick-table type string len 64 size 1m
 stick on src
 stick on urlp(UID)
 balance url_param UID
 use-server S1.my.io if { urlp(SID) 1 }
 use-server S2.my.io if { urlp(SID) 2 }
 use-server S3.my.io if { urlp(SID) 3 }
 server S1.my.io 127.0.0.1:58810 check
 server S2.my.io 127.0.0.1:58811 check
 server S3.my.io 127.0.0.1:58812 check
#//

I've also tried other settings such as:
#//
appsession UID len 64 timeout 3h request-learn query-string
appsession SID len 64 timeout 3h request-learn query-string
#//
But without much luck.

Thanks very much in advance.