Re: issues with ALPN and h2 on frontend

2017-03-16 Thread Matt Jamison
So from what I can find, mode http and alpn h2 are not supported together?
alpn h2 is only supported with mode tcp? I get no errors with my config, so
I don't know what is unsupported.

I need mode http so I can insert cookies and do other things not supported
in mode tcp.

If someone could give me a definitive yes or no, I would be most grateful.

If mode http and alpn h2 aren't supported together, do we know if any
release in the near future will support it? I thought it was coming in 1.7
but I can't find any documentation on it.

Thanks!

~Matt

On Thu, Mar 16, 2017 at 12:00 PM, Matt Jamison <m...@tblinux.com> wrote:

> I compiled openssl 1.0.2k, then compiled haproxy 1.7.3 against it, but alpn
> and h2 just don't seem to be working right.
>
> [root@proxy01 ~]# haproxy -vv
> HA-Proxy version 1.7.3 2017/02/28
> Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>
>
> Build options :
>   TARGET  = linux2628
>   CPU = generic
>   CC  = gcc
>   CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
>   OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
> USE_PCRE=1 USE_PCRE_JIT=1
>
> Default settings :
>   maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
>
> Encrypted password support via crypt(3): yes
> Built with zlib version : 1.2.7
> Running on zlib version : 1.2.7
> Compression algorithms supported : identity("identity"),
> deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
> Built with OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
> Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
> OpenSSL library supports TLS extensions : yes
> OpenSSL library supports SNI : yes
> OpenSSL library supports prefer-server-ciphers : yes
> Built with PCRE version : 8.32 2012-11-30
> Running on PCRE version : 8.32 2012-11-30
> PCRE library supports JIT : yes
> Built without Lua support
> Built with transparent proxy support using: IP_TRANSPARENT
> IPV6_TRANSPARENT IP_FREEBIND
>
> Available polling systems :
>   epoll : pref=300,  test result OK
>    poll : pref=200,  test result OK
>  select : pref=150,  test result OK
> Total: 3 (3 usable), will use epoll.
>
> Available filters :
> [COMP] compression
> [TRACE] trace
> [SPOE] spoe
>
>
> When I have alpn and h2 set on the bind line, no requests can get past the
> frontend. I disabled all back ends so that at least the 503 error page I
> have set would come up but no go.
>
> If I remove h2, it works just fine with http/1.1.
>
> Syslog shows BADREQ coming in.
>
> I attached my haproxy.cfg.
>
> Am I doing something wrong?
>
> Any help would be super appreciated.
>
>
> ~Matt
>


issues with ALPN and h2 on frontend

2017-03-16 Thread Matt Jamison
I compiled openssl 1.0.2k, then compiled haproxy 1.7.3 against it, but alpn
and h2 just don't seem to be working right.

[root@proxy01 ~]# haproxy -vv
HA-Proxy version 1.7.3 2017/02/28
Copyright 2000-2017 Willy Tarreau <wi...@haproxy.org>

Build options :
  TARGET  = linux2628
  CPU = generic
  CC  = gcc
  CFLAGS  = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
  OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1 USE_PCRE_JIT=1

Default settings :
  maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k  26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : yes
Built without Lua support
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
  epoll : pref=300,  test result OK
   poll : pref=200,  test result OK
 select : pref=150,  test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe


When I have alpn and h2 set on the bind line, no requests can get past the
frontend. I disabled all back ends so that at least the 503 error page I
have set would come up but no go.

If I remove h2, it works just fine with http/1.1.
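[Editor's note: HAProxy 1.7 has no HTTP/2 parser (native h2 support only arrived in 1.8), so in mode http a client that negotiates h2 via ALPN sends binary HTTP/2 frames that the HTTP/1.x parser rejects, which is exactly what the BADREQ log entries show. A bind line that works on 1.7 simply avoids advertising h2; a sketch, with a made-up cert path:]

```
frontend fe_https
    mode http
    # no "h2" token in alpn: clients fall back to HTTP/1.1,
    # which the 1.7 HTTP parser understands
    bind *:443 ssl crt /etc/haproxy/site.pem alpn http/1.1
```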

Syslog shows BADREQ coming in.

I attached my haproxy.cfg.

Am I doing something wrong?

Any help would be super appreciated.


~Matt


haproxy.cfg
Description: Binary data


Re: the site needs some love

2016-07-20 Thread Matt .
I could try to hug it, I'm quite good at it :)

2016-07-20 21:56 GMT+02:00 Pavlos Parissis :
> On 20/07/2016 08:57 μμ, Willy Tarreau wrote:
>> Hi Pavlos,
>>
>> On Sat, Jul 16, 2016 at 09:43:24PM +0200, Pavlos Parissis wrote:
>>> Hi,
>>>
>>> www.haproxy.org needs some love as the last update on 'Quick news' section
>>> was on April 13th and mentions older releases.
>>
>> Yes I know and I'm sad about it. We've fixed so many bugs that it's hard
>> to emit a synthesized changelog in a few minutes, which is hardly what I
>> have available to update it on each release :-(
>>
>> Any suggestion is welcome.
>>
>
> copy-paste from your ANNOUNCE mail which is very verbose:-), just skip
> the part with links.
>
> You could adjust the announce-release script. If the site is a git repo
> then you can automate the whole process :-)
>
> Cheers,
> Pavlos
>
>



Re: Adding a custom tcp protocol to HAProxy

2016-07-10 Thread Matt Esch
Hmm, this is interesting. The size of a frame and its id are at fixed
positions. Bytes 1,2 are length, 3,4 are type and 5,6,7,8 are the id. I'm
just not sure I could get multiple frames received on a single socket to
independently route through different backends, and for response frames to
go back through the correct incoming socket.
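[Editor's note: Chad's payload() suggestion below could be sketched roughly like this, using the offsets Matt gives above (bytes 1-2 length, 3-4 type) and made-up backend names and type values. Note it inspects only the first frame of a connection, so it routes whole connections, not individual frames, which matches Matt's concern.]

```
frontend mux_fe
    mode tcp
    bind :9000
    # wait briefly for the first frame before deciding
    tcp-request inspect-delay 5s
    tcp-request content accept if WAIT_END
    # route on the 2-byte frame type at offset 2 (type values are made up)
    use_backend bk_type_a if { req.payload(2,2) -m bin 0001 }
    use_backend bk_type_b if { req.payload(2,2) -m bin 0002 }
    default_backend bk_default
```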

On Sun, Jul 10, 2016 at 8:10 AM, Chad Lavoie <clav...@haproxy.com> wrote:

> Greetings,
>
> On 7/10/16 6:33 AM, Matt Esch wrote:
> > I need to load balance a custom tcp protocol and wonder if HAProxy
> > could be configured or extended for my use case.
> >
> > The protocol is a multiplexed frame-based protocol. An incoming socket
> > can send frames in arbitrary order. The first 2 bytes dictate the
> > frame length entirely (so max 64k per frame). The frame has a type and
> > a header format, followed by a payload.
> >
> > The multiplexing works by assigning long ids in the frame header and
> > pairing the responses based on this id.
>
> Depending on what the ids actually look like this may or may not work,
> but before I started writing a lot of C I'd try something such as
> payload() per
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.1.5 to
> match an acl for use_backend; or stick on (using a table) via
> https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#stick%20on.
>
> Again, entirely possible that it's not usable for your use-case, but if
> it is it sounds much easier than trying to code another format.
>
> - Chad
> >
> > A load balancer would read frames from the socket, parse the frame,
> > select the correct backend from a header and route the request through
> > an available backend peer. The response would be matched to the
> > incoming socket based on the frame id.
> >
> > I expect (a lot of) C code will need to be written to support such a
> > custom protocol. I'm looking for specific pointers about how to add
> > such a protocol to the existing codebase in a way that would fit
> > cleanly, and preferably in a modular fashion.
> >
> > Any hints appreciated
> >
> >
> > ~Matt
> >
> >
>
>


Adding a custom tcp protocol to HAProxy

2016-07-10 Thread Matt Esch
I need to load balance a custom tcp protocol and wonder if HAProxy could be
configured or extended for my use case.

The protocol is a multiplexed frame-based protocol. An incoming socket can
send frames in arbitrary order. The first 2 bytes dictate the frame length
entirely (so max 64k per frame). The frame has a type and a header format,
followed by a payload.

The multiplexing works by assigning long ids in the frame header and
pairing the responses based on this id.
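[Editor's note: the header layout, as Matt clarifies elsewhere in the thread (bytes 1-2 length, 3-4 type, 5-8 id), parses in a few lines. A sketch, not the actual wire format: big-endian byte order and a 4-byte id are guesses, since the post only says "long ids".]

```python
import struct

# length, type, id; ">" = big-endian (an assumption)
FRAME_HEADER = struct.Struct(">HHI")

def parse_frame_header(buf: bytes):
    """Parse the assumed 8-byte header: 2-byte length, 2-byte type,
    4-byte frame id."""
    if len(buf) < FRAME_HEADER.size:
        raise ValueError("need at least 8 bytes for a frame header")
    length, ftype, frame_id = FRAME_HEADER.unpack_from(buf)
    return length, ftype, frame_id
```

A balancer would then keep a map of frame id to originating client socket, so responses coming back from a backend can be written to the right connection.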

A load balancer would read frames from the socket, parse the frame, select
the correct backend from a header and route the request through an
available backend peer. The response would be matched to the incoming
socket based on the frame id.

I expect (a lot of) C code will need to be written to support such a custom
protocol. I'm looking for specific pointers about how to add such a
protocol to the existing codebase in a way that would fit cleanly, and
preferably in a modular fashion.

Any hints appreciated


~Matt


Fwd: Capture and forward extended PKI cert attributes (e.g. UPN) using HAProxy

2016-06-21 Thread Matt Park
Hey All --
Not sure if you saw this or if it got blocked by spam filter...
TL;DR how can I access extended PKI attributes for use in HTTP header?
-- Forwarded message --
From: Matt Park <matthew.james.p...@gmail.com>
Date: Fri, Jun 17, 2016 at 5:19 PM
Subject: Capture and forward extended PKI cert attributes (e.g. UPN) using
HAProxy
To: haproxy@formilux.org


Hey All,

I'm guessing it's a terrible idea to submit to the mailing list on the
Friday before Father's Day weekend (could just be US-centric thinking
though).
At any rate -- to the dads out there, Happy Father's Day.

I've put about 20 hours into this and I'm pretty familiar with HAProxy, PKI
and mutual auth in general.  The only difference is that I need a v3
attribute off a smart card vs a soft cert.

I'm shamelessly ripping this from my Server Fault post
<http://serverfault.com/questions/783906/capture-and-forward-extended-pki-cert-attributes-e-g-upn-using-haproxy>,
so synopsis is below:

I'm trying to pull an attribute from a client certificate in a mutual
authentication scenario and set it as an HTTP header in the request to the
backend. See fig 1 below.

fig 1
  [user with correct certificate]
     |
     | 1. presents cert with normal v1 attributes;
     |    has additional "extension" attributes
     |    incl. "Subject Alt Name" which contains
     |    "User Principal Name" (UPN looks like an email addr)
     |
  [example.com:443 haproxy]  -- app1 / app2 CNAMEd to example.com
     |
     | 2. read Subject Alternative Name
     | 3. regex or parse out UPN
     | 4. set REMOTE_USER header to be UPN
     | 5. pass to backend(s)
     |
     +-----------+-----------+
     |                       |
     V                       V
 [app1svr:80]         [app2svr:80]

Normally, it's easy, you would just pull the attribute you want using the
built in functionality like so:

frontend https
 bind *:443 name https ssl crt ./server.pem ca-file ./ca.crt verify required

 http-request set-header X-SSL-Client-DN        %{+Q}[ssl_c_s_dn]
 http-request set-header X-SSL-Client-CN        %{+Q}[ssl_c_s_dn(cn)]
 http-request set-header X-SSL-Issuer           %{+Q}[ssl_c_i_dn]
 http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore]
 http-request set-header X-SSL-Client-NotAfter  %{+Q}[ssl_c_notafter]

 default_backend app1svr

backend app1svr
 server app1 app1svr.example.com:80

backend app2svr
 server app2 app2svr.example.com:80

List of attributes here:
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.4

Unfortunately, missing from the list of attributes are any of the COMMON
extension attributes such as:

   - Subject Alternative Name
     - RFC822 Name
     - Other Name
       - Principal Name
   - CRL Distribution Points

I can't seem to figure out the right way to access these attributes.
Looking at the code (below line 5815)
https://github.com/haproxy/haproxy/blob/master/src/ssl_sock.c it doesn't
seem to be *just* a documentation issue.
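[Editor's note: before chasing fetch methods, it may help to confirm the UPN really is present in the SAN by dumping the cert with the openssl CLI. A sketch with made-up paths: it generates a throwaway cert carrying a SAN and dumps it the same way you would dump the real client cert. An email SAN stands in for the UPN otherName here, and `-addext` needs OpenSSL 1.1.1+.]

```shell
# generate a throwaway cert with a SAN (paths are made up)
openssl req -x509 -newkey rsa:2048 -keyout /tmp/upn-key.pem \
    -out /tmp/upn-cert.pem -days 1 -nodes -subj "/CN=demo" \
    -addext "subjectAltName=email:user@example.com" 2>/dev/null
# dump the extensions; a UPN would appear as an otherName entry here
openssl x509 -in /tmp/upn-cert.pem -noout -text | grep -A1 "Subject Alternative Name"
```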

Any thoughts here? (possibly related issue):
http://stackoverflow.com/questions/22966461/reading-an-othername-value-from-a-subjectaltname-certificate-extension

Thanks for reading if you made it this far.

R,

Matt


Capture and forward extended PKI cert attributes (e.g. UPN) using HAProxy

2016-06-17 Thread Matt Park
Hey All,

I'm guessing it's a terrible idea to submit to the mailing list on the
Friday before Father's Day weekend (could just be US-centric thinking
though).
At any rate -- to the dads out there, Happy Father's Day.

I've put about 20 hours into this and I'm pretty familiar with HAProxy, PKI
and mutual auth in general.  The only difference is that I need a v3
attribute off a smart card vs a soft cert.

I'm shamelessly ripping this from my Server Fault post
<http://serverfault.com/questions/783906/capture-and-forward-extended-pki-cert-attributes-e-g-upn-using-haproxy>,
so synopsis is below:

I'm trying to pull an attribute from a client certificate in a mutual
authentication scenario and set it as an HTTP header in the request to the
backend. See fig 1 below.

fig 1
  [user with correct certificate]
     |
     | 1. presents cert with normal v1 attributes;
     |    has additional "extension" attributes
     |    incl. "Subject Alt Name" which contains
     |    "User Principal Name" (UPN looks like an email addr)
     |
  [example.com:443 haproxy]  -- app1 / app2 CNAMEd to example.com
     |
     | 2. read Subject Alternative Name
     | 3. regex or parse out UPN
     | 4. set REMOTE_USER header to be UPN
     | 5. pass to backend(s)
     |
     +-----------+-----------+
     |                       |
     V                       V
 [app1svr:80]         [app2svr:80]

Normally, it's easy, you would just pull the attribute you want using the
built in functionality like so:

frontend https
 bind *:443 name https ssl crt ./server.pem ca-file ./ca.crt verify required

 http-request set-header X-SSL-Client-DN        %{+Q}[ssl_c_s_dn]
 http-request set-header X-SSL-Client-CN        %{+Q}[ssl_c_s_dn(cn)]
 http-request set-header X-SSL-Issuer           %{+Q}[ssl_c_i_dn]
 http-request set-header X-SSL-Client-NotBefore %{+Q}[ssl_c_notbefore]
 http-request set-header X-SSL-Client-NotAfter  %{+Q}[ssl_c_notafter]

 default_backend app1svr

backend app1svr
 server app1 app1svr.example.com:80

backend app2svr
 server app2 app2svr.example.com:80

List of attributes here:
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#7.3.4

Unfortunately, missing from the list of attributes are any of the COMMON
extension attributes such as:

   - Subject Alternative Name
     - RFC822 Name
     - Other Name
       - Principal Name
   - CRL Distribution Points

I can't seem to figure out the right way to access these attributes.
Looking at the code (below line 5815)
https://github.com/haproxy/haproxy/blob/master/src/ssl_sock.c it doesn't
seem to be *just* a documentation issue.

Any thoughts here? (possibly related issue):
http://stackoverflow.com/questions/22966461/reading-an-othername-value-from-a-subjectaltname-certificate-extension

Thanks for reading if you made it this far.

R,

Matt


Re: Slowness on deployment

2016-03-10 Thread matt
I have the log, but a lot of the data is confidential.
Can I send it to you by email so you can take a look?

We can post an edited version later to help others
debug the same issue.

Thanks in advance




Re: Slowness on deployment

2016-03-09 Thread matt
Yes. Regarding the different times, I've made some edits in order to
avoid exposing some information about our endpoints/IP addresses, but
they are normal times.

Besides that, sounds great. I'll collect some data tonight (I'm trying
not to do this now since our traffic is really high).

I'm thinking about requests being queued due to the maxconn parameter
(I have a global maxconn of 4000, and a default of 3000). Could this be
the case? I'll take a look at the HAProxy stats too to see if any of
the limits is reached when the app is being deployed.

I'll let you know about this data collection.
Thanks again for the help, it's being super productive for me
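[Editor's note: the maxconn layering described above looks roughly like this in haproxy.cfg; a sketch using the figures quoted, timeouts made up. When a frontend hits its maxconn, new connections queue rather than fail, which would show up as exactly this kind of latency spike.]

```
global
    maxconn 4000        # process-wide connection cap

defaults
    maxconn 3000        # per-frontend default
    timeout connect 5s
    timeout client  30s
    timeout server  30s
```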




Re: Slowness on deployment

2016-03-09 Thread matt
Chad! Thanks a lot for your response.

I've updated the configuration, and I'm now logging all the requests.
Is there any tool to process this kind of data?

This is a normal capture: 
https://gist.github.com/matiasdecarli/cd138d47a756d7b3d24e
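[Editor's note: one quick-and-dirty way to process such logs is to pull the Tq/Tw/Tc/Tr/Tt timing block out of each HTTP log line; a sketch, with a made-up sample line.]

```shell
# extract the timing field (five slash-separated numbers) and print Tt
line='haproxy[123]: 1.2.3.4:5678 [10/Mar/2016:10:00:00.123] fe be/srv1 10/0/1/2/13 200 512 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"'
echo "$line" | awk '{
    for (i = 1; i <= NF; i++)
        if ($i ~ /^[0-9-]+\/[0-9-]+\/[0-9-]+\/[0-9-]+\/[0-9]+$/) {
            split($i, t, "/"); print "Tt=" t[5] " ms"; next
        }
}'
# prints: Tt=13 ms
```

The HAProxy source tree also ships halog, a dedicated log analyzer, which is the more robust option for large logs.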

I'm going to fire a deployment in order to look at the
specific timeframe of the slowness.

Can you see anything strange in these logs?






Slowness on deployment

2016-03-09 Thread matt
Hi guys. I've been using HAProxy for two years now, and I really love the
product. Simple, quick and really well documented.

Lately I've been having an issue that keeps me awake at night, and maybe
you could help me solve it.

I have 4 VMs behind 2 HAProxies. On every VM I have a Docker container
serving on port 80. So far it's running great, but lately I'm having
issues on deployments.

The deployment scenario is like this: I go through every VM (one at a
time), remove the VM from both LBs with socat, stop the container and
then create a new container.

The thing is, just when I delete the container (not when I remove it
from the LB), the response time of the OTHER VMs starts increasing,
which causes my deploys to have a peak in response time.

The way I test the response times is an app that keeps pinging the app
from the outside and checks the response payload to see which server it
is, which leads me to two ideas:

1) The apps on the other VMs get overloaded with the traffic (which I
don't believe is the case, because I've tried using 1 more VM and the
issue remains the same)

2) HAProxy is rerouting some requests in a way that causes slowness

Does this sound familiar to any of you?
How can I debug these kinds of events?

Thanks in advance






Temporary Maintenance frontend for all port 80/443 ?

2016-01-24 Thread Matt .
Hi,

I wondered if it's possible (I've seen some examples but wasn't sure
yet) to leave HAProxy running as it is, but place a frontend that
listens on 0.0.0.0, ports 80 and 443, and redirects all requests to a
temporary page?

Maybe even a local page on HAProxy?

As I'm on pfSense I'd need to disable all frontends, and there are quite a few.
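[Editor's note: one pattern that might fit is a catch-all frontend whose backend has no servers, so every request receives the 503 errorfile, which can be a local maintenance page. A sketch with made-up paths; note errorfile expects a complete raw HTTP response, status line and headers included.]

```
frontend maintenance
    mode http
    bind 0.0.0.0:80
    bind 0.0.0.0:443 ssl crt /etc/haproxy/site.pem
    errorfile 503 /etc/haproxy/maintenance.http
    default_backend bk_down

backend bk_down
    mode http
    # no servers defined: every request gets the 503 maintenance page
```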

Suggestions are welcome!

Thanks,

Matt



haproxy + exim + sni

2015-11-01 Thread Matt Bryant

All,

exim supports SNI for multidomain certs off one running instance and I can
get that working OK ... but now I'm trying to put that behind a HAProxy LB
... can this be done? Is there a way that HAProxy can forward the SNI
information on in the connection it makes? So far I seem to just get the
default cert ... or do I need to terminate the SSL at HAProxy? I'd rather
not, since it means more config and more places to put the cert ... (to
support STARTTLS etc. the cert has to be on the mailserver anyhow).
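[Editor's note: if HAProxy runs in plain TCP mode and does not terminate TLS itself, the client's ClientHello, SNI included, passes through to exim untouched, so exim keeps doing its own SNI-based cert selection. A minimal sketch with assumed ports and names:]

```
frontend smtps_in
    mode tcp
    bind :465          # no "ssl" keyword: no TLS termination here
    default_backend exim_tls

backend exim_tls
    mode tcp
    server mx1 192.0.2.10:465 check
```

The same passthrough logic covers STARTTLS on port 25, since HAProxy never inspects the stream in this mode.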


rgds

Matt Bryant
--
m...@the-bryants.net


Re: RTMP offloading

2015-04-12 Thread Matt .
Hi Cyril,

I have tested your config in my pfSense setup and it doesn't work.

As I'm connecting using a string for auth it might be that; I'm not
playing but recording.

As I can do the same for other services, I wonder what goes wrong; it's
really not easy to find, and plain RTMP through HAProxy goes well.

So if you ask me, HAProxy does something strange to the offloading on
RTMPS; could it be that it adds something? I get a long handshake error
on Red5 which actually says the data coming in is not as expected, but
it works fine on plain RTMP.

Thanks,

Matt

2015-03-29 19:14 GMT+02:00 Matt . yamakasi@gmail.com:
 Hi Cyril,

 Thanks, I'm indeed using red5 in my setup, client is flex.

 Just non-ssl at all, so only 1935 over HA works perfectly. When I set
 my frontend to ssl offloading on TCP 443 I see on my red5 server my
 client coming in when connecting, but then it hangs; no other data in
 the red5 log.

 I will simplify this setup again to see what happens. Red5 works perfectly.

 I will report.

 Cheers,

 Matt

 2015-03-29 19:08 GMT+02:00 Cyril Bonté cyril.bo...@free.fr:
 Hi Matt,

 Le 29/03/2015 16:19, Matt . a écrit :

 Whoops my fault while testing.

 Indeed, on the backends I connect to 1935 again, I see a connection
 coming in but no clear data. That part is actually my issue and
 difficult to trace.


 Then can you re-provide the expected configuration ? Because the one you
 provided is clearly not going to work.

 Making some quick tests here, it works (but it was really quick tests in a
 very simple configuration).

 Steps :
 1. Download a red5-server release, untar, and run it.
 2. Access to http://localhost:5080/installer/ and install OFLA Demo.
 3. Test a RTMP video provided with the demo :
vlc rtmp://localhost/oflaDemo/Avengers2.mp4
 4. Configure haproxy for offloading SSL
  haproxy.cfg content :
listen rtmps
  mode tcp
  bind :443 ssl crt localhost.pem
  server rtmp localhost:1935

listen status
  mode http
  bind :
  stats enable
  stats uri /
 5. Launch haproxy (in foreground for the tests):
sudo haproxy -f haproxy.cfg
 6. Test the RTMPS video :
vlc rtmps://localhost/oflaDemo/Avengers2.mp4
= The video is played and we can see that statistics in haproxy are
 updated when the connection is closed.

 At this point, I'd recommend simplifying the configuration during the debug.
 At least, use only one server for the backends.
 Also, how do you test your rtmps streams ? with which client ? which RTMP
 server ? ...



 2015-03-29 16:11 GMT+02:00 Baptiste bed...@gmail.com:

 frontend rtmp_https
  bind xxx.xxx.xxx.xxx:443 name xxx.xxx.xxx.xxx:443 ssl crt /var/etc/haproxy/mycert.pem
  mode tcp
  log global
  maxconn 9
  timeout client  60
  use_backend rtmpbackend_tcp_ipvANY if
  default_backend rtmpbackend_tcp_ipvANY


 backend rtmpbackend_tcp_ipvANY
  mode tcp
  balance leastconn
  timeout connect 3
  timeout server  3
  retries 3
  option  httpchk GET /
  server  rtmp-01 172.16.5.11:443 check-ssl
 check inter 1000  weight 100 verify none
  server  rtmp-02 172.16.5.12:443 check-ssl
 check inter 1000  weight 100 verify none


 Weren't you supposed to connect on port 1935 where traffic is unciphered?
 Can you confirm whether traffic is ciphered or not on the server's port
 443? (You seem to be mixing clear traffic over a connection which expects
 ciphered traffic on the server side.)
 Does haproxy say the servers are UP (logs, stats page, etc...)?

 Baptiste




 --
 Cyril Bonté



Re: Complete rewrite of HAProxy in Lua

2015-04-01 Thread Matt .
Good luck!

(good to hear about this progress)

2015-04-01 10:43 GMT+02:00 Willy Tarreau w...@1wt.eu:
 Hi,

 As some might have noticed, HAProxy development is progressively slowing
 down over time. I have analyzed the situation and came to the following
 conclusions :

   - the code base is increasing and is becoming slower to build day
 after day. Ten years ago, version 1.1.31 was only 6716 lines
 everything included. Today, mainline is 108395 lines, or 16 times
 larger.

   - gcc is getting slower over time. Since version 2.7.2 I used to rely
 on ten years ago, we've seen important slowdowns with v2.95, several
 v3.x then v4.x. I'm currently on 4.7 and afraid to upgrade.

   - while the whole code base used to build in less than a second ten
 years ago on an Athlon XP-1800, now it takes about 10 seconds on a
 core i5 at 3 GHz. Multiply this by about 200 builds a day and you
 see that half an hour is wasted every single day dedicated to
 development. That's about 1/4 of the available time if you count
 the small amount of time available after processing e-mails.

   - people don't learn C anymore at school and this makes it harder to
 get new contributors. In fact, most of those who are proficient in C
 already have a job and little spare time to dedicate to an
 opensource project.

 In parallel, I'm seeing I'm getting old, I turned 40 last year and it's
 obvious that I'm not as much capable of optimizing code as I used to be.
 I'm of the old school, still counting the CPU cycles it takes a function
 to execute, the nanoseconds required to append an X-Forwarded-For header
 or to parse a cookie. And all of this is totally wasted when people run
 the software in virtual machines which only allocate portions of CPUs
 (ie they switch between multiple VMs at high rate), or install it in
 front of applications which saturate at 100 requests a second.

 Recently with the Lua addition, we found it to be quite fast. Maybe not
 as fast as C, but Lua is improving and C skills are diminishing, so I
 guess that in a few years the code written in Lua will be much faster
 than the code we'll be able to write in C. Thus I found it wise to
 declare a complete rewrite of HAProxy in Lua. It comes with many
 benefits.

 First, Lua is easy to learn, we'll get many more developers and
 contributors. One of the reason is that you don't need to care about
 resource allocation anymore. What's the benefit of doing an strdup() to
 keep a copy of a string when you can simply do a = b without having to
 care about the memory used behind. Machines are huge nowadays, much
 larger than the old Athlon XP I was using 10 years ago.

 Second, Lua doesn't require a compiler, so we'll save 30 minutes a day
 per 200 builds, this will definitely speed up development for each
 developer. And we won't depend on a given C compiler, won't be subject
 to its bugs, and more importantly we'll be able to get rid of the few
 lines of assembly that we currently have in some performance-critical
 parts.

 Third, last version of HAProxy saw a lot of new sample fetch functions
 and converters. This will not be needed anymore, because the code and
 the configuration will be mixed together, just as everyone does with
 Shell scripts. This means that any config will just look like an include
 directive for the haproxy code, followed by some code to declare the
 configuration.  It will then be possible to create infinite combinations
 of new functions, and the configuration will have access to anything
 internal to HAProxy.

 In the end, only the Lua engine will remain of the current HAProxy, and
 probably by then we'll find even better ones so that haproxy will be
 distributed as a Lua library to use anywhere, maybe even on IoT devices
 if that makes sense (anyone ever dreamed of having haproxy in their
 watches ?).

 This step forward will save us from having to continue to do any code
 versionning, because everyone will have his own fork and the code will
 grow much faster this way. That also means that Git will become useless
 for us. In terms of security, it will be much better as it will not be
 possible to exploit a vulnerability common to all versions anymore since
 each version will be different.

 HAProxy Technologies is going to assign a lot of resources to this task.
 Obviously all the development team will work on this full time, but we
 also realize that since customers will not be interested in the C
 version anymore after this public announce, we'll train the sales people
 to write Lua as well in order to speed up development.

 We'll continue to provide an enterprise version forked from HAPEE that
 we'll rename Luapee. It will still provide all the extras that make
 it a professional solution such as VRRP, SNMP etc and over the long term
 we expect to rewrite all of these components in Lua as well.

 The ALOHA appliances will change a little bit, they'll mostly be a Lua
 engine to run all 

Re: ldap-check with Active Directory

2015-03-31 Thread Matt .
I'm also testing some ldap checks but I see lots of logging and log
partitions filling up like crazy.

I wonder if it's really doable to check the ldap status in a graceful way.

2015-03-31 9:45 GMT+02:00 Neil - HAProxy List
maillist-hapr...@iamafreeman.com:
 Hello

 I was thinking of updating the ldap-check but I think I've a better idea.
 Macros (well ish).

   send-binary 300c0201 # LDAP bind request ROOT simple
   send-binary 01 # message ID
   send-binary 6007 # protocol Op
   send-binary 0201 # bind request
   send-binary 03 # LDAP v3
   send-binary 04008000 # name, simple authentication
   expect binary 0a0100 # bind response + result code: success
   send-binary 30050201034200 # unbind request

 could be in a file named macros/ldap-simple-bind

 then the option
  tcp-check-macro ldap-simple-bind

 would use it, I know this is close to includes.

 similarly macros/smtp-helo-quit
  connect port 25
  expect rstring ^220
  send QUIT\r\n
  expect rstring ^221


 or from
 http://blog.haproxy.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/
 # FCGI_BEGIN_REQUEST
  send-binary   01 # version
  send-binary   01 # FCGI_BEGIN_REQUEST
  send-binary 0001 # request id
  send-binary 0008 # content length
  send-binary   00 # padding length
  send-binary   00 #
  send-binary 0001 # FCGI responder
  send-binary  # flags
  send-binary  #
  send-binary  #
  # FCGI_PARAMS
  send-binary   01 # version
  send-binary   04 # FCGI_PARAMS
  send-binary 0001 # request id
  send-binary 0045 # content length
  send-binary   03 # padding length: padding for content % 8 = 0
  send-binary   00 #
  send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET
  send-binary 0b055343524950545f4e414d452f70696e67   # SCRIPT_NAME = /ping
  send-binary 0f055343524950545f46494c454e414d452f70696e67 # SCRIPT_FILENAME
 = /ping
  send-binary 040455534552524F4F54 # USER = ROOT
  send-binary 00 # padding
  # FCGI_PARAMS
  send-binary   01 # version
  send-binary   04 # FCGI_PARAMS
  send-binary 0001 # request id
  send-binary  # content length
  send-binary   00 # padding length: padding for content % 8 = 0
  send-binary   00 #

  expect binary 706f6e67 # pong

 (though for items like
 send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET
 I'd prefer a
 send-as-binary REQUEST_METHOD = GET
 )

 these and many others could be shipped with haproxy.

 this seems to make sense to me as they are small contained logical items

 Neil


 On 30 March 2015 at 23:02, Baptiste bed...@gmail.com wrote:

 you should believe it :)

 On Mon, Mar 30, 2015 at 11:34 PM, Neil - HAProxy List
 maillist-hapr...@iamafreeman.com wrote:
  Hello
 
  Thanks so much. That worked well, I now get
  L7OK/0 in 0ms
  not sure I believe the 0ms but maybe I should
 
  Thanks again,
 
  Neil
 
  On 30 March 2015 at 22:14, Baptiste bed...@gmail.com wrote:
 
  On Mon, Mar 30, 2015 at 10:33 PM, Neil - HAProxy List
  maillist-hapr...@iamafreeman.com wrote:
   Hello
  
   I'm trying to use ldap-check with active directory and the response
   active
   directory gives is not one ldap-check is happy to accept
  
   when I give a 389 directory backend ldap server all is well, when I
   use
   AD I
   get 'Not LDAPv3 protocol'
  
   I've done a little poking about and found that
    if ((msglen > 2) ||
        (memcmp(check->bi->data + 2 + msglen,
                "\x02\x01\x01\x61", 4) != 0)) {
            set_server_check_status(check,
                HCHK_STATUS_L7RSP, "Not LDAPv3 protocol");
   is where I'm getting stopped as msglen is 4
  
   Here is tcpdump of 389 directory response (the one that works) 2
   packets
    21:29:34.195699 IP 389.ldap > HAPROXY.57109: Flags [.], ack 15, win
   905,
   options [nop,nop,TS val 856711882 ecr 20393440], length 0
   0x:  0050 5688 7042 0064 403b 2700 0800 4500
   .PV.pB.d@;'...E.
   0x0010:  0034 9d07 4000 3f06 3523 ac1b e955 ac18
   .4..@.?.5#...U..
   0x0020:  2810 0185 df15 5cab ffcd 63ba 77d3 8010
   (.\...c.w...
   0x0030:  0389 2c07  0101 080a 3310 62ca 0137
   ..,...3.b..7
   0x0040:  2de0 -.
    21:29:34.195958 IP 389.ldap > HAPROXY.57109: Flags [P.], seq 1:15,
   ack
   15,
   win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 14
   0x:  0050 5688 7042 0064 403b 2700 0800 4500
   .PV.pB.d@;'...E.
   0x0010:  0042 9d08 4000 3f06 3514 ac1b e955 ac18
   .B..@.?.5U..
   0x0020:  2810 0185 df15 5cab ffcd 63ba 77d3 8018
   (.\...c.w...
   0x0030:  0389 e878  0101 080a 3310 62ca 0137
   ...x..3.b..7
   0x0040:  2de0 300c 0201 0161 070a 0100 0400 0400
   -.0a
  
   Here is tcpdump of active directory (broken) 1 packet
  
    21:25:24.519883 IP ADSERVER.ldap > HAPROXY.57789: Flags [P.], seq
   1:23,
   ack
   15, win 260, options [nop,nop,TS val 1870785 ecr 

Re: ldap-check with Active Directory

2015-03-31 Thread Matt .
Hi Baptiste,

Yes, I've seen it too and never found a way around the large logs.

What do most people do, empty the log very often?



2015-03-31 11:29 GMT+02:00 Baptiste bed...@gmail.com:
 Hi Matt,

 The issue with LDAP is that it is not a banner protocol.
 So either you check that the TCP port is bound on the server for a
 simple L4 check, or, for L7, you have no choice: you must send a
 message and check the server's result.

 Baptiste


 On Tue, Mar 31, 2015 at 9:53 AM, Matt . yamakasi@gmail.com wrote:
 I'm also testing some ldap checks but I see lots of logging and log
 partitions filling up like crazy.

 I wonder if it's really doable to check the ldap status in a graceful
 way.

 2015-03-31 9:45 GMT+02:00 Neil - HAProxy List
 maillist-hapr...@iamafreeman.com:
 Hello

 I was thinking of updating the ldap-check but I think I've a better idea.
 Macros (well ish).

   send-binary 300c0201 # LDAP bind request ROOT simple
   send-binary 01 # message ID
   send-binary 6007 # protocol Op
   send-binary 0201 # bind request
   send-binary 03 # LDAP v3
   send-binary 04008000 # name, simple authentication
   expect binary 0a0100 # bind response + result code: success
   send-binary 30050201034200 # unbind request

 could be in a file named macros/ldap-simple-bind

 then the option
  tcp-check-macro ldap-simple-bind

 would use it, I know this is close to includes.
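For comparison, the same probe can already be written today with plain tcp-check directives; a minimal sketch of what the macro would expand to (backend name, server address and the concatenated hex string are illustrative, built from the send-binary lines above):

```
backend ldap_ad
    mode tcp
    option tcp-check
    # LDAPv3 anonymous simple bind, same bytes as the sequence above
    tcp-check send-binary 300c020101600702010304008000
    # bind response result code: success
    tcp-check expect binary 0a0100
    # unbind request
    tcp-check send-binary 30050201034200
    server ad1 192.0.2.10:389 check
```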

 similarly macros/smtp-helo-quit
  connect port 25
  expect rstring ^220
  send QUIT\r\n
  expect rstring ^221


 or from
 http://blog.haproxy.com/2014/06/06/binary-health-check-with-haproxy-1-5-php-fpmfastcgi-probe-example/
 # FCGI_BEGIN_REQUEST
  send-binary   01 # version
  send-binary   01 # FCGI_BEGIN_REQUEST
  send-binary 0001 # request id
  send-binary 0008 # content length
  send-binary   00 # padding length
  send-binary   00 #
  send-binary 0001 # FCGI responder
  send-binary  # flags
  send-binary  #
  send-binary  #
  # FCGI_PARAMS
  send-binary   01 # version
  send-binary   04 # FCGI_PARAMS
  send-binary 0001 # request id
  send-binary 0045 # content length
  send-binary   03 # padding length: padding for content % 8 = 0
  send-binary   00 #
  send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET
  send-binary 0b055343524950545f4e414d452f70696e67   # SCRIPT_NAME = /ping
  send-binary 0f055343524950545f46494c454e414d452f70696e67 # SCRIPT_FILENAME
 = /ping
  send-binary 040455534552524F4F54 # USER = ROOT
  send-binary 00 # padding
  # FCGI_PARAMS
  send-binary   01 # version
  send-binary   04 # FCGI_PARAMS
  send-binary 0001 # request id
  send-binary  # content length
  send-binary   00 # padding length: padding for content % 8 = 0
  send-binary   00 #

  expect binary 706f6e67 # pong

 (though for items like
 send-binary 0e03524551554553545f4d4554484f44474554 # REQUEST_METHOD = GET
 I'd prefer a
 send-as-binary REQUEST_METHOD = GET
 )
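As a sanity check on those hex strings, the FCGI name-value encoding with short-form lengths can be reproduced in a few lines of Python (a hypothetical helper, not part of haproxy; it only handles names/values shorter than 128 bytes, which is all the probe above uses):

```python
def fcgi_param_hex(name: str, value: str) -> str:
    # FastCGI name-value pair: one length byte each (short form),
    # followed by the raw name and value bytes
    return (bytes([len(name), len(value)]) + name.encode() + value.encode()).hex()

print(fcgi_param_hex("REQUEST_METHOD", "GET"))
# -> 0e03524551554553545f4d4554484f44474554 (matches the probe above)
```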

 these and many others could be shipped with haproxy.

 this seems to make sense to me as they are small contained logical items

 Neil


 On 30 March 2015 at 23:02, Baptiste bed...@gmail.com wrote:

 you should believe it :)

 On Mon, Mar 30, 2015 at 11:34 PM, Neil - HAProxy List
 maillist-hapr...@iamafreeman.com wrote:
  Hello
 
  Thanks so much. That worked well, I now get
  L7OK/0 in 0ms
  not sure I believe the 0ms but maybe I should
 
  Thanks again,
 
  Neil
 
  On 30 March 2015 at 22:14, Baptiste bed...@gmail.com wrote:
 
  On Mon, Mar 30, 2015 at 10:33 PM, Neil - HAProxy List
  maillist-hapr...@iamafreeman.com wrote:
   Hello
  
   I'm trying to use ldap-check with active directory and the response
   active
   directory gives is not one ldap-check is happy to accept
  
   when I give a 389 directory backend ldap server all is well, when I
   use
   AD I
   get 'Not LDAPv3 protocol'
  
   I've done a little poking about and found that
    if ((msglen > 2) ||
        (memcmp(check->bi->data + 2 + msglen,
                "\x02\x01\x01\x61", 4) != 0)) {
            set_server_check_status(check,
                HCHK_STATUS_L7RSP, "Not LDAPv3 protocol");
   is where I'm getting stopped as msglen is 4
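For context, my reading of that guard (a sketch, assuming msglen is computed as the number of long-form BER length octets at the second byte of the response; function name is illustrative): 389 Directory Server answers with a short-form length, while Active Directory typically answers with a long-form 0x84 length, giving msglen 4 and tripping the msglen > 2 check.

```python
def ber_extra_len_octets(data: bytes) -> int:
    # Number of extra length octets in the BER length field at data[1]:
    # short form (high bit clear) uses 0 extra octets, long form (high
    # bit set) uses the low 7 bits as a count of following length octets.
    first = data[1]
    return (first & 0x7f) if (first & 0x80) else 0

# 389 DS response: 30 0c ... -> short form, 0 extra octets, check passes
print(ber_extra_len_octets(bytes.fromhex("300c0201016107")))      # -> 0
# AD-style response: 30 84 ... -> long form, 4 length octets, check fails
print(ber_extra_len_octets(bytes.fromhex("30840000001002010161")))  # -> 4
```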
  
   Here is tcpdump of 389 directory response (the one that works) 2
   packets
    21:29:34.195699 IP 389.ldap > HAPROXY.57109: Flags [.], ack 15, win
   905,
   options [nop,nop,TS val 856711882 ecr 20393440], length 0
   0x:  0050 5688 7042 0064 403b 2700 0800 4500
   .PV.pB.d@;'...E.
   0x0010:  0034 9d07 4000 3f06 3523 ac1b e955 ac18
   .4..@.?.5#...U..
   0x0020:  2810 0185 df15 5cab ffcd 63ba 77d3 8010
   (.\...c.w...
   0x0030:  0389 2c07  0101 080a 3310 62ca 0137
   ..,...3.b..7
   0x0040:  2de0 -.
    21:29:34.195958 IP 389.ldap > HAPROXY.57109: Flags [P.], seq 1:15,
   ack
   15,
   win 905, options [nop,nop,TS val 856711882 ecr 20393440], length 14
   0x:  0050 5688 7042 0064 403b 2700 0800 4500
   .PV.pB.d

Re: RTMP offloading

2015-03-29 Thread Matt .
Whoops my fault while testing.

Indeed, on the backends I connect to 1935 again, I see a connection
coming in but no clear data. That part is actually my issue and
difficult to trace.

2015-03-29 16:11 GMT+02:00 Baptiste bed...@gmail.com:
 frontend rtmp_https
 bindxxx.xxx.xxx.xxx:443 name
 xxx.xxx.xxx.xxx:443 ssl  crt /var/etc/haproxy/mycert.pem
 modetcp
 log global
 maxconn 9
 timeout client  60
 use_backend rtmpbackend_tcp_ipvANY if
 default_backend rtmpbackend_tcp_ipvANY


 backend rtmpbackend_tcp_ipvANY
 modetcp
 balance leastconn
 timeout connect 3
 timeout server  3
 retries 3
 option  httpchk GET /
 server  rtmp-01 172.16.5.11:443 check-ssl
 check inter 1000  weight 100 verify none
 server  rtmp-02 172.16.5.12:443 check-ssl
 check inter 1000  weight 100 verify none

 Weren't you supposed to connect on port 1935, where traffic is unciphered?
 Can you confirm whether traffic is ciphered or not on the server's port
 443? (You seem to be mixing clear traffic over a connection which expects
 ciphered traffic on the server side.)
 Does haproxy say the servers are UP (logs, stats page, etc.)?

 Baptiste



Re: RTMP offloading

2015-03-29 Thread Matt .
Hi,

I have tried it all, also TCP; I'm configuring it using pfsense so I need
to grab it from there.

Do you have a small example of what should work? I can paste that into
pfsense then.

In my app I should just connect rtmps to port 443 on ha, offload, and
connect with normal rtmp to 1935 again, was my idea?

Thanks so far!

Cheers,

Matt

2015-03-29 15:47 GMT+02:00 Baptiste bed...@gmail.com:
 On Sun, Mar 29, 2015 at 1:05 PM, Matt . yamakasi@gmail.com wrote:
 Hi Guys,


 I'm trying to offload a rtmp connection where I connect using rtmps to
 ha proxy and offload the ssl layer there.

 In some strange way I can't get it working but I can with other
 services the same way.

 Is RTMP a hard one in this case ?

 Thanks,

 Matt


 Hi,

 Are you using mode tcp ?
 could you share your configuration?
 any error message provided by any equipment involved in your setup?

 Baptiste



Re: RTMP offloading

2015-03-29 Thread Matt .
Bapiste,

No, that was not the idea, but I was debugging with someone from
pfsense/haproxy, so the suggestions were welcome.

This is what I use for RTMP:

frontend rtmp_https
bindxxx.xxx.xxx.xxx:443 name
xxx.xxx.xxx.xxx:443 ssl  crt /var/etc/haproxy/mycert.pem
modetcp
log global
maxconn 9
timeout client  60
use_backend rtmpbackend_tcp_ipvANY if
default_backend rtmpbackend_tcp_ipvANY


backend rtmpbackend_tcp_ipvANY
modetcp
balance leastconn
timeout connect 3
timeout server  3
retries 3
option  httpchk GET /
server  rtmp-01 172.16.5.11:443 check-ssl
check inter 1000  weight 100 verify none
server  rtmp-02 172.16.5.12:443 check-ssl
check inter 1000  weight 100 verify none

2015-03-29 15:56 GMT+02:00 Baptiste bed...@gmail.com:
 Matt,

 I won't do your configuration since I have no idea what you want to do.
 Share what you did exactly, share more information about the issues
 (logs, etc...) and we may help.

 Baptiste


 On Sun, Mar 29, 2015 at 3:53 PM, Matt . yamakasi@gmail.com wrote:
 Hi,

 I have tried all, also TCP, I'm configuring it using pfsense so I need
 to grab it from there.

 Do you have a small example of what should work ? I can paste that to
 pfsense to than.

 In my app I just should connect rtmps to port 443 on ha, offload and
 connect to normal rtmp 1935 again was my idea ?

 Thanks so far!

 Cheers,

 Matt

 2015-03-29 15:47 GMT+02:00 Baptiste bed...@gmail.com:
 On Sun, Mar 29, 2015 at 1:05 PM, Matt . yamakasi@gmail.com wrote:
 Hi Guys,


 I'm trying to offload a rtmp connection where I connect using rtmps to
 ha proxy and offload the ssl layer there.

 In some strange way I can't get it working but I can with other
 services the same way.

 Is RTMP a hard one in this case ?

 Thanks,

 Matt


 Hi,

 Are you using mode tcp ?
 could you share your configuration?
 any error message provided by any equipment involved in your setup?

 Baptiste



Re: RTMP offloading

2015-03-29 Thread Matt .
Hi Cyril,

Thanks, I'm indeed using red5 in my setup, client is flex.

Just no ssl at all, so only 1935 over HA works perfectly. When I set
my frontend to ssl offloading on TCP 443, I see on my red5 server my
client coming in when connecting, but then it hangs; no other data in
the red5 log.

I will simplify this setup again to see what happens. Red5 works perfectly.

I will report.

Cheers,

Matt

2015-03-29 19:08 GMT+02:00 Cyril Bonté cyril.bo...@free.fr:
 Hi Matt,

 Le 29/03/2015 16:19, Matt . a écrit :

 Whoops my fault while testing.

 Indeed, on the backends I connect to 1935 again, I see a connection
 coming in but no clear data. That part is actually my issue and
 difficult to trace.


 Then can you re-provide the expected configuration ? Because the one you
 provided is clearly not going to work.

 Making some quick tests here, it works (but it was really quick tests in a
 very simple configuration).

 Steps :
 1. Download a red5-server release, untar, and run it.
 2. Access to http://localhost:5080/installer/ and install OFLA Demo.
 3. Test a RTMP video provided with the demo :
vlc rtmp://localhost/oflaDemo/Avengers2.mp4
 4. Configure haproxy for offloading SSL
  haproxy.cfg content :
listen rtmps
  mode tcp
  bind :443 ssl crt localhost.pem
  server rtmp localhost:1935

listen status
  mode http
  bind :
  stats enable
  stats uri /
 5. Launch haproxy (in foreground for the tests):
sudo haproxy -f haproxy.cfg
 6. Test the RTMPS video :
vlc rtmps://localhost/oflaDemo/Avengers2.mp4
= The video is played and we can see that statistics in haproxy are
 updated when the connection is closed.

 At this point, I'd recommend simplifying the configuration during the debug.
 At least, use only one server for the backends.
 Also, how do you test your rtmps streams ? with which client ? which RTMP
 server ? ...



 2015-03-29 16:11 GMT+02:00 Baptiste bed...@gmail.com:

 frontend rtmp_https
  bindxxx.xxx.xxx.xxx:443 name
 xxx.xxx.xxx.xxx:443 ssl  crt /var/etc/haproxy/mycert.pem
  modetcp
  log global
  maxconn 9
  timeout client  60
  use_backend rtmpbackend_tcp_ipvANY if
  default_backend rtmpbackend_tcp_ipvANY


 backend rtmpbackend_tcp_ipvANY
  modetcp
  balance leastconn
  timeout connect 3
  timeout server  3
  retries 3
  option  httpchk GET /
  server  rtmp-01 172.16.5.11:443 check-ssl
 check inter 1000  weight 100 verify none
  server  rtmp-02 172.16.5.12:443 check-ssl
 check inter 1000  weight 100 verify none


  Weren't you supposed to connect on port 1935, where traffic is unciphered?
  Can you confirm whether traffic is ciphered or not on the server's port
  443? (You seem to be mixing clear traffic over a connection which expects
  ciphered traffic on the server side.)
  Does haproxy say the servers are UP (logs, stats page, etc.)?

 Baptiste




 --
 Cyril Bonté



RTMP offloading

2015-03-29 Thread Matt .
Hi Guys,


I'm trying to offload a rtmp connection where I connect using rtmps to
ha proxy and offload the ssl layer there.

In some strange way I can't get it working but I can with other
services the same way.

Is RTMP a hard one in this case ?

Thanks,

Matt



Re: [PATCH] Also accept SIGHUP/SIGTERM in systemd-wrapper

2014-09-11 Thread Matt Robenolt
Hmm, so right now this is a bit confusing. The wrapper doesn't pass
along signals to the actual haproxy process afaict, so I'm not
sure that'd be an issue. If you needed to SIGHUP haproxy itself, you'd
read the pid and whatnot and handle that.

I look at this behavior as exactly what the init.d script is currently
doing with `/etc/init.d/haproxy reload`.

Using runit, there is a very specific semantic around what reload
means, and that's by sending a SIGHUP. See:
http://smarden.org/runit/sv.8.html There's no way to tell it to behave
otherwise.

Now, this isn't unsolvable because I *can* send the SIGUSR2
explicitly, but that just means that there's an oddball in a world of
uniformity. Everything else we use works with a normal `sv reload ...`
and haproxy handles `sv 2 ` instead.

So yes, a SIGHUP is preferred. And to answer your original question,
sending a SIGHUP to the wrapper today still doesn't flush haproxy's
memory pools, does it? Unless I'm mistaken and the uncaught signal is
somehow getting passed through, but I can't see how that's happening.
So from my perspective, nothing is breaking or changing.

On Thu, Sep 11, 2014 at 9:23 AM, Willy Tarreau w...@1wt.eu wrote:
 Hi Marc-Antoine,

 On Thu, Sep 11, 2014 at 11:10:10AM +0200, Marc-Antoine Perennou wrote:
 On 11 September 2014 07:44, Willy Tarreau w...@1wt.eu wrote:
  On Wed, Sep 10, 2014 at 10:38:55PM -0700, Matt Robenolt wrote:
  Awesome, thanks. :)
 
  Is it possible to also get this applied into the 1.5 branch since this is 
  low risk and doesn???t break any backwards compatibility and whatnot?
 
  I've just backported it as well. 1.5 was still missing Conrad Hoffman's
  improved signal handling, but now both patches have been merged.
 
  Willy

 Iirc, the reason why I did not use SIGHUP for the reload (which I'd
 have preferred too) is that haproxy itself uses SIGHUP, and if I used
 it in the wrapper, it became a noop for haproxy.
  Maybe I did something wrong and it works fine in its current state,
  but did you check that haproxy still handles it properly?

 Argh. Indeed that's a good reason. I must confess I never use it in
 haproxy so I hadn't thought about it. The SIGHUP is used to flush the
 memory pools. And I'm sure it will not do that anymore, instead it will
 reload haproxy. I don't know if that's a problem but I don't like much
 changing this behaviour in the stable branch.

 Matt, do you really need to use SIGHUP here ? For SIGTERM I understand,
 but SIGHUP is a bit different and less universal for that usage.

 Note, I still haven't pushed the patches to the public trees, so it's still
 time to revert/update them.

 Willy




[PATCH] Also accept SIGHUP/SIGTERM in systemd-wrapper

2014-09-10 Thread Matt Robenolt
My proposal is to let haproxy-systemd-wrapper also accept normal
SIGHUP/SIGTERM signals to play nicely with other process managers
besides just systemd. In my use case, this will be for using with
runit, which has the ability to change the signal used for a
reload or stop command. It also might be worth renaming this
bin to just haproxy-wrapper or something of that sort to separate
itself away from systemd. But that's a different discussion. :)

Thanks.

---
 src/haproxy-systemd-wrapper.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/haproxy-systemd-wrapper.c b/src/haproxy-systemd-wrapper.c
index 90a94ce..cc8baa8 100644
--- a/src/haproxy-systemd-wrapper.c
+++ b/src/haproxy-systemd-wrapper.c
@@ -158,7 +158,9 @@ int main(int argc, char **argv)
memset(sa, 0, sizeof(struct sigaction));
sa.sa_handler = signal_handler;
sigaction(SIGUSR2, sa, NULL);
+   sigaction(SIGHUP, sa, NULL);
sigaction(SIGINT, sa, NULL);
+   sigaction(SIGTERM, sa, NULL);
 
if (getenv(REEXEC_FLAG) != NULL) {
/* We are being re-executed: restart HAProxy gracefully */
@@ -180,11 +182,11 @@ int main(int argc, char **argv)
 
status = -1;
while (-1 != wait(status) || errno == EINTR) {
-   if (caught_signal == SIGUSR2) {
+   if (caught_signal == SIGUSR2 || caught_signal == SIGHUP) {
caught_signal = 0;
do_restart();
}
-   else if (caught_signal == SIGINT) {
+   else if (caught_signal == SIGINT || caught_signal == SIGTERM) {
caught_signal = 0;
do_shutdown();
}
-- 
2.0.4
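The dispatch pattern the patch relies on (several signals funneled into one handler that only records which signal arrived, with the wait loop deciding reload vs shutdown afterwards) can be sketched standalone; Python is used here for brevity, and this is POSIX-only:

```python
import os
import signal

caught = 0

def handler(signum, frame):
    # Record the signal; the main loop decides what to do with it later,
    # as haproxy-systemd-wrapper's signal_handler does with caught_signal.
    global caught
    caught = signum

# SIGUSR2 and SIGHUP share one handler, mirroring the patched sigaction calls
for sig in (signal.SIGUSR2, signal.SIGHUP):
    signal.signal(sig, handler)

os.kill(os.getpid(), signal.SIGHUP)
# Either signal maps to the same "restart" branch, as in the patched while loop
action = "restart" if caught in (signal.SIGUSR2, signal.SIGHUP) else "shutdown"
print(action)  # -> restart
```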




Re: [PATCH] Also accept SIGHUP/SIGTERM in systemd-wrapper

2014-09-10 Thread Matt Robenolt
Awesome, thanks. :)




Is it possible to also get this applied into the 1.5 branch since this is low 
risk and doesn’t break any backwards compatibility and whatnot?
--
Matt Robenolt
@mattrobenolt

On Thu, Sep 11, 2014 at 5:33 AM, Willy Tarreau w...@1wt.eu wrote:

 Hi Matt,
 On Thu, Sep 11, 2014 at 05:19:30AM +, Matt Robenolt wrote:
 My proposal is to let haproxy-systemd-wrapper also accept normal
 SIGHUP/SIGTERM signals to play nicely with other process managers
 besides just systemd. In my use case, this will be for using with
 runit which has to ability to change the signal used for a
 reload or stop command. It also might be worth renaming this
 bin to just haproxy-wrapper or something of that sort to separate
 itself away from systemd. But that's a different discussion. :)
 Thank you for this. I've got a recent report from someone who had to
 configure supervisord to use SIGINT instead of SIGTERM because of this.
 I agree that we should probably rename this wrapper. Another improvement
 would be to make it capable of only stripping -wrapper from its name
 to know what binary to call instead of searching the hardcoded haproxy.
 This is handy for people running multiple versions on the same system.
 I've applied your patch to 1.6.
 Thanks,
 Willy

help!

2014-09-01 Thread James, Matt
If you have received this email in error, please notify us by telephone on 
01437 764551 and delete it from your computer immediately. Os ydych chi wedi 
derbyn yr e-bost hwn trwy gamgymeriad, byddwch cystal â rhoi gwybod inni trwy 
ffonio 01437 764551. Wedyn dylech ddileu’r e-bost ar unwaith oddi ar eich 
cyfrifiadur.


Hi

I have some basic queries on setting haproxy up for a novice!

Are you aware of any good support forums out there?

Thanks

Matt




Re: Rewrite domain.com to other domain.com/dir/subdir

2014-05-28 Thread Matt .
The normal redirect is working, but converting it to a rewrite is where I'm stuck.

Should I use an ACL upfront that looks in the map and do an if on that
or is the ACL not needed at all ?

As I was busy looking at how Varnish can accomplish this (using a MySQL
database), I need to check this again, but I know I was already stuck at
that part because of the various examples that do the same rewrites
in different ways.

2014-05-28 20:50 GMT+02:00 Bryan Talbot bryan.tal...@playnext.com:
 On Wed, May 28, 2014 at 2:49 AM, Matt . yamakasi@gmail.com wrote:

 I'm still struggling here and also looking at Varnish to see if it can
 accomplish it.


 What have you tried and what part of that is not working as you expect?




  I think HAProxy is the way to go, as I also use it for normal loadbalancing,
  but this is another chapter for sure...

 Any help is welcome!



 -Bryan




Re: Rewrite domain.com to other domain.com/dir/subdir

2014-05-28 Thread Matt .
Hi Bryan,

Yes, I came up to that part, but about the search in the map, do I need
to do it twice?

2014-05-28 23:28 GMT+02:00 Bryan Talbot bryan.tal...@playnext.com:
 On Wed, May 28, 2014 at 11:57 AM, Matt . yamakasi@gmail.com wrote:

 The normal redirect is working but convirt it to a rewrite is where I'm
 stuck.

 Should I use an ACL upfront that looks in the map and do an if on that
 or is the ACL not needed at all ?



 The example in the reqirep section of the documentation seems to mostly do
 what you're asking.


 http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#reqirep

 Does that not work?

 This will rewrite foo.com/baz.jpg - newdomain.com/com/foo/baz.jpg

   reqirep ^Host:\ foo.com Host:\ newdomain.com
   reqirep ^GET\ /(.*) GET\ /com/foo/\1




 -Bryan




Re: Rewrite domain.com to other domain.com/dir/subdir

2014-05-27 Thread Matt .
HI All,

I have searched a lot about this and it's not clear to me.

Do we need to use ACLs in this matter or not? I can make a simple rewrite,
but it's unclear to me how to use the lookup from a map (file).

Any suggestions ?

Thanks!

Matt
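For the map lookup itself, a minimal sketch of what this could look like in 1.5 (the map file path and frontend name are illustrative, and this only rewrites the Host header; the path prefix would still need a reqirep as in Bryan's example):

```
frontend web
    bind :80
    # rewrite the Host header via a map lookup when an entry exists
    http-request set-header Host %[req.hdr(host),map_str(/etc/haproxy/rewrites.map)] if { req.hdr(host),map_str(/etc/haproxy/rewrites.map) -m found }
    default_backend webservers
```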




2014-05-26 17:33 GMT+02:00 Matt . yamakasi@gmail.com:

 Hi Baptiste,

 OK, I have also seen examples (not for domains as I need them) that use
 ACLs in front.

 What do you think about that, and can you give me some example? It's kinda
 confusing what I found using google and mailing lists.

 Cheers,

 Matt


 2014-05-26 17:22 GMT+02:00 Baptiste bed...@gmail.com:

 On Mon, May 26, 2014 at 4:15 PM, Matt . yamakasi@gmail.com wrote:
  Hi All,
 
  In order to my earlier topic about redirecting a domainname I actually
 want
  to rewrite one too.
 
  Let's says I have domainname2.com and I want to show
  domainname.com/dir/subdir under that domain, can this be done with
 reqrep ?
 
  It seems that this is the way, but I cannot find any example that does
 so.
 
  Do I also need the forward option ?
 
  Thanks!
 
  Matt

 Hi Matt,

 You have to do a couple of reqirep/reqrep.
 One for the Host header, one for the URL path.

 Baptiste





Re: Add Domain redirects using API or ?

2014-05-26 Thread Matt .
Hi guys.

Now that I have this working I see it's a real redirect, and I actually need a
rewrite; is this possible too in this manner?

Cheers,

Matt


2014-05-24 0:46 GMT+02:00 Matt . yamakasi@gmail.com:

 Hi,

  I'm getting a strange error, which varies when I change it in the
  frontend.

 Is there maybe a typo in yours ?




 2014-05-23 16:34 GMT+02:00 Baptiste bed...@gmail.com:

 You can set a map entry, it will erase then create the entry.

 And HAProxy will take it into account on the fly, without doing anything.
 You could even forward traffic to your webservers and let haproxy
 learn the redirect on the fly.

 Remember, HAProxy is art:
 https://twitter.com/malditogeek/status/243020846875152384#

 Baptiste

 On Fri, May 23, 2014 at 4:00 PM, Matt . yamakasi@gmail.com wrote:
  So when you remove a line and there is no line like it... just nothing
  happens as it should ?
 
  But what if you add one that is already there ? Will it be added twice
 ? If
  so and you do a remove will both be removed ?
 
 
  2014-05-23 15:22 GMT+02:00 Baptiste bed...@gmail.com:
 
  There is no reply, it is silently performed.
 
 
  Baptiste
 
  On Fri, May 23, 2014 at 3:07 PM, Matt . yamakasi@gmail.com
 wrote:
   Hi,
  
   OK, that is a very good explanation!
  
   It's also very flexible in my opinion.
  
   Does hsproxy give a reply/callback after adding/removing ? I'm not
 sure
   but
   I thought it did.
  
   I also did a reply-all this time, sorry for last time!
  
   Cheers,
  
   Matt
  
  
   2014-05-23 14:07 GMT+02:00 Baptiste bed...@gmail.com:
  
   Hi Matt,
  
   I'm Ccing the ML since the answer can interest everybody here.
  
Thanks for you explanation... I found something indeed on the
 devel
version
yesterday, you can also remove this way I saw ?
  
   yes, you can delete content from a map thanks to the socket or
 through
   information found in HTTP headers.
  
What do you mean by filecontents on reload ?
  
   I mean that the content of the map is read from a flat file.
   If you modify running map, HAProxy only updates its memory, not the
   flat
   file.
   So after a reload, if the flat file does not contain same content as
   HAProxy's memory, then updates are lost.
  
What I add this was is added to memory and not to the file ?
  
   exactly
  
So, I need to sync the file with the memory in some way ?
  
   yes.
   This can be done easily with a tool since you can dump a map content
   from HAProxy's socket.
  
   Baptiste
  
  
   
   
2014-05-23 10:17 GMT+02:00 Baptiste bed...@gmail.com:
   
Hi Matt,
   
You have to use HAProxy 1.5.
You can load redirects from a map file.
Map file content, 2 columns, with on the left the reference (what
you're looking from in the client request) and on the right the
response to send back.
domain2.com subdomain.domain1.com
   
Then, in your frontend, simply add:
http-request redirect code 302 prefix
http://%[req.hdr(host),map_str(map_redirects.lst)] if {
req.hdr(Host),map_str(map_redirects.lst) -m found }
   
Content of map_redirects.lst:
domain2.com subdomain.domain1.com
   
If the domain is not listed, then HAProxy will return a 503.
   
Here are some results:
GET http://127.0.0.1:8080/ -H Host: domain2.com
   
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: http://subdomain.domain1.com/
Connection: close
   
   
GET http://127.0.0.1:8080/blah -H Host: domain2.com
   
HTTP/1.1 302 Found
Cache-Control: no-cache
Content-length: 0
Location: http://subdomain.domain1.com/blah
Connection: close
   
   
   
GET http://127.0.0.1:8080/ -H Host: domain1.com
   
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Connection: close
Content-Type: text/html
   
   
   
   
The content of the map can be updated through the HAProxy socket
 or
though HTTP headers.
Read the manual to know how.
   
Bear in mind HAProxy will reset its memory with the content of
 the
file when reloading. So it's up to you to sync the memory of
 HAProxy
and the content of the file.
   
Baptiste
   
   
On Thu, May 22, 2014 at 11:08 PM, Matt . yamakasi@gmail.com
 
wrote:
 Babtiste,

 I'm not able to find any solution to add such rewrites, am I
 looking
 wrong ?

 Cheers,

 Matt


 2014-05-22 16:37 GMT+02:00 Matt . yamakasi@gmail.com:

 Hi,

 That is nice, is that in the development version ? I didn't
 see
 it
 in
 1.4
 as I'm right.

 I need to forward domain2.com to subdomain.domain1.com

 and subdomain.domain1.com may be a various of webservers that
 serve
 that
 content.

 Thanks!

 Matt


   
   
  
  
 
 





Rewrite domain.com to other domain.com/dir/subdir

2014-05-26 Thread Matt .
Hi All,

Further to my earlier topic about redirecting a domain name, I actually want
to rewrite one too.

Let's say I have domainname2.com and I want to show
domainname.com/dir/subdir under that domain; can this be done with reqrep?

It seems that this is the way, but I cannot find any example that does so.

Do I also need the forward option ?

Thanks!

Matt


Re: Rewrite domain.com to other domain.com/dir/subdir

2014-05-26 Thread Matt .
Hi Baptiste,

OK, I have also seen examples (not for domains as I need them) that use
ACLs in front.

What do you think about that, and can you give me some example? It's kinda
confusing what I found using google and mailing lists.

Cheers,

Matt


2014-05-26 17:22 GMT+02:00 Baptiste bed...@gmail.com:

 On Mon, May 26, 2014 at 4:15 PM, Matt . yamakasi@gmail.com wrote:
  Hi All,
 
  In order to my earlier topic about redirecting a domainname I actually
 want
  to rewrite one too.
 
  Let's says I have domainname2.com and I want to show
  domainname.com/dir/subdir under that domain, can this be done with
 reqrep ?
 
  It seems that this is the way, but I cannot find any example that does
 so.
 
  Do I also need the forward option ?
 
  Thanks!
 
  Matt

 Hi Matt,

 You have to do a couple of reqirep/reqrep.
 One for the Host header, one for the URL path.

 Baptiste



Re: Add Domain redirects using API or ?

2014-05-23 Thread Matt .
Hi,

OK, that is a very good explanation!

It's also very flexible in my opinion.

Does haproxy give a reply/callback after adding/removing? I'm not sure, but
I thought it did.

I also did a reply-all this time, sorry for last time!

Cheers,

Matt


2014-05-23 14:07 GMT+02:00 Baptiste bed...@gmail.com:

 Hi Matt,

 I'm Ccing the ML since the answer can interest everybody here.

  Thanks for your explanation... I found something indeed on the devel
 version
  yesterday, you can also remove this way I saw ?

 yes, you can delete content from a map thanks to the socket or through
 information found in HTTP headers.

  What do you mean by filecontents on reload ?

 I mean that the content of the map is read from a flat file.
 If you modify running map, HAProxy only updates its memory, not the flat
 file.
 So after a reload, if the flat file does not contain same content as
 HAProxy's memory, then updates are lost.

  What I add this was is added to memory and not to the file ?

 exactly

  So, I need to sync the file with the memory in some way ?

 yes.
 This can be done easily with a tool since you can dump a map content
 from HAProxy's socket.

 Baptiste
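One way to do that sync, sketched in Python: the dump below is a hypothetical capture of what `show map` returns over the stats socket (e.g. via `echo "show map /etc/haproxy/map_redirects.lst" | socat stdio /var/run/haproxy.sock`); each line starts with the entry's internal pointer, which is dropped to rebuild the flat file.

```python
# Hypothetical dump captured from the stats socket; pointers are illustrative.
dump = """\
0x55d1a2b3c4d0 domain2.com subdomain.domain1.com
0x55d1a2b3c4f0 domain3.com subdomain2.domain1.com"""

# Drop the leading internal entry pointer, keeping "key value" pairs
# in the same two-column format the flat map file uses.
entries = [" ".join(line.split()[1:]) for line in dump.splitlines()]
print("\n".join(entries))
```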


 
 
  2014-05-23 10:17 GMT+02:00 Baptiste bed...@gmail.com:
 
  Hi Matt,
 
  You have to use HAProxy 1.5.
  You can load redirects from a map file.
  Map file content, 2 columns, with on the left the reference (what
  you're looking from in the client request) and on the right the
  response to send back.
  domain2.com subdomain.domain1.com
 
  Then, in your frontend, simply add:
  http-request redirect code 302 prefix
  http://%[req.hdr(host),map_str(map_redirects.lst)] if {
  req.hdr(Host),map_str(map_redirects.lst) -m found }
 
  Content of map_redirects.lst:
  domain2.com subdomain.domain1.com
 
  If the domain is not listed, then HAProxy will return a 503.
 
  Here are some results:
  GET http://127.0.0.1:8080/ -H Host: domain2.com
 
  HTTP/1.1 302 Found
  Cache-Control: no-cache
  Content-length: 0
  Location: http://subdomain.domain1.com/
  Connection: close
 
 
  GET http://127.0.0.1:8080/blah -H Host: domain2.com
 
  HTTP/1.1 302 Found
  Cache-Control: no-cache
  Content-length: 0
  Location: http://subdomain.domain1.com/blah
  Connection: close
 
 
 
  GET http://127.0.0.1:8080/ -H Host: domain1.com
 
  HTTP/1.0 503 Service Unavailable
  Cache-Control: no-cache
  Connection: close
  Content-Type: text/html
 
 
 
 
  The content of the map can be updated through the HAProxy socket or
  though HTTP headers.
  Read the manual to know how.
 
  Bear in mind HAProxy will reset its memory with the content of the
  file when reloading. So it's up to you to sync the memory of HAProxy
  and the content of the file.
 
  Baptiste
 
 
  On Thu, May 22, 2014 at 11:08 PM, Matt . yamakasi@gmail.com
 wrote:
   Babtiste,
  
   I'm not able to find any solution to add such rewrites, am I looking
   wrong ?
  
   Cheers,
  
   Matt
  
  
   2014-05-22 16:37 GMT+02:00 Matt . yamakasi@gmail.com:
  
   Hi,
  
   That is nice, is that in the development version ? I didn't see it in
   1.4, if I'm right.
  
   I need to forward domain2.com to subdomain.domain1.com
  
   and subdomain.domain1.com may be any of various webservers that serve
   that content.
  
   Thanks!
  
   Matt
  
  
 
 



Re: Add Domain redirects using API or ?

2014-05-23 Thread Matt .
So when you remove a line and there is no line like it... just nothing
happens as it should ?

But what if you add one that is already there ? Will it be added twice ? If
so and you do a remove will both be removed ?


2014-05-23 15:22 GMT+02:00 Baptiste bed...@gmail.com:

 There is no reply, it is silently performed.

 Baptiste
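
As a concrete illustration of the socket updates discussed here, a hypothetical session might look like this (the map path and socket path are examples only, and this assumes a recent 1.5 build with a `stats socket` configured):

```shell
# All of these are applied in memory only; the flat file on disk is
# untouched, which is why a reload reverts to the file's content.
echo "show map /etc/haproxy/map_redirects.lst" | socat stdio /var/run/haproxy.stat
echo "add map /etc/haproxy/map_redirects.lst domain3.com www.domain1.com" | socat stdio /var/run/haproxy.stat
echo "set map /etc/haproxy/map_redirects.lst domain3.com other.domain1.com" | socat stdio /var/run/haproxy.stat
echo "del map /etc/haproxy/map_redirects.lst domain3.com" | socat stdio /var/run/haproxy.stat
```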

 On Fri, May 23, 2014 at 3:07 PM, Matt . yamakasi@gmail.com wrote:
  Hi,
 
  OK, that is a very good explanation!
 
  It's also very flexible in my opinion.
 
  Does haproxy give a reply/callback after adding/removing ? I'm not sure
  but I thought it did.
 
  I also did a reply-all this time, sorry for last time!
 
  Cheers,
 
  Matt
 
 
  2014-05-23 14:07 GMT+02:00 Baptiste bed...@gmail.com:
 
  Hi Matt,
 
  I'm Ccing the ML since the answer can interest everybody here.
 
   Thanks for your explanation... I found something indeed on the devel
   version yesterday; you can also remove this way too, I saw ?
 
  yes, you can delete content from a map thanks to the socket or through
  information found in HTTP headers.
 
   What do you mean by file contents on reload ?
 
  I mean that the content of the map is read from a flat file.
  If you modify a running map, HAProxy only updates its memory, not the
  flat file.
  So after a reload, if the flat file does not contain the same content as
  HAProxy's memory, then the updates are lost.
 
   So what I add this way is added to memory and not to the file ?
 
  exactly
 
   So, I need to sync the file with the memory in some way ?
 
  yes.
  This can be done easily with a tool since you can dump a map content
  from HAProxy's socket.
 
  Baptiste
 
 
  
  
  
 
 



Re: Add Domain redirects using API or ?

2014-05-23 Thread Matt .
I like art! Thanks!!


2014-05-23 16:34 GMT+02:00 Baptiste bed...@gmail.com:

 You can set a map entry; it will erase then re-create the entry.
 And HAProxy will take it into account on the fly, without you doing anything.
 You could even forward traffic to your webservers and let haproxy
 learn the redirects on the fly.

 Remember, HAProxy is art:
 https://twitter.com/malditogeek/status/243020846875152384#

 Baptiste

 On Fri, May 23, 2014 at 4:00 PM, Matt . yamakasi@gmail.com wrote:
  So when you remove a line and there is no line like it... just nothing
  happens as it should ?
 
  But what if you add one that is already there ? Will it be added twice ?
 If
  so and you do a remove will both be removed ?
 
 
  2014-05-23 15:22 GMT+02:00 Baptiste bed...@gmail.com:
 
  There is no reply, it is silently performed.
 
 
  Baptiste
 
  On Fri, May 23, 2014 at 3:07 PM, Matt . yamakasi@gmail.com wrote:
   Hi,
  
   OK, that is a very good explanation!
  
   It's also very flexible in my opinion.
  
   Does hsproxy give a reply/callback after adding/removing ? I'm not
 sure
   but
   I thought it did.
  
   I also did a reply-all this time, sorry for last time!
  
   Cheers,
  
   Matt
  
  
   2014-05-23 14:07 GMT+02:00 Baptiste bed...@gmail.com:
  
   Hi Matt,
  
   I'm Ccing the ML since the answer can interest everybody here.
  
Thanks for you explanation... I found something indeed on the devel
version
yesterday, you can also remove this way I saw ?
  
   yes, you can delete content from a map thanks to the socket or
 through
   information found in HTTP headers.
  
What do you mean by filecontents on reload ?
  
   I mean that the content of the map is read from a flat file.
   If you modify running map, HAProxy only updates its memory, not the
   flat
   file.
   So after a reload, if the flat file does not contain same content as
   HAProxy's memory, then updates are lost.
  
What I add this was is added to memory and not to the file ?
  
   exactly
  
So, I need to sync the file with the memory in some way ?
  
   yes.
   This can be done easily with a tool since you can dump a map content
   from HAProxy's socket.
  
   Baptiste
  
  
   
   
   
   
  
  
 
 



Add Domain redirects using API or ?

2014-05-22 Thread Matt .
Hi,

That is nice, is that in the development version ? I didn't see it in 1.4,
if I'm right.

I need to forward domain2.com to subdomain.domain1.com

and subdomain.domain1.com may be any of various webservers that serve that
content.

Thanks!

Matt


Re: Add Domain redirects using API or ?

2014-05-22 Thread Matt .
Baptiste,

I'm not able to find any solution to add such rewrites; am I looking in the wrong place ?

Cheers,

Matt


2014-05-22 16:37 GMT+02:00 Matt . yamakasi@gmail.com:

 Hi,

 That is nice, is that in the development version ? I didn't see it in 1.4,
 if I'm right.

 I need to forward domain2.com to subdomain.domain1.com

 and subdomain.domain1.com may be any of various webservers that serve that
 content.

 Thanks!

 Matt



How do I use the map feature in haproxy to build massive redirect tables

2014-04-11 Thread Matt Higgins
I think the subject covers it. I need to build a list of many 301 redirects 
within the same domain, i.e. redirect /some_old_place to /some_new_place.
We have lots of content that gets retired or replaced based on URL. The quote 
from the current home page of haproxy states maps may be used to build 
massive redirect tables, but for the life of me I can't figure it out. I am 
using version 1.5 and here are the lines of the config which I am currently 
working on.

redirect location %[map(/opt/local/etc/haproxy/redirect.map), hdr(url)] 
code 301

Given the above config line I get

[ALERT] 099/205518 (27981) : parsing 
[/opt/local/etc/haproxy/haproxy.cfg:83] : 
error detected in frontend 'LoadBalancer' while parsing redirect rule : 
expects 'code', 'prefix', 'location', 'scheme', 'set-cookie', 'clear-cookie', 
'drop-query' or  'append-slash' (was 'hdr(url)]').

I have tried many variations of this but have not found any success. If anyone
knows how to do this, please let me know.
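
For what it's worth, the parse error above is triggered by the space before `hdr(url)]` (the argument after the space is parsed as a separate redirect keyword), and `hdr(url)` is not a valid fetch. A commonly suggested form, patterned on the map-based redirect examples elsewhere in this archive and left here as an unverified sketch, maps the request path instead:

```
# Sketch only -- map file format, one rule per line:
#   /some_old_place /some_new_place
http-request redirect code 301 location %[path,map(/opt/local/etc/haproxy/redirect.map)] if { path,map(/opt/local/etc/haproxy/redirect.map) -m found }
```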





HttpOnly flag for persistence cookies

2012-05-30 Thread Matt Brock
Hi.

I have a client who needed all cookies to contain the HttpOnly flag in order to 
pass a penetration test for PCI compliance. I couldn't see a way of adding this 
flag to HAProxy's persistence cookies. Would it therefore be possible to add an 
'httponly' option for the 'cookie' parameter?

As an interim measure I modified src/proto_http.c to add the flag to all 
persistence cookies:

5348a5349,5350
>		len += sprintf(trash+len, "; HttpOnly");
>

I hope this is something which can be added permanently as an option, otherwise 
it seems quite awkward for certain HAProxy users needing to pass compliance 
tests.

Cheers,

Matt.

--
mattbrock.co.uk


Re: VM vs bare metal and threading

2012-01-17 Thread Matt Banks
Willy,

As always, you were spot on.

The VSphere 5/No swap/disable SMP build of Gentoo (that Vsphere thinks is RHEL 
so we can use the vmxnet driver) we have now is basically running on par with 
the bare metal one.

Thanks for the help. I'll probably have to look at our other configs (including 
some in Solaris x86 zones) and disable swap to see if that helps (they don't 
have nearly the traffic.)

matt


On Jan 14, 2012, at 12:24 AM, Willy Tarreau wrote:

 Hi John,
 
 On Fri, Jan 13, 2012 at 06:02:54PM -0500, John Lauro wrote:
 There are all sorts of kernel tuning parameters under /proc that can make
 a big difference, not to mention what type of virtual NIC you have in the
 VM.  Are they running the same kernel version and Gentoo release?  Have
 you compared sysctl.conf (or whatever gento uses to customize settings in
 /proc)?
 
 Generally I prefer to run haproxy (and only haproxy) in 1 CPU vms (less
 CPUs, lower latency from the vm scheduler), with haproxy, my only
 exception is if I also want ssl and load is higher.  When dealing with
 high rates and larger number of connections make sure you don't go low on
 RAM.  Haproxy goes exponentially worse as it starts to swap, in fact
 running swapoff -a isn't a bad idea, especially for bench testing... and
 it takes a lot more ram to support 8000 connections/sec than 300.
 
 In summary, check RAM, and /proc tuning.
 
 You're perfectly right with the RAM issues, as swapping should *NEVER*
 happen on any component involved in web processing. A single swapout/swapin
 cycle is noticeable by the user, and this goes worse with more users, to the
 point the site totally stops responding. This is why admins are always very
 careful not to push MaxClients too far on Apache.
 
 And I too run with swap disabled !
 
 I remember having spent weeks tracking down an issue where haproxy was
 logging network issues (retransmits when establishing connections and
 even receiving requests). In the end we found that a script on the machine
 was using curl to upload the daily logs, and this old version of curl used
 to buffer all the file to RAM before sending it. Days of large traffic were
 causing many things to be swapped out, and TCP buffers to be shrunk, so that
 for about half a day after the event, drops and retransmits were still quite
 common, causing huge response time delays.
 
 Getting back to Matt's issue, it's nothing new that VMs are *much* slower
 than bare metal for latency sensitive applications. If you look at how the
 CPU usage is spread in haproxy, you'll often see 15% user and 85% system,
 and the ratio can drift to 1%+99% when transferring large objects. In the
 system, haproxy only uses the network stack. So that's simple : on average,
 the network stack is responsible for 85% to 99% of the performance. That's
 why we try hard to reduce the number of system calls and to merge TCP
 segments when that's possible.
 
 When you add an hypervisor between the kernel and the hardware, there's no
 secret : you have to pass through 2 layers, and whatever optimizations have
 been performed in the kernel are lost due to this extra work.
 
 At Exceliance, we've spent a lot of time benchmarking hypervisors. It
 happens that VSphere 5 is much much faster than ESX 3, around 5x, meaning
 a much lower overhead. But still it's around half the performance of the
 bare metal. Other hypervisors we've tried are still even slower than ESX 3.
 Sometimes, a network driver can make a major change, and even changing the
 hardware NIC can make important changes.
 
 Virtualization is fine for CPU intensive jobs where latency is not a problem,
 such as number crunching. SSL offloading is not much affected by 
 virtualization
 since most of the work already takes maximum user-land CPU. Same for Java or
 Ruby apps. But if you need very high performance, you wouldn't want to run
 components such as a firewall, router, load balancer or proxy in a VM, unless
 you're ready to waste a lot of power. Note that many people do that for
 convenience reasons, but I still find it wasteful to consume twice the power
 for the same job.
 
 Please note that your numbers seem low, and I don't know if this is because
 of the object sizes or not. Session rate is measured on small (ideally empty)
 objects, and byte rate is measured on large objects. Small objects on a 2 GHz
 machine should be around 20-25000, not 8000. But since you say that you're
 limited by the backends too, it can still be normal.
 
 And your 3500 in Vsphere 4 seems low too, unless those are already large
 objects. I have memories of 6500 on ESX3 with a Core2Duo 3 GHz. Check what
 NIC you're emulating, prefer vmxnet and try with several hardware NICs in
 this machine (e1000e are fine in general). And also ensure that no other
 VM is started when you run the test, otherwise you'll never get acceptable
 numbers (which is the height of virtualization) !
 
 Regards,
 Willy
 
 
 -Original Message-
 From: Matt Banks [mailto:mattba

VM vs bare metal and threading

2012-01-13 Thread Matt Banks
All,

I'm not sure what the issue is here, but I wanted to know if there was an easy 
explanation for this.

We've been doing some load testing of HAProxy and have found the following:

HAProxy (both 1.4.15 and 1.4.19 builds) running under Gentoo in a 2 vCPU VM 
(Vsphere 4.x) running on a box with a Xeon x5675 (3.06 GHz current gen 
Westmere) maxes out (starts throwing 50x errors) at around a session rate of 
3500.

However, copies of the same binaries pointed at the same backend servers on a 
Gentoo box (bare metal) with 2x E5405 (2.00GHz - Q4,2007 launch) top out at a 
session rate of around 8000 - at which point the back end servers start to fall 
over. And that HAProxy machine is doing LOTS of other things at the same time.

Here's the reason for the query: We're not sure why, but the bare metal box 
seems to be balancing the load better across CPUs. (We're using the same 
config file, so nbproc is set to 1 for both setups.) Most of our HAProxy setups 
aren't really getting hit hard enough to tell if multiple CPUs are being used 
or not, as their session rates typically stay around 300-400.

We know it's not virtualization in general because we have a virtual machine in 
the production version of this system that achieves higher numbers on lesser 
hardware.

Just wondering if there is somewhere we should start looking.

TIA.
matt


Re: HAProxy Response time performance

2011-06-10 Thread Matt Christiansen
That's good to know; while 2000 concurrent connections is what we do right
now, it will be closer to 10,000 concurrent connections come the
holiday season, which is closer to 2.5 GB of RAM (still less than what's
on the server).

One thought I have is that our requests can be very large at times (big
headers, super huge cookies); it may not be packet loss that the
bigger buffer is fixing but a better ability to buffer our large
requests. Which might explain why nginx wasn't showing this issue
whereas haproxy was.

We don't have any HP servers or Broadcom NICs (all Intel). I too have
had a lot of issues in general with both HP and Broadcom, and chose
hardware for our LB that didn't have those NICs.

Our switches are new, but not super high quality (Netgears); it's
possible they are not performing as well as we would like. I'll have to
do some more tests on them.

I'm working on creating a more production-like lab where I can test a
number of different aspects of the LB to see what else I can do in
terms of performance. I will make lots of use of halog -srv along with
other tools to measure performance and to see if I can track down any
issues in our current H/W setup.

Thanks for all the help,

Matt C

On Thu, Jun 9, 2011 at 10:20 PM, Willy Tarreau w...@1wt.eu wrote:
 On Thu, Jun 09, 2011 at 04:04:26PM -0700, Matt Christiansen wrote:
 I added in tune.bufsize 65536 and right away things got better, I
 doubled that to 131072 and all of the outliers went away. Set at that
 with my tests it looks like haproxy is faster than nginx on 95% of
 responses and on par with nginx for the last 5% which is fine with me
 =).

 Nice, at least we have a good indication of what may be wrong. I'm
 pretty sure you're having an important packet loss rate.

 What is the negative to setting this high like that? If its just ram
 usage all of our LBs have 16GB of ram (don't ask why) so if thats all
 I don't think it will be an issue having that so high.

 Yes it's just an impact on RAM. There are two buffers per connection,
 so each connection consumes 256kB of RAM in your case. If you do that
 times 2000 concurrent connections, that's 512MB, which is still small
 compared to what is present in the machine :-)
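
Willy's RAM figure can be reproduced with trivial arithmetic; this is pure illustration using the numbers from the thread:

```python
bufsize = 131072       # the doubled buffer size from the test above, in bytes
buffers_per_conn = 2   # haproxy keeps one request and one response buffer
conns = 2000           # concurrent connections in this deployment

per_conn = bufsize * buffers_per_conn   # 262144 bytes = 256 kB per connection
total = per_conn * conns                # total buffer memory at full load
print(per_conn // 1024, "kB per connection")   # 256 kB
print(total // (1024 * 1024), "MB total")      # 500 MB, i.e. the ~512 MB quoted
```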

 However, you should *really* try to spot what is causing the issue,
 because right now you're just hiding it under the carpet, and it's not
 completely hidden as retransmits still take some time to be sent.

 Many people have encountered the same problem with Broadcom NetXtreme2
 network cards, which was particularly marked on those shipped with a
 lot of HP machines (firmware 1.9.6). The issue was a huge Tx drop rate
 (which is not reported in netstat). A tcpdump on the machine and another
 one on the next hop can show that some outgoing packets never reach their
 destination.

 It is also possible that one equipment is dying (eg: a switch port) and
 that the issue will get worse with time.

 You should pass halog -srv on your logs which exhibit the varying
 times. It will output the average connection times and response times
 per server. If you see that all servers are affected, you'll conclude
 that the issue is closer to haproxy. If you see that just a group of
 servers is affected, you'll conclude that the issue only lies around
 them (maybe you'll identify a few older servers too).

 Regards,
 Willy
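
For reference, the halog invocation suggested above looks like this (the log path is an example; halog ships in the haproxy source tree under contrib/halog/):

```shell
# Per-server average connect/response times from the haproxy log,
# to spot whether all servers or only a group are affected.
halog -srv < /var/log/haproxy.log
```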





HAProxy Response time performance

2011-06-09 Thread Matt Christiansen
Hello,

I am wanting to move to HAProxy for my load balancing solution. Overall
I have been greatly impressed with it. It has way more throughput
and can handle way more connections than our current LB solution
(nginx). I have been noticing one issue in all of our tests though: the
TP99.9 (and greater) of response times is much, MUCH higher than nginx,
and we have a lot of outliers.

Our test makes a call to the VIP and times how long it takes to
receive the data back, then pauses for a second or two and makes the next
request. In both of the sample results below I did 2000 requests.

HAProxy

Average: 39.71128451818
Median: 29.4217891182
tp90: 67.48199012481
tp99: 313.29083442688
tp99.9: 562.318801879883
Over 500ms: 10
Over 2000ms: 0

nginx

Average: 69.6072148084641
Median: 59.2541694641113
tp90: 87.6350402832031
tp99: 112.42142221222
tp99.9: 180.88918274272
Over 500ms: 0
Over 2000ms: 0

So as you can see a big difference in the TP99.9 and a big difference
in the outlier count but the average and median response time are
really low.

We are running a pretty stock CentOS 5.6 server install with HAProxy
1.4.15. HAProxy isn't using more than about 4% of the CPU, and the
system CPU is closer to 12%.

I was wondering if you guys had any obvious response time related
performance tweaks I can try. If you need more info let me know too.

Thanks,
Matt C.



Re: HAProxy Response time performance

2011-06-09 Thread Matt Christiansen
I turned on those two options and they seemed to help a little.

We don't have a 2.6.30+ kernel so I don't believe option
splice-response will work(?). That's one of the things I'm going to try
next.

I used halog to narrow down the sample; it was still a few hundred lines
so I picked three at random.

Jun  1 14:19:59 localhost haproxy[3124]: 76.102.107.85:28023
[01/Jun/2011:14:19:48.502] recs runtimes/sf-102 8062/0/0/3123/+11185
200 +814 - -  1267/1267/18/14/0 0/0 {Apache-Coyote/1.1|3827|||}
Jun  1 14:19:09 localhost haproxy[3124]: 96.229.202.77:56011
[01/Jun/2011:14:19:00.861] recs runtimes/sf-103 4982/0/0/3956/+8938
200 +426 - -  1214/1212/39/39/0 0/0 {Apache-Coyote/1.1|622|||}
Jun  1 14:22:09 localhost haproxy[3124]: 108.68.28.81:59854
[01/Jun/2011:14:19:02.218] recs runtimes/sf-110 3731/0/0/3844/+7575
200 +523 - -  1214/1212/45/43/0 0/0 {Apache-Coyote/1.1|4856|||}

If you need more I can attach the log; I'm removing the request URL and
referrer just because it has client info in it, and I'll have to ask if
that's OK to post.

Matt C.


2011/6/9 Hervé COMMOWICK hcommow...@exosec.fr:
 Hello Matt,

 You need to activate logging to see what happens to your requests; you
 can use the halog tool (in the contrib folder) to filter out fast
 requests.

 Other things you can enable to reduce latency are :
 option tcp-smart-accept
 option tcp-smart-connect

 and finally you can test :
 option splice-response
 But this one will depend on your kind of traffic.

 the next release, 1.4.16, has some improvements in latency
 (http://www.mail-archive.com/haproxy@formilux.org/msg05080.html); i
 think you can give it a try, take the daily snapshot for this.

 Regards,

 Hervé.

 On Wed, 8 Jun 2011 23:57:38 -0700
 Matt Christiansen ad...@nikore.net wrote:

 Hello,

 I am wanting to move to HAProxy for my load balancing solution. Over
 all I have been greatly impressed with it. It has way more throughput
 and can handle way more connections then our current LB Solution
 (nginx). I have been noticing one issue in all of our tests though, it
 seems like in the TP99.9 (and greater) of response times is much MUCH
 higher then nginx and we have a lot of outliers.

 Our test makes a call to the VIP and times the time it takes to
 receive the data back then pauses for a sec or two and makes the next
 response. In both of the sample results below I did 2000 requests.

 HAProxy

 Average: 39.71128451818
 Median: 29.4217891182
 tp90: 67.48199012481
 tp99: 313.29083442688
 tp99.9: 562.318801879883
 Over 500ms: 10
 Over 2000ms: 0

 nginx

 Average: 69.6072148084641
 Median: 59.2541694641113
 tp90: 87.6350402832031
 tp99: 112.42142221222
 tp99.9: 180.88918274272
 Over 500ms: 0
 Over 2000ms: 0

 So as you can see a big difference in the TP99.9 and a big difference
 in the outlier count but the average and median response time are
 really low.

 We are running a pretty stock centos 5.6 server install with HAProxy
 1.4.15, HAProxy isn't using more then like 4% of the CPU and the
 System CPU is closer to 12%.

 I was wondering if you guys had any obvious response time related
 performance tweaks I can try. If you need more info let me know too.

 Thanks,
 Matt C.




 --
 Hervé COMMOWICK, EXOSEC (http://www.exosec.fr/)
 ZAC des Metz - 3 Rue du petit robinson - 78350 JOUY EN JOSAS
 Tel: +33 1 30 67 60 65  -  Fax: +33 1 75 43 40 70
 mailto:hcommow...@exosec.fr





Re: HAProxy Response time performance

2011-06-09 Thread Matt Christiansen
Hi Willy,

I agree the haproxy logs show that, but we also monitor the time spent
processing the request, which takes into account GC, reading data off
the FS and a number of things inside the app, and I see no 3-sec times
in there or anything near it. Also, I have no 3-sec outliers in the output
from my test, so it seems a little weird that it says 3 secs. Also, I have
the connection limits set really high to prevent queueing for now; we
usually only have around 1000-2000 connections open.

uname -a

Linux 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010 x86_64
x86_64 x86_64 GNU/Linux

haproxy -vv

HA-Proxy version 1.4.15 2011/04/08
Copyright 2000-2010 Willy Tarreau w...@1wt.eu

Build options :
 TARGET  = linux26
 CPU = generic
 CC  = gcc
 CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
 OPTIONS = USE_PCRE=1

Default settings :
 maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes

Available polling systems :
sepoll : pref=400,  test result OK
 epoll : pref=300,  test result OK
  poll : pref=200,  test result OK
select : pref=150,  test result OK
Total: 4 (4 usable), will use sepoll.

My config

global
   maxconn 20
   stats socket /var/run/haproxy.stat mode 600
   pidfile /var/run/haproxy.pid
   daemon

defaults
   mode http
   timeout client 8000
   timeout server 5000
   timeout connect 5000
   timeout queue 5000
   option http-server-close
   option forwardfor
   option tcp-smart-connect
   option tcp-smart-accept
   balance roundrobin

frontend recs-west-cost
   bind 
   maxconn 10
   default_backend runtimes

frontend recs-central
   bind XXX
   maxconn 10
   default_backend runtimes


frontend ssl-recs
   bind 127.0.0.2:8443
   maxconn 10
   default_backend ssl-runtimes

backend runtimes
   mode http
   http-check  disable-on-404
   http-check  expect status 200
   option  httpchk /rrserver/healthcheck

   server  sf-rt-101 10.108.0.101:8101 weight 1 check
   server  sf-rt-102 10.108.0.102:8101 weight 1 check
   server  sf-rt-103 10.108.0.103:8101 weight 1 check
   server  sf-rt-104 10.108.0.104:8101 weight 1 check
   server  sf-rt-105 10.108.0.105:8101 weight 1 check
   server  sf-rt-106 10.108.0.106:8101 weight 1 check
   server  sf-rt-107 10.108.0.107:8101 weight 1 check
   server  sf-rt-108 10.108.0.108:8101 weight 1 check
   server  sf-rt-109 10.108.0.109:8101 weight 1 check
   server  sf-rt-110 10.108.0.110:8101 weight 1 check
   server  sf-rt-111 10.108.0.111:8101 weight 1 check
   server  sf-rt-141 10.108.0.141:8101 weight 1 check
   server  sf-rt-142 10.108.0.142:8101 weight 1 check


backend ssl-runtimes
   mode http
   http-check  disable-on-404
   http-check  expect status 200
   option  httpchk /rrserver/healthcheck

   server  sf-rt-101 10.108.0.101:8151 weight 1 check
   server  sf-rt-102 10.108.0.102:8151 weight 1 check
   server  sf-rt-103 10.108.0.103:8151 weight 1 check
   server  sf-rt-104 10.108.0.104:8151 weight 1 check
   server  sf-rt-105 10.108.0.105:8151 weight 1 check
   server  sf-rt-106 10.108.0.106:8151 weight 1 check
   server  sf-rt-107 10.108.0.107:8151 weight 1 check
   server  sf-rt-108 10.108.0.108:8151 weight 1 check
   server  sf-rt-109 10.108.0.109:8151 weight 1 check
   server  sf-rt-110 10.108.0.110:8151 weight 1 check
   server  sf-rt-111 10.108.0.111:8151 weight 1 check
   server  sf-rt-141 10.108.0.141:8151 weight 1 check
   server  sf-rt-142 10.108.0.142:8151 weight 1 check

userlist UsersFor_HAProxyStatistics
   group admin users XXX
   user X
   user X

listen stats *:9000
   mode http
   stats enable
   option contstats
   stats uri /haproxy_stats
   stats show-node
   stats show-legends
   acl AuthOkay_ReadOnly http_auth(UsersFor_HAProxyStatistics)
   acl AuthOkay_Admin http_auth_group(UsersFor_HAProxyStatistics) admin
   stats http-request auth realm HAProxy-Statistics unless AuthOkay_ReadOnly
   stats admin if AuthOkay_Admin

Thanks,
Matt C.

On Thu, Jun 9, 2011 at 1:01 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Matt,

 On Thu, Jun 09, 2011 at 11:37:00AM -0700, Matt Christiansen wrote:
 I turned on those two options and seemed to help a little.

 We don't have a 2.6.30+ kernel so I don't believe option
 splice-response will work(?). Thats one of the things I'm going to try
 next.

 Splicing is OK since 2.6.27.something. But it will not affect the
 time distribution at all.

 I used halog to narrow down the sample, it was still a few 100 lines
 so I picked three at random.

 Jun  1 14:19:59 localhost haproxy[3124]: 76.102.107.85:28023
 [01/Jun/2011:14:19:48.502] recs runtimes/sf-102 8062/0/0/3123/+11185
 200 +814 - -  1267/1267/18/14/0 0/0 {Apache-Coyote/1.1|3827|||}
 Jun  1 14:19:09 localhost haproxy[3124]: 96.229.202.77:56011
 [01/Jun/2011:14:19:00.861] recs runtimes/sf-103 4982/0/0/3956/+8938
 200 +426

Re: HAProxy Response time performance

2011-06-09 Thread Matt Christiansen
I added in tune.bufsize 65536 and right away things got better; I
doubled that to 131072 and all of the outliers went away. Set at that,
with my tests it looks like haproxy is faster than nginx on 95% of
responses and on par with nginx for the last 5%, which is fine with me
=).

What is the downside to setting this as high as that? If it's just RAM
usage, all of our LBs have 16GB of RAM (don't ask why), so if that's all,
I don't think it will be an issue having it so high.

Matt C.

On Thu, Jun 9, 2011 at 2:11 PM, Willy Tarreau w...@1wt.eu wrote:
 Hi Matt,

 On Thu, Jun 09, 2011 at 01:50:11PM -0700, Matt Christiansen wrote:
 Hi Willy,

 I agree the haproxy logs show that, but we also monitor the time spent
 processing the request which takes in to account, GC, reading data off
 the FS and a number of things inside the app and I see no 3sec times
 in there or anything near it. Also I have no 3 sec outliers in output
 from my test so that seems a little weird it says 3secs.

 What I really hate about 3 sec is that it's the common TCP retransmit time,
 and normally it indicates packet losses. I had implicitly excluded that
 possibility since it runs well with nginx on the same machine, but still
 that possibility must not be ruled out.

 Still, the time measured by application servers generally does not include
 the time spent in queues, so you should be very careful with this. For all
 components there will always be an unmonitored area. For instance, haproxy
 cannot know the time spent by the request in the system's backlog, which can
 be huge under a syn flood attack or when maxconn is too low.

 Also I have
 the connections set really high to prevent queueing for now, we
 usually only have around 1000-2000 connections open.

 uname -a

 Linux 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010 x86_64
 x86_64 x86_64 GNU/Linux

 OK, RH5 so I agree you won't do TCP splicing on this one.

 Could you check if the number of TCP retransmits increases between two
 runs (with netstat -s) ? It's worth archiving a full copy before and
 after the dump in order to focus on things we could discover there.

 Also, would you happen to have nf_conntrack running (check with lsmod) ?
 When this is the case, we always have very ugly results, but it mainly
 affects connect times and in your case I saw large response times too.

 haproxy -vv

 HA-Proxy version 1.4.15 2011/04/08
 Copyright 2000-2010 Willy Tarreau w...@1wt.eu

 Build options :
   TARGET  = linux26
   CPU     = generic
   CC      = gcc
   CFLAGS  = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing
   OPTIONS = USE_PCRE=1

 Everything's fine here.
 (...)

 My config

 Everything OK here too. You said that numbers slightly improved
 with tcp-smart-accept and tcp-smart-connect. Normally it can be
 caused by congested network or by losses. What really puzzles me
 is that while those issues are very common, I don't see why they
 wouldn't show up with nginx too.

 Oh one thing I forgot which can make a difference : buffer sizes.
 The larger the buffer, the smoother losses will be absorbed because
 they'll induce fewer timeouts/RTTs. I don't know what size nginx
 uses, but I remember it has dynamic buffer sizes. Haproxy defaults
 to 16 kB. You can try to increase to 64 kB and see if it changes
 anything :

   global
        tune.bufsize 65536

 Maybe you should run a tcpdump between haproxy and the server, or
 even better, on the haproxy machine AND on one of the servers (you
 can disable a number of servers if it's a test config). That way
 we'll know how the response time spreads around.

 Regards,
 Willy





Using reqirep causes 504 timeout

2011-05-31 Thread Matt Beckman
I am trying to rewrite a single path via HAProxy. Without the rule in
place, a 404 is returned for the /directory page (from an IIS node).
Once I add the rule, the page returns a 504 Gateway Timeout after a
minute or so.

backend Example_Backend
balance uri
reqirep ^([^\ ]*)\ (.*)/directory/?[\ ]\1\ \2/target.asp\
server Server1 x.x.x.xx4:80 weight 25 check fall 10 rise 2 maxconn 40
server Server2 x.x.x.xx5:80 weight 25 check fall 10 rise 2 maxconn 40
server Server3 x.x.x.xx6:80 weight 25 check fall 10 rise 2 maxconn 40
server Server4 x.x.x.xx7:80 weight 25 check fall 10 rise 2 maxconn 40

I have tested going directly to /target.asp (works as expected), and
confirmed that /directory returns a 404 when the rule is not in place.
Is there anything wrong with the way I'm using reqirep?

haproxy[25957]: x.x.x.100:50941 [31/May/2011:14:53:24.836]
Example_Frontend Example_Backend/Server4 77/0/3/-1/50083 504 194 - -
sH-- 1/1/1/1/0 0/0 GET /directory/ HTTP/1.1
haproxy[25957]: x.x.x.100:50942 [31/May/2011:14:53:26.124]
Example_Frontend Example_Backend/Server4 78/0/5/-1/50087 504 194 - -
sH-- 0/0/0/0/0 0/0 GET /directory HTTP/1.1

Thanks!

- Matt



Re: Using reqirep causes 504 timeout

2011-05-31 Thread Matt Beckman
Nice! It works. Thanks!

- Matt

On Tue, May 31, 2011 at 4:01 PM, Cyril Bonté cyril.bo...@free.fr wrote:
 Hi Matt,

 Le mercredi 1 juin 2011 00:46:16, Matt Beckman a écrit :
 I am trying to rewrite a single path via HAProxy. Without the rule in
 place, a 404 is returned for the /directory page (from an IIS node).
 Once I add the rule, the page returns a 504 Gateway Timeout after a
 minute or so.

 backend Example_Backend
     balance uri
     reqirep ^([^\ ]*)\ (.*)/directory/?[\ ]    \1\ \2/target.asp\

 Your regexp removes the protocol part (HTTP/1.0, HTTP/1.1) in the request
 line.
 Can you retry with something like this ?
 reqirep ^([^\ ]*)\ (.*)/directory/?\ (.*) \1\ \2/target.asp\ \3

 --
 Cyril Bonté
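Putting Cyril's corrected expression back into the backend from the original post, the section would read as follows (a sketch; the server lines are abbreviated to one, using the masked addresses from the thread):

```haproxy
backend Example_Backend
    balance uri
    # capture and re-emit the trailing " HTTP/1.x" so the protocol part survives
    reqirep ^([^\ ]*)\ (.*)/directory/?\ (.*) \1\ \2/target.asp\ \3
    server Server1 x.x.x.xx4:80 weight 25 check fall 10 rise 2 maxconn 40
```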




Using ratelimit as shown on serverfault.com

2010-09-08 Thread Matt
Hi guys,

I'm trying out the rate limit feature in 1.5-dev2.  My config is
below.  It appears to work in the sense that after the limit the
connection is dropped, but I actually want the connection to go to the
error backend rather than reaching the webserver backend and getting
dropped.  I'm guessing my logic in the frontend config is wrong rather
than it being a bug.

Thanks,

Matt

defaults
mode   http
option  httplog
option  log-separate-errors
option  httpchk HEAD /available HTTP/1.0
monitor-uri /haproxy_test
option  allbackups
http-check disable-on-404
retries 3
option  redispatch
maxconn 2000
timeout connect 5s
timeout client  60s
timeout server  60s
timeout http-request 10s
timeout http-keep-alive 2s
timeout check 10s
frontend ha-01-apache *:80
log 127.0.0.1:516   local0 info
option http-pretend-keepalive

stick-table type ip size 200k expire 10m store gpc0
acl source_is_abuser src_get_gpc0(http) gt 0
use_backend error if source_is_abuser
tcp-request connection track-sc1 src if ! source_is_abuser

acl apache_01 hdr_sub(host) -i example.com
use_backend webserver if apache_01
backend webserver
log 127.0.0.1:516   local0 info
option http-server-close

stick-table type ip size 200k expire 30s store conn_rate(100s)
tcp-request content track-sc2 src
acl conn_rate_abuse sc2_conn_rate gt 5
acl mark_as_abuser sc1_inc_gpc0 gt 0
tcp-request content reject if conn_rate_abuse mark_as_abuser

server apache 127.0.0.1:81 check inter 15s rise 2 fall 2
backend error
errorfile 503 /etc/haproxy/errorfiles/503.http



Re: Using ratelimit as shown on serverfault.com

2010-09-08 Thread Matt
Okay, think I found it:

- acl source_is_abuser src_get_gpc0(http) gt 0
+ acl source_is_abuser sc1_get_gpc0(http) gt 0
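Applied to the frontend, the fix reads as follows (a sketch of the relevant lines only; the fetch name and its (http) table argument are taken verbatim from the correction above, and that argument should presumably name the proxy holding the gpc0 stick-table):

```haproxy
frontend ha-01-apache *:80
    stick-table type ip size 200k expire 10m store gpc0
    acl source_is_abuser sc1_get_gpc0(http) gt 0
    use_backend error if source_is_abuser
    tcp-request connection track-sc1 src if ! source_is_abuser
```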

On 8 September 2010 17:56, Matt mattmora...@gmail.com wrote:
 Hi guys,

 I'm trying out the rate limit feature in 1.5-dev2.  My config is
 below.  It appears to work in the sense that after the limit the
 connection is dropped, but I actually want the connection to go to the
 error backend, rather than the webserver backend and get dropped.  I'm
 guessing my logic in the frontend config is wrong rather than it being
 a bug.

 Thanks,

 Matt

 defaults
        mode   http
        option  httplog
        option  log-separate-errors
        option  httpchk HEAD /available HTTP/1.0
        monitor-uri /haproxy_test
        option  allbackups
        http-check disable-on-404
        retries         3
        option  redispatch
        maxconn         2000
        timeout connect 5s
        timeout client  60s
        timeout server  60s
        timeout http-request 10s
        timeout http-keep-alive 2s
        timeout check 10s
 frontend ha-01-apache *:80
        log 127.0.0.1:516   local0 info
        option http-pretend-keepalive

        stick-table type ip size 200k expire 10m store gpc0
        acl source_is_abuser src_get_gpc0(http) gt 0
        use_backend error if source_is_abuser
        tcp-request connection track-sc1 src if ! source_is_abuser

        acl apache_01 hdr_sub(host) -i example.com
        use_backend webserver if apache_01
 backend webserver
        log 127.0.0.1:516   local0 info
        option http-server-close

        stick-table type ip size 200k expire 30s store conn_rate(100s)
        tcp-request content track-sc2 src
        acl conn_rate_abuse sc2_conn_rate gt 5
        acl mark_as_abuser sc1_inc_gpc0 gt 0
        tcp-request content reject if conn_rate_abuse mark_as_abuser

        server apache 127.0.0.1:81 check inter 15s rise 2 fall 2
 backend error
        errorfile 503 /etc/haproxy/errorfiles/503.http




where to start with 503 errors

2010-07-28 Thread Matt Banks
OK, this is somewhat funny, but I'm mostly done with this email and a VERY 
similar-sounding problem was just asked a few minutes ago...

All,

Long story short(ish):

We put haproxy in front of a few servers that generate dynamic pages from a 
database.  Here's a crude description of the setup:

HAProxy -> 2 to 10 Apache servers -> Gateway (connection to db) -> Local 
caching database server ---(LAN or WAN)-> Database

The point is that if the page is cached, the local caching db server will reply 
very fast.  If not, it may take a few seconds to respond.

We've also found that we basically HAVE to use keep-alive (e.g. an image 
takes well under a second to load without HAProxy and perhaps .5 to 1.5 seconds 
with keepalive on, whereas with keepalive off the same image on the same page 
takes 12-18 seconds), if that makes a difference.

Here's where things get a bit... tricky?

We have httpchk disabled.  This is essentially because it's not working for 
us - at least not how we'd like it to.  In a nutshell, we're getting a LOT of 
false positives where a server is listed as going down, or as down, when in 
reality a non-cached page was simply taking a couple of seconds (probably 3-5, 
but definitely less than 10) to load.

The point is, we get several 503 errors throughout the day.  And they appear to 
be random.  Apache never goes down nor reports an error.  Frankly, I think 
what's happening is that haproxy is hitting a server which takes too long to 
respond, so it tries another server (which also doesn't have the page cached) 
and goes through the list until it gives up and reports a 503.

Meanwhile, if you go directly to the page on the Apache server, it loads fine.  
Or if you re-load using HAProxy, it works fine as well.

I'm just wondering where to start with this.  We have several sites 
experiencing the same problem, but since we're using roughly the same setup for 
each one, I'm not opposed to saying it could be how we have HAProxy set up.

TIA.

keep-alive time out error code

2010-05-19 Thread Matt
Hi all,

I'm currently seeing requests hitting a 10s http-request timeout in my logs
but not logging the cR code.  I'm thinking this is because the timeout being
hit is actually the http-keep-alive one, which I haven't set and which
therefore falls back to the http-request timeout.

Do people think it's a good idea to log a code for when the http-keep-alive
timeout strikes? Or is this seen as such a common thing that it's not worth
doing?

I'm going to set the http-keep-alive timeout explicitly so I can be sure it
is separated from the http-request timeouts.
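The explicit setting mentioned above would look something like this (a sketch; the values are illustrative, and when timeout http-keep-alive is absent, the http-request timeout is what applies between requests):

```haproxy
defaults
    timeout http-request    10s
    timeout http-keep-alive 2s   # separate idle-between-requests timeout from request timeout
```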

Thanks,

Matt


Solaris x86 tuning...

2010-05-19 Thread Matt Banks
All,

In a nutshell, we REALLY like HAProxy.  We've been using it on RHEL/CentOS for a 
while with great success (running under VMware/vSphere.)  However, most of what 
we do is under Solaris, and we're finding that we don't get nearly as good 
results running under Solaris 10 x86.  We've compiled it using gcc 3 and gcc 4; 
we've tried with USE_STATIC_PCRE=1 and without (with it proved better). We've 
even tried tweaking some of the ndd settings (rather blindly after a google 
search gave us this: 
http://serverfault.com/questions/134578/solaris-tcp-stack-tuning) to no avail.  
We've tried it in a zone with up to 1GB of RAM, and directly on the server 
itself pointing to 127.0.0.1.  Things are just slower.  They work, but slowly.

Frankly, we're baffled.  Using a backend of two servers, there are delays of up 
to 5 seconds over a direct connection to the apache server itself.  An offsite 
RHEL version of HAProxy (with a latency of around 30ms) provided us MUCH faster 
results than any Solaris install has.

Is there something we're missing?  We're about to the point of invoking dtrace 
to dig into what's going on, but I just wanted to make sure we weren't missing 
something obvious...

Thanks,
matt

Re: Downgrade backend request/response to HTTP/1.0

2010-05-04 Thread Matt
On 4 May 2010 20:43, Holger Just hapr...@meine-er.de wrote:

 Hi Dave,

 On 2010-05-04 18:55, Dave Pascoe wrote:
  Is there a way in haproxy 1.4 to perform the equivalent function that
  these Apache directives perform?
 
   SetEnv downgrade-1.0 1
   SetEnv force-response-1.0 1
 
  i.e., force haproxy to downgrade to HTTP/1.0 even though the client is
  HTTP/1.1

 I'm not really sure what you are trying to achieve with this (as you
 should really reconsider using software which does not understand HTTP
 1.1 nowadays), but you could force the HTTP version using the following
 statements:

 # replace the HTTP version in the request
 reqrep ^(.*)\ HTTP/[^\ ]+$ \1\ HTTP/1.0


I had to do this for a short period for some Jetty backends; it worked well,
causing the Jetty servers to respond with an HTTP/1.0 response.
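To mirror Apache's force-response-1.0 on the way back as well, the response status line can be rewritten the same way with rsprep (a sketch, untested; rsprep operates on the response just as reqrep does on the request):

```haproxy
# request side: present the request to the server as HTTP/1.0
reqrep ^(.*)\ HTTP/[^\ ]+$ \1\ HTTP/1.0
# response side: present the response to the client as HTTP/1.0
rsprep ^HTTP/[^\ ]+\ (.*) HTTP/1.0\ \1
```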

Matt


1.4.4 option http-pretend-keepalive not available in backend

2010-04-26 Thread Matt
The doc says that http-pretend-keepalive can be set in a backend only, but
when haproxy is reloaded with this option set :-

'http-pretend-keepalive' ignored because backend  has no frontend
capability.

Matt


Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 6 April 2010 19:43, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Apr 06, 2010 at 11:42:53AM +0100, Matt wrote:
  Hi all,
 
  Using HA-Proxy version 1.3.19 2009/07/27.  Set-up is HA-Proxy balancing a
  pool of Jetty servers.
 
  We had a tomcat application using keep-alive that was having issues (kept
 on
  opening many connections), so to stop that and other clients getting the
  same problem we used the option httpclose which fixed the problem.
 
  This though has added another issue when using digest authentication with
  curl.  When sending to the HA-Proxy IP:-
 
  **request**
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ...
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
 
  **response**
   HTTP/1.1 100 Continue
   Connection: close
  * Empty reply from server
  * Closing connection #0
  curl: (52) Empty reply from server
 
  It looks like HA-Proxy is sending 100-continue and not 401 and adding the
  connection closed header.  If I use curl with the --http1.0 option, then
 it
  works as expected, but I guess this is forcing Jetty to work in http 1.0
  mode.

 This was fixed in 1.3.23 and 1.3.24. The issue is not what you describe
 above.
 What happens is that the client sends the Expect: 100-continue header,
 which
 is forwarded to the server. The server then replies with HTTP/1.1 100
 Continue
 and haproxy adds the Connection: close response there. Strictly speaking,
 both
 curl and haproxy are incorrect here :
  - haproxy should not add any header on a 100-continue response
  - libcurl should ignore any header in a 100-continue response.

 But the reality is that both do probably not consider the 100-continue
 response as a special case, which it is.

 There is nothing you can do with the configuration to fix this, you should
 really update your version (also other annoying issues have been fixed
 since
 1.3.19). Either you install 1.3.24 (or 1.3.23 if you don't find 1.3.24 yet
 for
 your distro), or you can switch to 1.4.3.

 Well, maybe if you remove option httpclose and replace it with
 reqadd Connection:\ close, without the corresponding rspadd, it could
 work,
 if you don't have anything else touching the response (no cookie insertion,
 ...).
 This would rely on the server to correctly close the response. But it would
 be
 an awful hack.

  When using apache in front of HA-Proxy with both force-proxy-request-1.0
 and
  proxy-nokeepalive the request is successful.

 This is because the Expect header appeared in 1.1, so the client cannot use
 it
 if you force the request as 1.0.

 Thanks, I'll test 1.3.23/24 in our lab

Matt


Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 6 April 2010 19:43, Willy Tarreau w...@1wt.eu wrote:

 On Tue, Apr 06, 2010 at 11:42:53AM +0100, Matt wrote:
  Hi all,
 
  Using HA-Proxy version 1.3.19 2009/07/27.  Set-up is HA-Proxy balancing a
  pool of Jetty servers.
 
  We had a tomcat application using keep-alive that was having issues (kept
 on
  opening many connections), so to stop that and other clients getting the
  same problem we used the option httpclose which fixed the problem.
 
  This though has added another issue when using digest authentication with
  curl.  When sending to the HA-Proxy IP:-
 
  **request**
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ...
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
 
  **response**
   HTTP/1.1 100 Continue
   Connection: close
  * Empty reply from server
  * Closing connection #0
  curl: (52) Empty reply from server
 
  It looks like HA-Proxy is sending 100-continue and not 401 and adding the
  connection closed header.  If I use curl with the --http1.0 option, then
 it
  works as expected, but I guess this is forcing Jetty to work in http 1.0
  mode.

 This was fixed in 1.3.23 and 1.3.24. The issue is not what you describe
 above.
 What happens is that the client sends the Expect: 100-continue header,
 which
 is forwarded to the server. The server then replies with HTTP/1.1 100
 Continue
 and haproxy adds the Connection: close response there. Strictly speaking,
 both
 curl and haproxy are incorrect here :
  - haproxy should not add any header on a 100-continue response
  - libcurl should ignore any header in a 100-continue response.

 But the reality is that both do probably not consider the 100-continue
 response as a special case, which it is.

 There is nothing you can do with the configuration to fix this, you should
 really update your version (also other annoying issues have been fixed
 since
 1.3.19). Either you install 1.3.24 (or 1.3.23 if you don't find 1.3.24 yet
 for
 your distro), or you can switch to 1.4.3.

 Well, maybe if you remove option httpclose and replace it with
 reqadd Connection:\ close, without the corresponding rspadd, it could
 work,
 if you don't have anything else touching the response (no cookie insertion,
 ...).
 This would rely on the server to correctly close the response. But it would
 be
 an awful hack.

  When using apache in front of HA-Proxy with both force-proxy-request-1.0
 and
  proxy-nokeepalive the request is successful.

 This is because the Expect header appeared in 1.1, so the client cannot use
 it
 if you force the request as 1.0.

 On second thoughts I don't think this is going to work.  If 1.3.24 is the
same as 1.4.3, I'm getting an error on the first request, not the challenge,
when using 1.4.3 and option httpclose or option http-server-close.

When using curl :-
* Server auth using Digest with user 'su'
 PUT . HTTP/1.1
 User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
 Host: ..
 Accept: */*
 content-type:application/xml
 Content-Length: 0
 Expect: 100-continue

 HTTP/1.1 100 Continue
* HTTP 1.0, assume close after body
 HTTP/1.0 502 Bad Gateway
 Cache-Control: no-cache
 Connection: close
 Content-Type: text/html

<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
* Closing connection #0

The Jetty server throws an exception :-
HTTP/1.1 PUT
Request URL: http://..
User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
Host: 
Accept: */*
Content-Type: application/xml
Content-Length: 0
Expect: 100-continue
X-Forwarded-For: ...
Connection: close
Querystring: null
-ERROR Authenticator Authenticator caught IO Error when trying
to authenticate user!
org.mortbay.jetty.EofException
org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:565)
org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:904)
org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:633)
org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:586)
org.mortbay.jetty.security.DigestAuthenticator.authenticate(DigestAuthenticator.java:131)
...
Caused by: java.nio.channels.ClosedChannelException
...

HA Proxy debug:-
accept(0007)=0008 from [...:49194]
clireq[0008:]: PUT ... HTTP/1.1
clihdr[0008:]: User-Agent: curl/7.19.5 (i486-pc-linux-gnu)
libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
clihdr[0008:]: Host: 
clihdr[0008:]: Accept: */*
clihdr[0008:]: content-type:application/xml
clihdr[0008:]: Content-Length: 0
clihdr[0008:]: Expect: 100-continue
srvrep[0008:0009]: HTTP/1.1 100 Continue
srvcls[0008:0009]
clicls[0008:0009]
closed[0008:0009]

Making sure that both httpclose and http-server-close are absent causes the
requests to work.

Changing HA Proxy return codes

2010-04-07 Thread Matt
I'm guessing the answer is no, as I'm unable to find anything in the
documentation that suggests otherwise, but...

If I wanted to change the error return code emitted by haproxy (not the
backend server), is this possible? i.e. change haproxy to return a 502 when
it would otherwise return a 504?

I know the current return codes are correct as of
http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html - I could just do
with this short hack.

Thanks,

Matt


Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
Hi Willy,

That trace is from curl using --verbose, looks like one empty line after
Expect: 100-continue

Here using --trace-ascii it definitely looks like an empty line after

00b7: content-type:application/xml
00d9: Content-Length: 0
00ec: Expect: 100-continue
0102:
== Info: HTTP 1.0, assume close after body
= Recv header, 26 bytes (0x1a)
: HTTP/1.0 502 Bad Gateway
= Recv header, 25 bytes (0x19)
: Cache-Control: no-cache
= Recv header, 19 bytes (0x13)
: Connection: close
= Recv header, 25 bytes (0x19)
: Content-Type: text/html
= Recv header, 2 bytes (0x2)
:
= Recv data, 107 bytes (0x6b)
: <html><body><h1>502 Bad Gateway</h1>.The server returned an inva
0040: lid or incomplete response..</body></html>.
<html><body><h1>502 Bad Gateway</h1>
The server returned an invalid or incomplete response.
</body></html>
== Info: Closing connection #0

I'll try the latest snapshot now.

Thanks,

Matt

On 7 April 2010 13:44, Willy Tarreau w...@1wt.eu wrote:

 Hi Matt,

 On Wed, Apr 07, 2010 at 11:10:58AM +0100, Matt wrote:
  When using curl :-
  * Server auth using Digest with user 'su'
   PUT . HTTP/1.1
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ..
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
  
   HTTP/1.1 100 Continue
  * HTTP 1.0, assume close after body
   HTTP/1.0 502 Bad Gateway
   Cache-Control: no-cache
   Connection: close
   Content-Type: text/html
 (...)

 Where was this trace caught ? Are you sure there was no empty line after
 the HTTP/1.1 100 Continue ? That would be a protocol error, but maybe
 it's just an interpretation of the tool used to dump the headers.

  The Jetty server throws an exception :-
  HTTP/1.1 PUT
  Request URL: http://..
  User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
  Host: 
  Accept: */*
  Content-Type: application/xml
  Content-Length: 0
  Expect: 100-continue
  X-Forwarded-For: ...
  Connection: close
  Querystring: null
  -ERROR Authenticator Authenticator caught IO Error when
 trying
  to authenticate user!
  org.mortbay.jetty.EofException
  org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
 
 org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:565)
  org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:904)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:633)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:586)
 
 org.mortbay.jetty.security.DigestAuthenticator.authenticate(DigestAuthenticator.java:131)
  ...
  Caused by: java.nio.channels.ClosedChannelException
  ...
 
  HA Proxy debug:-
  accept(0007)=0008 from [...:49194]
  clireq[0008:]: PUT ... HTTP/1.1
  clihdr[0008:]: User-Agent: curl/7.19.5 (i486-pc-linux-gnu)
  libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
  clihdr[0008:]: Host: 
  clihdr[0008:]: Accept: */*
  clihdr[0008:]: content-type:application/xml
  clihdr[0008:]: Content-Length: 0
  clihdr[0008:]: Expect: 100-continue
  srvrep[0008:0009]: HTTP/1.1 100 Continue
  srvcls[0008:0009]
  clicls[0008:0009]
  closed[0008:0009]
 
  Making sure that both httpclose and http-server-close are absent causes
 the
  requests to work.

 This would make me think about another funny behaviour in the server,
 related to Connection: close. Could you try latest 1.4 snapshot and
 add option http-pretend-keepalive ? It is possible that the server
 disables handling of the 100-continue when it sees a close (which is
 not related at all, but since this is the only difference, we can think
 about another home-made HTTP implementation).

 Regards,
 Willy




Re: issue with using digest with jetty backends

2010-04-07 Thread Matt
On 7 April 2010 17:16, Matt mattmora...@gmail.com wrote:

 Hi Willy,

 That trace is from curl using --verbose, looks like one empty line after
 Expect: 100-continue

 Here using --trace-ascii it definitely looks like an empty line after

 00b7: content-type:application/xml
 00d9: Content-Length: 0
 00ec: Expect: 100-continue
 0102:
 == Info: HTTP 1.0, assume close after body
 = Recv header, 26 bytes (0x1a)
 : HTTP/1.0 502 Bad Gateway
 = Recv header, 25 bytes (0x19)
 : Cache-Control: no-cache
 = Recv header, 19 bytes (0x13)
 : Connection: close
 = Recv header, 25 bytes (0x19)
 : Content-Type: text/html
 = Recv header, 2 bytes (0x2)
 :
 = Recv data, 107 bytes (0x6b)
 : <html><body><h1>502 Bad Gateway</h1>.The server returned an inva
 0040: lid or incomplete response..</body></html>.
 <html><body><h1>502 Bad Gateway</h1>
 The server returned an invalid or incomplete response.
 </body></html>
 == Info: Closing connection #0

 I'll try the latest snapshot now.

 Thanks,

 Matt

 On 7 April 2010 13:44, Willy Tarreau w...@1wt.eu wrote:

 Hi Matt,

 On Wed, Apr 07, 2010 at 11:10:58AM +0100, Matt wrote:
  When using curl :-
  * Server auth using Digest with user 'su'
   PUT . HTTP/1.1
   User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
   Host: ..
   Accept: */*
   content-type:application/xml
   Content-Length: 0
   Expect: 100-continue
  
   HTTP/1.1 100 Continue
  * HTTP 1.0, assume close after body
   HTTP/1.0 502 Bad Gateway
   Cache-Control: no-cache
   Connection: close
   Content-Type: text/html
 (...)

 Where was this trace caught ? Are you sure there was no empty line after
 the HTTP/1.1 100 Continue ? That would be a protocol error, but maybe
 it's just an interpretation of the tool used to dump the headers.

  The Jetty server throws an exception :-
  HTTP/1.1 PUT
  Request URL: http://..
  User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5
 OpenSSL/0.9.8g
  zlib/1.2.3.3 libidn/1.15
  Host: 
  Accept: */*
  Content-Type: application/xml
  Content-Length: 0
  Expect: 100-continue
  X-Forwarded-For: ...
  Connection: close
  Querystring: null
  -ERROR Authenticator Authenticator caught IO Error when
 trying
  to authenticate user!
  org.mortbay.jetty.EofException
  org.mortbay.jetty.HttpGenerator.flush(HttpGenerator.java:760)
 
 org.mortbay.jetty.AbstractGenerator$Output.flush(AbstractGenerator.java:565)
  org.mortbay.jetty.HttpConnection$Output.flush(HttpConnection.java:904)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:633)
 
 org.mortbay.jetty.AbstractGenerator$Output.write(AbstractGenerator.java:586)
 
 org.mortbay.jetty.security.DigestAuthenticator.authenticate(DigestAuthenticator.java:131)
  ...
  Caused by: java.nio.channels.ClosedChannelException
  ...
 
  HA Proxy debug:-
  accept(0007)=0008 from [...:49194]
  clireq[0008:]: PUT ... HTTP/1.1
  clihdr[0008:]: User-Agent: curl/7.19.5 (i486-pc-linux-gnu)
  libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
  clihdr[0008:]: Host: 
  clihdr[0008:]: Accept: */*
  clihdr[0008:]: content-type:application/xml
  clihdr[0008:]: Content-Length: 0
  clihdr[0008:]: Expect: 100-continue
  srvrep[0008:0009]: HTTP/1.1 100 Continue
  srvcls[0008:0009]
  clicls[0008:0009]
  closed[0008:0009]
 
  Making sure that both httpclose and http-server-close are absent causes
 the
  requests to work.

 This would make me think about another funny behaviour in the server,
 related to Connection: close. Could you try latest 1.4 snapshot and
 add option http-pretend-keepalive ? It is possible that the server
 disables handling of the 100-continue when it sees a close (which is
 not related at all, but since this is the only difference, we can think
 about another home-made HTTP implementation).

 Regards,
 Willy


Latest snapshot on 1.4.3

- option http-pretend-keepalive  - works
- option http-pretend-keepalive and httpclose - same behaviour as before,
errors
- option http-server-close - same behaviour as before, errors
- option http-server-close and http-pretend-keepalive - works

What exactly does pretend-keepalive do?

Thanks,

Matt
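For reference, as documented for the 1.4 branch, option http-pretend-keepalive makes haproxy omit the Connection: close header it would otherwise add to the request in server-close mode; the server believes the connection is keep-alive and answers normally, while haproxy still closes the server connection itself once the response is received. A minimal sketch (the backend and server names here are hypothetical):

```haproxy
backend jetty_pool
    option http-server-close        # haproxy manages the connection close itself
    option http-pretend-keepalive   # but does not announce "Connection: close" to the server
    server jetty1 127.0.0.1:8080 check
```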


issue with using digest with jetty backends

2010-04-06 Thread Matt
Hi all,

Using HA-Proxy version 1.3.19 2009/07/27.  Set-up is HA-Proxy balancing a
pool of Jetty servers.

We had a tomcat application using keep-alive that was having issues (kept on
opening many connections), so to stop that and other clients getting the
same problem we used the option httpclose which fixed the problem.

This though has added another issue when using digest authentication with
curl.  When sending to the HA-Proxy IP:-

**request**
 User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
 Host: ...
 Accept: */*
 content-type:application/xml
 Content-Length: 0
 Expect: 100-continue

**response**
 HTTP/1.1 100 Continue
 Connection: close
* Empty reply from server
* Closing connection #0
curl: (52) Empty reply from server

It looks like HA-Proxy is sending 100-continue and not 401 and adding the
connection closed header.  If I use curl with the --http1.0 option, then it
works as expected, but I guess this is forcing Jetty to work in http 1.0
mode.


When using apache in front of HA-Proxy with both force-proxy-request-1.0 and
proxy-nokeepalive the request is successful.
**request**
 User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
 Host: 
 Accept: */*
 content-type:application/xml
 Content-Length: 0
 Expect: 100-continue

 HTTP/1.1 401 Unauthorized
 Date: Tue, 06 Apr 2010 09:56:32 GMT
 WWW-Authenticate: Digest realm
 Content-Type: text/plain; charset=UTF-8
 Content-Length: 12
 Connection: close

* Closing connection #0
* Issue another request to this URL..
 User-Agent: curl/7.19.5 (i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g
zlib/1.2.3.3 libidn/1.15
 Host: ...
 Accept: */*
 content-type:application/xml
 Content-Length: 450
 Expect: 100-continue

 HTTP/1.1 100 Continue
 HTTP/1.1 200 OK
 Date: Tue, 06 Apr 2010 09:56:32 GMT
 Server: 
 Cache-Control: max-age=7200, must-revalidate
 Content-Length: 0
 Connection: close
 Content-Type: text/plain; charset=UTF-8

* Closing connection #0

It also works fine sending directly to Jetty, using http 1.1 and 1.0

So is it possible for me to get this working with 1.3? or will 1.4 support
of disabling keep-alive be a better option and I should try that?

Thanks,

Matt


Re: issue with using digest with jetty backends

2010-04-06 Thread Matt

 2010/4/6 Matt mattmora...@gmail.com:

   HTTP/1.1 100 Continue
   HTTP/1.1 200 OK

 Somehow this looks very odd to me :)
 Dunno if that helps, but we had problems with curl and digest
 authentication some time ago and solved it using

  curl --digest -H Expect: [...]

 but we might have used a very old (buggy) version of curl.
 Please let me know if that helps in your case.

Looking at the HTTP/1.1 docs, it looks normal.  The client expects a 100
Continue or a final status code; the server can respond with 100 Continue,
process, and then respond with a final code.

Adding the httpclose option causes haproxy to return the 100 but also add
Connection: close, which rightly causes the client (curl) not to wait for
the final code and to exit.

I rewrote the header to pass the request to Jetty in HTTP 1.0 with :-

reqirep ^(.*)(HTTP/1.1)(.*) \1HTTP/1.0\3
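
In context, a minimal 1.3-style section carrying that rewrite might look
like the following untested sketch (listener name, addresses and server are
hypothetical):

```haproxy
listen jetty-pool 0.0.0.0:8000
    mode http
    option httpclose
    # Downgrade the request line so Jetty sees HTTP/1.0 and the
    # Expect: 100-continue handshake is avoided
    reqirep ^(.*)(HTTP/1.1)(.*) \1HTTP/1.0\3
    server jetty01 192.168.1.10:8080
```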

And I'm currently testing this.  On the face of it, it appears to be
working, though it's slow, and now and again a request returns a 502, always
after the 401.  It looks like haproxy didn't get a response from the backend
:-

0017:frontend.clihdr[0008:]: User-Agent: curl/7.19.5
(i486-pc-linux-gnu) libcurl/7.19.5 OpenSSL/0.9.8g zlib/1.2.3.3 libidn/1.15
0017:frontend.clihdr[0008:]: Host: ...
0017:frontend.clihdr[0008:]: Accept: */*
0017:frontend.clihdr[0008:]: content-type:application/xml
0017:frontend.clihdr[0008:]: Content-Length: 448
0017:frontend.clihdr[0008:]: Expect: 100-continue
0017:pool-00.srvcls[0008:0009]
0017:pool-00.clicls[0008:0009]
0017:pool-00.closed[0008:0009]

I've run tests against the Jetty server with curl in HTTP/1.1 and HTTP/1.0
and never had an issue.  If I remove my header rewrite and httpclose, I get
the same throughput and success as if I were hitting Jetty directly.

I'm going to enable logging on Jetty and see what's being passed to it by
HAProxy when the failure happens.

Thanks,

Matt


Re: Problem getting http-server-close to work with Jetty backend

2010-04-06 Thread Matt
Patrik, does digest authentication work for you? I've just tried 1.4.3 and
got an org.mortbay.jetty.EofException when attempting digest auth with
http-server-close or httpclose set.

It works as expected if I don't set any of those options.

Matt

On 30 March 2010 13:34, Patrik Nilsson pat...@jalbum.net wrote:

 Sorry, forgot to mention what version I was using. This was with
 haproxy 1.4.2. I just tried with 1.4.3 and the problem remains.

 Thanks,

 Patrik

 On Tue, Mar 30, 2010 at 11:51 AM, Patrik Nilsson pat...@jalbum.net
 wrote:
  Hi,
 
  We have been trying to get the new keep-alive functionality, with the
  http-server-close option, to work with our Jetty back-end web servers.
  There seems to be something in the response from the Jetty servers
  that makes HaProxy always add a Connection: close header in the
  response to the client though.
 
  Running the same HaProxy configuration with an Apache backend works fine.
 
  I've included examples below showing the requests and responses when
  going directly to the backend server, bypassing haproxy, and then the
  same request going through haproxy, for the Apache and Jetty backends.
 
  One obvious difference in the response from the Apache server is that
  it includes explicit keep-alive headers, but if I understand the
  matrix in the connection-header.txt (included in doc/internals) that
  shouldn't matter - as long as the Jetty server doesn't send a
  Connection: Close, includes a Content-Length header and both client
  and server use http/1.1 HaProxy should not add a Connection: Close
  header in the response to the client.
 
  Any ideas what might be causing our problems?
 
  Thank you,
 
  Patrik
 
---
JETTY backend.
---
 
  Direct:
  ---
 
  *Request*
 
  GET /res/jalogo.png HTTP/1.1
  Host: jetty.jalbum.test
  User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US;
  rv:1.9.2) Gecko/20100115 Firefox/3.6 GTB6
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en-us,en;q=0.5
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 115
  Connection: keep-alive
  Pragma: no-cache
  Cache-Control: no-cache
 
  *Response*
 
  HTTP/1.1 200 OK
  Date: Mon, 29 Mar 2010 15:32:10 GMT
  Expires: Tue, 30 Mar 2010 15:32:10 GMT
  Content-Type: image/png
  Cache-Control: max-age=86400
  Last-Modified: Tue, 16 Mar 2010 10:55:16 GMT
  Accept-Ranges: bytes
  Content-Length: 7491
  Server: Jetty(6.1.21)
 
  Through HaProxy:
  
 
  *Request*
 
  GET /res/jalogo.png HTTP/1.1
  Host: jalbum.test
  User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US;
  rv:1.9.2) Gecko/20100115 Firefox/3.6 GTB6
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en-us,en;q=0.5
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 115
  Connection: keep-alive
  Pragma: no-cache
  Cache-Control: no-cache
 
  *Response*
 
  HTTP/1.1 200 OK
  Date: Mon, 29 Mar 2010 15:34:42 GMT
  Expires: Tue, 30 Mar 2010 15:34:42 GMT
  Cache-Control: max-age=86400
  Content-Type: image/png
  Last-Modified: Tue, 16 Mar 2010 10:55:16 GMT
  Accept-Ranges: bytes
  Connection: close
  Server: Jetty(6.1.21)
 
---
APACHE backend.
---
 
  Direct:
  ---
 
  *Request*
 
  GET /gifs/green.gif HTTP/1.1
  Host: apache.jalbum.test
  User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US;
  rv:1.9.2) Gecko/20100115 Firefox/3.6 GTB6
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en-us,en;q=0.5
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 115
  Connection: keep-alive
  Pragma: no-cache
  Cache-Control: no-cache
 
  *Response*
 
  HTTP/1.1 200 OK
  Date: Mon, 29 Mar 2010 15:37:15 GMT
  Server: Apache/2.2.10 (Linux/SUSE)
  Last-Modified: Wed, 27 May 2009 15:02:43 GMT
  Etag: de39-76-46ae622a36ac0
  Accept-Ranges: bytes
  Content-Length: 118
  Keep-Alive: timeout=15, max=100
  Connection: Keep-Alive
  Content-Type: image/gif
 
  Through HaProxy:
  
 
  *Request*
 
  GET /gifs/green.gif HTTP/1.1
  Host: jalbum.test
  User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US;
  rv:1.9.2) Gecko/20100115 Firefox/3.6 GTB6
  Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
  Accept-Language: en-us,en;q=0.5
  Accept-Encoding: gzip,deflate
  Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
  Keep-Alive: 115
  Connection: keep-alive
  Pragma: no-cache
  Cache-Control: no-cache
 
  *Response*
 
  HTTP/1.1 200 OK
  Date: Mon, 29 Mar 2010 15:26:17 GMT
  Server: Apache/2.2.10 (Linux/SUSE)
  Last-Modified: Wed, 27 May 2009 15:02:43 GMT
  Etag: de39-76-46ae622a36ac0
  Accept-Ranges: bytes
  Content-Length: 118
  Content-Type: image/gif
 




Re: redirecting many urls

2009-12-04 Thread Matt
Replying to the list for archive / searching purposes.

Hi Chris,

These rules aren't good practice; they just needed to be done for cosmetic
reasons for our application :-S

Basically, you may want to redirect a lot of requests with a 301, 302,
or 303 code but with only one rule.

I wanted many /usersN/ pages, i.e. http://mydomain.com/users1/page, to
redirect with a 301 code to http://mydomain.com/users1/mypage

This will rewrite/proxy /users1/page to /users1/mypage
- reqrep ^([^\ ]*)\ /([a-z]+)/page(.*)   \1\ /\2/mypage\3

Now I want to redirect those pages, so I set up an ACL to capture the
rewritten URL:
- acl users_redirect1 url_reg /([a-z]+)/mypage

This will capture the rewritten pages; now I can redirect them with just
a prefix, as the URL I'm redirecting has already been rewritten:
- redirect prefix http://mydomain.com code 301 if users_redirect1

Obviously, when the redirect comes back, the rule could redirect it
again, but I already have a rule before all of the above that rewrites
it, i.e.

- reqrep ^([^\ ]*)\ /([a-z]+)/mypage(.*)   \1\ /pages/page-\2
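
Put together, and in evaluation order, the rules above amount to the
following untested sketch (note that, as written, [a-z]+ would not match a
path component containing digits such as users1; [a-z0-9]+ would):

```haproxy
# 1. Un-rewrite pages coming back from the redirect, so they
#    don't match the redirect ACL again
reqrep ^([^\ ]*)\ /([a-z]+)/mypage(.*)   \1\ /pages/page-\2
# 2. Rewrite /<user>/page to /<user>/mypage
reqrep ^([^\ ]*)\ /([a-z]+)/page(.*)   \1\ /\2/mypage\3
# 3. Capture the rewritten URL and redirect with a 301
acl users_redirect1 url_reg /([a-z]+)/mypage
redirect prefix http://mydomain.com code 301 if users_redirect1
```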

Hope this makes sense.

Thanks,

Matt

2009/12/4 Chris Sarginson ch...@sargy.co.uk:
 I didnt quite follow Willy on this - any chance you could provide a sample,
 as this sounds like something we could do well to use in future?

 Cheers
 Chris

 Matt wrote:

 Willy that makes perfect sense, can't believe I didn't see it earlier.
  Testing it right now and it appears to work fine.

 Many thanks,

 Matt

 2009/12/3 Willy Tarreauw...@1wt.eu:

 Hi Matt,

 replying quickly this time.

 On Thu, Dec 03, 2009 at 10:13:57AM +, Matt wrote:

 Hi Willy, sure.

 I guess what i'd be writing with my knowledge of haproxy at the moment
 is one of these for every user:-

 acl user1 url_beg /user1/mypage
 redirect location http://mysite.com/user1/page code 301 if user1

 It would be great if I could do:
 acl user1 url_reg /(.*)/mypage
 redirect location http://mysite.com/\1/page code 301 if user1

 Prefix doesn't work in this context as the thing I want to change is
 after the value i'm capturing.

 I've tried to think about how I could do it with the reqrep and rsprep
 but i'm finding it hard to get my head round.  Especially since I
 really wouldn't like the request to hit a server when I want to
 redirect them.

 OK. What I think you should try, though I've not tested it, is to
 first rewrite the request using reqrep and put the fields in the
 order you like, then perform a prefix-based redirect. I think it
 should work, but there may be corner cases I'm not thinking about.

 Regards,
 Willy







Re: redirecting many urls

2009-12-03 Thread Matt
Hi Willy, sure.

I guess what I'd be writing, with my knowledge of haproxy at the moment,
is one of these for every user:-

acl user1 url_beg /user1/mypage
redirect location http://mysite.com/user1/page code 301 if user1

It would be great if I could do:
acl user1 url_reg /(.*)/mypage
redirect location http://mysite.com/\1/page code 301 if user1

Prefix doesn't work in this context, as the thing I want to change is
after the value I'm capturing.

I've tried to think about how I could do it with reqrep and rsprep,
but I'm finding it hard to get my head around.  Especially since I
really wouldn't want the request to hit a server when I intend to
redirect it.

Any help appreciated.

Thanks,

Matt

2009/12/2 Willy Tarreau w...@1wt.eu:
 Hi Matt,

 On Wed, Dec 02, 2009 at 12:58:50PM +, Matt wrote:
 I guess this can't be done at the moment? Is there a feature request
 for this already?
 (...)
  RewriteRule ^/([a-z]+)/mypage$  http://mydomain.com/$1/yourpage 
  [R=301,L,NE]

 Sorry, but I'm unable to parse apache's cryptic rewriterules.
 You're saying that you want to perform a redirect which keeps
 some part of the request. Could you please post a concrete
 example of what you'd have in the request and what you'd like
 to see in the Location header ? There are some possibilities
 with the redirect statement (prefix, etc...) but maybe not what
 you're looking for. But maybe we can do it differently.

 Regards,
 Willy





Re: redirecting many urls

2009-12-03 Thread Matt
Hi Chris,

We use something similar for proxying requests, but what's needed here is a
redirect with a 301 code.  It's purely for cosmetic reasons but
unfortunately required.  If I'm unable to do it with haproxy, I'll
have to farm them off to an nginx/apache cluster.

Thanks,

Matt

2009/12/3 Chris Sarginson ch...@sargy.co.uk:
 We use the following header rewrite rule:

 reqirep ^([^\ ]*)\ /stats/(.*)  \1\ /cgi-bin/\2

 This means the browser window says http://www.domain.com/stats/stats.cgi,
 but the backend server receives a request for
 www.domain.com/cgi-bin/stats.cgi

 Would this not work as follows:

 reqirep ^([^\ ]*)\ /(.*)/mypage  \1\ //\2/page

 Note - I've not tested that so it may break stuff - check before running
 this.

 Chris


 Matt wrote:

 Hi Willy, sure.

 I guess what i'd be writing with my knowledge of haproxy at the moment
 is one of these for every user:-

 acl user1 url_beg /user1/mypage
 redirect location http://mysite.com/user1/page code 301 if user1

 It would be great if I could do:
 acl user1 url_reg /(.*)/mypage
 redirect location http://mysite.com/\1/page code 301 if user1

 Prefix doesn't work in this context as the thing I want to change is
 after the value i'm capturing.

 I've tried to think about how I could do it with the reqrep and rsprep
 but i'm finding it hard to get my head round.  Especially since I
 really wouldn't like the request to hit a server when I want to
 redirect them.

 Any help appreciated.

 Thanks,

 Matt

 2009/12/2 Willy Tarreauw...@1wt.eu:

 Hi Matt,

 On Wed, Dec 02, 2009 at 12:58:50PM +, Matt wrote:

 I guess this can't be done at the moment? Is there a feature request
 for this already?

 (...)

 RewriteRule ^/([a-z]+)/mypage$  http://mydomain.com/$1/yourpage
 [R=301,L,NE]

 Sorry, but I'm unable to parse apache's cryptic rewriterules.
 You're saying that you want to perform a redirect which keeps
 some part of the request. Could you please post a concrete
 example of what you'd have in the request and what you'd like
 to see in the Location header ? There are some possibilities
 with the redirect statement (prefix, etc...) but maybe not what
 you're looking for. But maybe we can do it differently.

 Regards,
 Willy







Re: redirecting many urls

2009-12-03 Thread Matt
Willy, that makes perfect sense; I can't believe I didn't see it earlier.
I'm testing it right now and it appears to work fine.

Many thanks,

Matt

2009/12/3 Willy Tarreau w...@1wt.eu:
 Hi Matt,

 replying quickly this time.

 On Thu, Dec 03, 2009 at 10:13:57AM +, Matt wrote:
 Hi Willy, sure.

 I guess what i'd be writing with my knowledge of haproxy at the moment
 is one of these for every user:-

 acl user1 url_beg /user1/mypage
 redirect location http://mysite.com/user1/page code 301 if user1

 It would be great if I could do:
 acl user1 url_reg /(.*)/mypage
 redirect location http://mysite.com/\1/page code 301 if user1

 Prefix doesn't work in this context as the thing I want to change is
 after the value i'm capturing.

 I've tried to think about how I could do it with the reqrep and rsprep
 but i'm finding it hard to get my head round.  Especially since I
 really wouldn't like the request to hit a server when I want to
 redirect them.

 OK. What I think you should try, though I've not tested it, is to
 first rewrite the request using reqrep and put the fields in the
 order you like, then perform a prefix-based redirect. I think it
 should work, but there may be corner cases I'm not thinking about.

 Regards,
 Willy





Re: redirecting many urls

2009-12-02 Thread Matt
I guess this can't be done at the moment? Is there a feature request
for this already?

Thanks,

Matt

2009/12/1 Matt mattmora...@gmail.com:
 Hi all,

 I'm trying to do the following apache rewrite rule in haproxy -

 RewriteRule ^/([a-z]+)/mypage$  http://mydomain.com/$1/yourpage [R=301,L,NE]

 I've used the redirect directive before in Haproxy but noticed i'm
 unable to pass in the value captured in the acl to the redirect which
 would mean i'd have to list every acl/redirect for every capture :-(

 Is there a way to do this redirect with the rsprep? or is there
 another obvious way i'm missing?

 Thanks,

 Matt




Re: http stats page not found

2009-10-22 Thread Matt
Makes sense.  So just adding:

listen stats *:8010

will give me a port just for reading the stats output.
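
Following Willy's suggestion, a complete dedicated stats listener might
look like this untested sketch (port and credentials hypothetical; the
stats directives are the ones from the original config):

```haproxy
listen stats *:8010
    mode http
    # monitoring traffic stays on its own port, separate from production
    stats enable
    stats hide-version
    stats auth admin:admin
    stats uri /admin?stats
    stats refresh 5s
```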

2009/10/21 Willy Tarreau w...@1wt.eu:
 On Wed, Oct 21, 2009 at 10:49:44AM +0100, Matt wrote:
 Does anyone else have trouble with the haproxy stats page returning not 
 found?

 Running on CentOS 5.3 version 1.3.19 from EPEL rpm

 I have the following in my config

         stats enable
         stats hide-version
         stats auth      admin:admin
         stats uri       /admin?stats
         stats refresh 5s

 I can usually see the stats page for a couple of minutes, but then
 it's not found.  haproxy is still running and proxing requests.  If I
 keep on hitting the page sometimes it comes back.

 Anyone else had this issue?

 You're probably missing option httpclose, so the connection stays
 alive but haproxy forwards any subsequent request to the application
 server and does not intercept them.

 It's preventing use for production as I need to be able to query the
 page for the csv output.

 If you use it for monitoring, you should really have a dedicated port
 for stats. It will help you a lot to distinguish between production
 traffic and monitoring traffic when reading stats and/or logs.

 Regards,
 Willy





Re: http stats page not found

2009-10-21 Thread Matt
2009/10/21 Matt mattmora...@gmail.com:
 Does anyone else have trouble with the haproxy stats page returning not found?

 Running on CentOS 5.3 version 1.3.19 from EPEL rpm

 I have the following in my config

        stats enable
        stats hide-version
        stats auth      admin:admin
        stats uri       /admin?stats
        stats refresh 5s

 I can usually see the stats page for a couple of minutes, but then
 it's not found.  haproxy is still running and proxing requests.  If I
 keep on hitting the page sometimes it comes back.

 Anyone else had this issue?

 It's preventing use for production as I need to be able to query the
 page for the csv output.

 Thanks,

 Matt

I should have added that it looks like it passes the request through to one
of the backends, as I'm getting a reply from nginx, which isn't running on
the haproxy node.

Is it better to put the stats options in the listen section rather
than the defaults?