haproxy load balancing methods

2016-04-22 Thread Craig Craig
Hi,

I've got a question about the balancing algorithms haproxy supports: if I set a
weight on my backend servers and use leastconn, would that configuration be
equivalent to the "Weighted Least Connections" method that F5 offers?

"Like the Least Connections methods, these load balancing methods select pool
members or nodes based on the number of active connections. However, the
Weighted Least Connections methods also base their selections on server
capacity. The Weighted Least Connections (member) method specifies that the
system uses the value you specify in Connection Limit to establish a
proportional algorithm for each pool member. The system bases the load balancing
decision on that proportion and the number of current connections to that pool
member. For example, member_a has 20 connections and its connection limit is
100, so it is at 20% of capacity. Similarly, member_b has 20 connections and its
connection limit is 200, so it is at 10% of capacity. In this case, the system
selects member_b. This algorithm requires all pool members to have a
non-zero connection limit specified. The Weighted Least Connections (node)
method specifies that the system uses the value you specify in the node's
Connection Limit setting and the number of current connections to a node to
establish a proportional algorithm. This algorithm requires all nodes used by
pool members to have a non-zero connection limit specified. If all servers have
equal capacity, these load balancing methods behave in the same way as the Least
Connections methods."

(https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/ltm_configuration_guide_10_1/ltm_pools.html)

Is my understanding correct that haproxy takes the server weights into account
when balancing with leastconn?
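
For reference, this is roughly the setup I have in mind (a minimal sketch; server
names, addresses and weights are made up):

backend bk_app
mode http
balance leastconn
# app2 has twice the weight, so I would expect it to end up with roughly
# twice as many concurrent connections as app1
server app1 192.168.0.11:80 weight 100 check
server app2 192.168.0.12:80 weight 200 check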

Thank you,

Craig



Re: Q: about HTTP/2

2016-04-01 Thread Craig Craig
Hi,

> Do you guys, on the ML, really need HTTP/2?
> If so what's your deadline??

Yeah, we will definitely need it; our customers started asking about it two
months ago. Management will probably start to worry about pissing off premium
managed hosting customers if they keep asking and we can't add HTTP/2 for them
at some point.
It's not really urgent for us; the deadline might be the end of the year. It
could become a problem if we want to take part in a public tender and someone
sneaks "HTTP/2 support" into the requirements for bidders and we simply can't
offer that with haproxy.
I'm pretty sure my boss would be willing to invest some €€€ if that helps.

- Craig



haproxy as a login portal

2016-02-05 Thread Craig Craig
Hi,

I'd like to use haproxy as a login portal; has anyone done a configuration like
that?

I've got some users connecting from dynamic IPs to access a 3rd-party content
management system which I don't want to expose globally, and I would like to
authenticate them not by IP but by session/actual user before they can even
attempt to log in to the real system.

My idea is that haproxy forwards all unauthenticated requests to a portal
server; after a successful login, that system sets a specific cookie which I can
match on in haproxy to forward authenticated users to the real server. It's not
possible to access stick-tables from an external source, e.g. via the admin
socket, for this, correct? Maybe I could code the login portal in Lua and write
to a data structure?

This is just a quick idea, I haven't looked deeply into it yet, and was
wondering if anyone has done something like it before or has some ideas. :)
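
To make the routing part a bit more concrete, I'm imagining something along
these lines (just a sketch, assuming a reasonably recent haproxy; the cookie
name, certificate path and backend names are placeholders, and the portal would
obviously have to set a cookie that can't simply be forged):

frontend fe_cms
bind 0.0.0.0:443 ssl crt /etc/haproxy/cms.pem
mode http
# requests that carry the portal's session cookie go to the real CMS
acl has_portal_cookie req.cook(portal_session) -m found
use_backend bk_cms if has_portal_cookie
# everything else ends up on the login portal
default_backend bk_portal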

Best regards,

craig



RE: No TCP RST on tcp-request connection reject

2015-01-16 Thread Craig Craig
Hi,

> I don't see how. The socket is immediately close()'ed when it hits
> "tcp-request connection reject", this is as cheap as it gets.
 
If you're getting attacked, you want to send as few unnecessary packets as
possible, so I guess a silent drop could be nice.
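
Just to illustrate what I mean, something along these lines would be ideal (the
"silent-drop" action is hypothetical here, I'm only sketching the idea; the rate
limit numbers are arbitrary):

frontend fe_public
bind 0.0.0.0:80
stick-table type ip size 1m expire 60s store conn_rate(10s)
tcp-request connection track-sc0 src
# hypothetical action: close our side without sending anything back,
# so the remote end is left hanging until its own timeout
tcp-request connection silent-drop if { sc0_conn_rate gt 100 }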
 
> > a) HAProxy (configured with rate limiting etc.) does a "tcp-request
> > connection reject" which ends up as a TCP RST. The attacker gets the
> > RST and immediately tries again
>
> Are you saying that an attacker retransmits faster because of the RST?
> That's nonsense, an attacker doesn't care about the RST at all.
 
His tools might care about it, for example if it's an automated SQLi test?
 
> > b) the same as a) but the socket will be closed on the server side but no RST,
> > nothing will be sent back to the remote side. The connections on the remote
> > side will be kept open until timeout.
>
> An attacker doesn't keep state on his local machine if his intention is to
> SYN flood you.
 
I think he's talking about established connections.
 

- Craig

consistent server status between config reloads

2014-02-24 Thread Craig Craig
Hi,

I'm running some scripts that can disable a server for maintenance/application
deployments. However, a config reload enables the server again, and we have
frequent changes to our haproxy config. Would it be possible to keep disabled
servers in that state across reloads? Maybe with an additional config option
like disable_permanent or the like?
Any opinions on this?
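
For context, the scripts simply talk to the stats socket, roughly like this
(backend/server names are placeholders; the socket needs to be declared with
admin level):

# in haproxy.cfg
global
stats socket /var/run/haproxy.stat mode 600 level admin

# disable a server before a deployment
echo "disable server bk_app/app1" | socat stdio /var/run/haproxy.stat

# re-enable it afterwards
echo "enable server bk_app/app1" | socat stdio /var/run/haproxy.stat

The state set this way only lives in the running process, which is why it is
lost as soon as the config is reloaded.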

Best regards,

Craig

Re: External Monitoring of https on LB's

2012-08-27 Thread Craig Craig
Hi,

a patch for this is already upstream - I put some effort into getting those patches accepted:

http://groups.google.com/group/mailing.unix.stunnel-users/tree/browse_frm/month/2011-02/a1956cc49beaf689?rnum=11&_done=%2Fgroup%2Fmailing.unix.stunnel-users%2Fbrowse_frm%2Fmonth%2F2011-02%3Ffwc%3D1%26#doc_2d06864707c888ef

Changelog:

Version 4.36, 2011.05.03, urgency: LOW:
New features
* Backlog parameter of listen(2) changed from 5 to SOMAXCONN: improved
  behavior on heavy load.
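
If you want to check whether a too-small backlog is actually biting on a given
box, something like this usually shows it (standard Linux tooling; the exact
counter wording varies a bit between kernel versions):

# what SOMAXCONN resolves to on this machine
sysctl net.core.somaxconn

# these counters grow when the listen queue overflows
netstat -s | grep -i -e "listen queue" -e "SYNs to LISTEN"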

Also, notice the last posting in that thread. I really don't like his attitude
towards integrating patches, but I don't care anymore - we have since moved to
nginx.


Best regards,

Stefan Behte



On August 27, 2012 at 7:14 AM Willy Tarreau wrote:
> Hi,
>
> On Mon, Aug 27, 2012 at 09:11:43AM +1000, s...@summerwinter.com wrote:
> > Hi there,
> >
> > Forgive me if this is the wrong place for advice, but I figure a lot
> > of people here must use a similar setup.
> >
> > I've got 2 LBs set up with haproxy, heartbeat & stunnel. Http & https
> > are working correctly.
> >
> > I am using HyperSpin.com for external monitoring to receive alerts
> > based on ping, http & https on the float IP.
> >
> > Ping & http work without issue. However, 75% of their 20 or so global
> > monitoring servers appear to return errors 'couldn't connect to port
> > 443', so every 10-15 minutes a server that can't connect on 443 tests
> > it, fails, and my inbox fills.
> >
> > There is no firewall on the LB, nothing I can tell that would be
> > blocking access to 443.
> >
> > I've received the following logs from HyperSpin on a server that is
> > unable to connect:
> >
> > -
> >
> > We do not know the cause of the problem, but we can confirm it is a SSL
> > issue.
> >
> > We logged to our Singapore server and tried using curl and wget to
> > access your website. Both returned errors.
> >
> > ===
> > [admin@sg ~]$ curl https://
> > curl: (35) Unknown SSL protocol error in connection to :443
> >
> > [admin@sg ~]$ wget -O - https://
> > --21:44:16-- https://
> > => `-'
> > Connecting to :443... connected.
> > Unable to establish SSL connection.
> >
> > ===
> >
> > I thought it may be an issue with the intermediate certificate, but I
> > have tacked that on at the end of the ssl.crt file I'm using.
> >
> > Any ideas?
>
> I could suspect something else. Did you patch your stunnel ? By default
> it has a very tiny listen queue of only 5 entries which can cause exactly
> this issue if there is even a moderate load on it. A patch to change this
> is available here if you want :
>
> http://www.exceliance.fr/download/free/patches/stunnel/
>
> It adds a "listenqueue" parameter allowing you to increase the backlog.
> I would really not be surprised if this was the issue.
>
> Regards,
> Willy
>
>



Re: Re: Re: balance (hdr) problem (maybe bug?)

2011-02-11 Thread Craig Craig
Hi,

this is an addition to the cases I sent previously; I accidentally found out
that haproxy behaves differently when the last header is not a "Connection:"
header.

This is a case for config #2 ("reqidel ^X-Forwarded-For:.*" is set):

case i): stays on the same backend.
nc 127.0.0.1 8085 <

> Hi,
> 
> I decided to narrow the bug a bit and deleted all other backends/frontends
> we have; I've defined three servers in the backend which all query
> www.google.de. Thus you do not have to set up your own server if you want to
> test this config, google can take the load. ;)
> 
> The problem is reproducible with this config; I used netcat here.
> 
> Running with #1 config ("reqidel ^X-Forwarded-For:.*" in frontend_btg not
> set).
> 
> case a) jumps between backends:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> Connection: keep-alive
> 
> EOF
> 
> case b) stays on same backend:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> Connection: close
> 
> EOF
> 
> case c) stays on same backend:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> X-Forwarded-For: 127.0.0.1
> Connection: close
> 
> EOF
> 
> case d) stays on same backend:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> X-Forwarded-For: 127.0.0.1
> Connection: keep-alive
> 
> EOF
> 
> Expected behaviour:
> Case a) should not jump between servers. An empty X-Forwarded-For header
> means that the "same" header (an empty one) is always hashed, so you
> should always end up on the same server.
> Cases a) and b) should behave the same. Why should it matter whether the
> connection is set to keep-alive or close? I've set option httpclose anyway.
> 
> 
> Running with #2 config ("reqidel ^X-Forwarded-For:.*" in frontend_btg is
> set).
> 
> case e) jumps between backends:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> Connection: keep-alive
> 
> EOF
> 
> case f) stays on same backend:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> Connection: close
> 
> EOF
> 
> case g) stays on same backend:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> X-Forwarded-For: 127.0.0.1
> Connection: close
> 
> EOF
> 
> case h) jumps between backends:
> nc 127.0.0.1 8085 <<EOF
> GET / HTTP/1.1
> Host: www.google.de
> X-Forwarded-For: 127.0.0.1
> Connection: keep-alive
> 
> EOF
> 
> Expected behaviour:
> Case e) same expectations as with case a) and config #1.
> Case h) should really stay on one backend. I want haproxy to delete
> X-Forwarded-For on the frontend, add a new "X-Forwarded-For: SRC-IP", and
> balance based on that header in the backend.
> 
> With this behaviour you will get problems with http/https and sessions:
> stunnel will add an X-Forwarded-For header which contains the actual IP, but
> the user might have sent a different one (or none), resulting in the client
> accessing different backends over http than over https.
> 
> 
> Best regards,
> 
> Craig
> 
> 
> 
> ---- original message ----
> 
> Subject: Re: balance (hdr) problem (maybe bug?)
> Sent: Thu, 10 Feb 2011
> From: Willy Tarreau
> 
> > Hi Craig,
> > 
> > On Mon, Feb 07, 2011 at 09:24:24PM +0100, Craig wrote:
> > > Hi,
> > > 
> > > >> The X-Forwarded-For header is only added once at the end of all processing.
> > > >> Otherwise, having it in the defaults section would result in both your
> > > >> frontend and your backend adding it.
> > > Then the possibility to add it only to a frontend or a backend in the
> > > defaults section would be nice?
> > 
> > It is already the case. The fact is that we're telling haproxy that we
> > want an outgoing request to have the header. If you set the option in
> > the frontend, it will have it. If you set it in the backend, it will
> > have it. If you set it in both, it will only be added once. It's really
> > a flag : when the request passes through a frontend or backend which
> > has the option, then it will have the header appended.
> > 
> > > >> So in your case, what happens is that you delete it in the frontend (using
> > > >> reqidel) then you tag the session for adding a new one after all processing
> > > >> is done.
> > > >>
> > > >> When at the last point we have to establish a connection to the server, we
> > > >> check the header and balance based on it. I agree we should always have it
> > > >> filled with the same value, so there's a bug.
> > > So if I got it right, I cannot balance based on the new header because
> > > it was not added yet. That behaviour comes really unexpected because one
> > > usually would believe it was already added in the frontend.
> > 
> > It can come unexpected when you reason with header addition, but it's sort
> > of an implicit header addition. The opposite would be much more unexpected,
> > you'd really not want the header to be added twice because it was enabled in
> > both sections. It's possible that the doc is not clear enough:
> > 
> >  "This option may be specified either in the frontend or in the

Re: Re: balance (hdr) problem (maybe bug?)

2011-02-11 Thread Craig Craig
Hi,

I decided to narrow the bug a bit and deleted all other backends/frontends we 
have; I've defined three servers in the backend which all query www.google.de. 
Thus you do not have to set up your own server if you want to test this config, 
google can take the load. ;)

The problem is reproducible with this config; I used netcat here.

Running with #1 config ("reqidel ^X-Forwarded-For:.*" in frontend_btg not set).

case a) jumps between backends:
nc 127.0.0.1 8085 <

> Hi Craig,
> 
> On Mon, Feb 07, 2011 at 09:24:24PM +0100, Craig wrote:
> > Hi,
> > 
> > >> The X-Forwarded-For header is only added once at the end of all processing.
> > >> Otherwise, having it in the defaults section would result in both your
> > >> frontend and your backend adding it.
> > Then the possibility to add it only to a frontend or a backend in the
> > defaults section would be nice?
> 
> It is already the case. The fact is that we're telling haproxy that we
> want an outgoing request to have the header. If you set the option in
> the frontend, it will have it. If you set it in the backend, it will
> have it. If you set it in both, it will only be added once. It's really
> a flag : when the request passes through a frontend or backend which
> has the option, then it will have the header appended.
> 
> > >> So in your case, what happens is that you delete it in the frontend (using
> > >> reqidel) then you tag the session for adding a new one after all processing
> > >> is done.
> > >>
> > >> When at the last point we have to establish a connection to the server, we
> > >> check the header and balance based on it. I agree we should always have it
> > >> filled with the same value, so there's a bug.
> > So if I got it right, I cannot balance based on the new header because
> > it was not added yet. That behaviour comes really unexpected because one
> > usually would believe it was already added in the frontend.
> 
> It can come unexpected when you reason with header addition, but it's sort
> of an implicit header addition. The opposite would be much more unexpected,
> you'd really not want the header to be added twice because it was enabled in
> both sections. It's possible that the doc is not clear enough :
> 
>  "This option may be specified either in the frontend or in the backend. If at
>   least one of them uses it, the header will be added. Note that the backend's
>   setting of the header subargument takes precedence over the frontend's if
>   both are defined."
> 
> Maybe we should insist on the fact that it's done only at the end.
> 
> We could try to add it in the frontend and tag the session to know it was
> already performed. But this would slightly change the semantics to a new
> one which might not necessarily be desirable. For instance, it's possible
> in a backend to delete the header and set the option. That way you know
> that your servers will receive exactly one occurrence of it. Many people
> are doing that because their servers are having issues with this header
> passed as a list. Changing the behaviour would result in the backend's
> delete rule to suppress the header that was just added, and the new one
> won't be added anymore since it already was.
> 
> > >> My guess is that you're running a version prior to 1.4.10 which has the
> > >> header deletion bug: the header list can become corrupted when exactly
> > >> two consecutive headers are removed from the request (eg: connection and
> > >> x-forwarded-for). Then the newly added X-Forwarded-For could not be seen
> > >> by the code responsible for hashing it.
> > >>
> > >> If so, please try to upgrade to the last bug fix (1.4.10) and see if the
> > >> problem persists.
> > I am already using 1.4.10 - sorry, it seems I somehow forgot to mention
> > it! :/
> 
> OK so I'm interested in any reliable reproducer for this bug (eg: config and/or
> request exhibiting the issue). You can send me your config privately if you
> don't want to post it to the list.
> 
> > That is a good hint, but I also have a frontend for SSL (with stunnel,
> > which adds the X-Forwarded-For header) that I'd want to use with the same
> > backend. I did not like defining backends twice as it introduces
> > redundancy and might lead to inconsistency; it is a good workaround
> > though. Note: my testing and the bug happened with the normal frontend.
> 
> OK I see. Be aware that this setup is not compatible with keep-alive though,
> as stunnel will only add the header in the first request. An alternative is
> to apply the patch for the proxy protocol to stunnel and use it with either
> haproxy 1.5-dev, or use the 1.4 backports that were recently posted to the
> list.
> 
> > Also, I could leave out the reqidel of the header, but then a malicious
> > party could theoretically choose the server it accesses (by forging
> > x-forwarded-for) and overload one after another; I prefer to take away
> > this possibility (yea I am overdoing it, maybe). ;)
> 
> Targett

balance (hdr) problem (maybe bug?)

2011-02-03 Thread Craig Craig
Hi,

I've stumbled upon a problem with balance hdr(), specifically with
X-Forwarded-For.
When you use the config that I've attached, you get different results depending
on whether you send an X-Forwarded-For header or not.

The source IP does not change when I perform these queries, and the hosts did
not change state:

curl http://www.foo.de/host.jsp -s
Always stays on the same server.

curl http://www.foo.de/host.jsp -s -H "X-Forwarded-For: x.x.x.x"
Jumps between the three hosts.

This is strange: I delete the header that is sent by the client on the frontend 
with reqidel and set a new one with "option forwardfor" - I expected the 
backend to balance based on that new header.

If my assumption was wrong, and the original header is used, then I should not 
jump between hosts when I am always sending the same header.

Something smells fishy here... is this a bug? A feature? ;) Or a misunderstanding
on my part?


Thanks,

Craig


haproxy.cfg:
---
global
user haproxy
group haproxy
maxconn 75000
log 127.0.0.1 local0
stats socket /var/run/haproxy.stat mode 600

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s

backend backend_btg
mode http
balance hdr(X-Forwarded-For)
option redispatch
option httpchk HEAD / HTTP/1.1\r\nHost:\ www.foo.de
server S43 192.168.x.43:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2
server S56 192.168.x.56:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2
server S76 192.168.x.76:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2

frontend frontend_btg
bind 0.0.0.0:8085
maxconn 3
mode http
option httplog
reqidel ^X-Forwarded-For:.*
option forwardfor except 192.168.X.Y
option httpclose
log 127.0.0.1 local0
capture request header Host len 192

default_backend backend_btg




haproxy bug or wrong kernel settings?

2010-06-30 Thread Craig Craig
Hi list,

I'm having a strange problem with haproxy 1.3.24 when the server gets more
connections. Load is still ok by then (about 0.5), throughput is about
50-100 MBit.

Right now, everything is fine, I'm seeing:

Server connection states (it also runs a squid, which is not used for this 
domain):

     92 CLOSE_WAIT
     21 CLOSING
   3315 ESTABLISHED
     86 FIN_WAIT1
    171 FIN_WAIT2
     60 LAST_ACK
     34 LISTEN
     99 SYN_RECV
      1 SYN_SENT
   9532 TIME_WAIT

Port 8085 (haproxy-frontend) only:

ESTABLISHED  1544
FIN_WAIT1    16
FIN_WAIT2    141
LAST_ACK     59
SYN_RECV     44
SYN_SENT     0
TIME_WAIT    1101
CLOSE_WAIT   1
CLOSING      0


It seems that the problems start somewhere between 2000 and 4500 established
connections; I've not been able to determine the exact number, as I've since
changed the NAT to point to the server directly - it could handle the ~6600
connections without problems.

When I was querying a server through haproxy (on the haproxy host itself), I saw
this huge lag:

1.) time printf "GET / HTTP/1.1\r\nhost: www.foo.de\r\nConnection: close\r\nCookie: -\r\n\r\n" | nc -v 192.168.92.11 8085 &>/dev/null
real    0m19.976s
user    0m0.000s
sys     0m0.008s

And at the same time, I queried the server directly, from the server running 
haproxy again:
2.) time printf "GET / HTTP/1.1\r\nhost: www.foo.de\r\nConnection: close\r\nCookie: -\r\n\r\n" | nc -v 192.168.70.43 80 &>/dev/null
real    0m0.049s
user    0m0.000s
sys     0m0.004s

Nr. 1) always had the lag; Nr. 2) was always fast, though requests seemed to get
slower the more connections were open. After switching the NAT from haproxy to
the host directly, the query times are in the range of #2 again. It seems that
after a specific limit is reached by haproxy, the connections get slower and
slower.

It might also be a Linux kernel setting, but any hint would be much
appreciated...
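
In case it helps to narrow this down, these are the sysctls I would look at first
(standard Linux names as far as I know, but take them as a guess on my part, not
a diagnosis):

# conntrack table size on the box doing the NAT - with thousands of
# connections an exhausted table stalls traffic exactly like this
sysctl net.netfilter.nf_conntrack_max
sysctl net.netfilter.nf_conntrack_count

# listen backlog and SYN backlog limits
sysctl net.core.somaxconn
sysctl net.ipv4.tcp_max_syn_backlog

# ephemeral port range on the haproxy box (one outgoing connection is
# opened per client connection)
sysctl net.ipv4.ip_local_port_range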

Best regards,
Craig



My config:

# haproxy.cfg
global
user haproxy
group haproxy
maxconn 75000
ulimit-n 192000

log 127.0.0.1 local0

defaults
timeout client 300s
timeout server 300s
timeout queue 60s
timeout connect 7s
timeout http-request 10s

backend backend_btg
mode http
balance hdr(X-Forwarded-For)
option redispatch
option httpchk HEAD / HTTP/1.1\r\nHost:\ www.foo.de
server Sxxx 192.168.71.43:80 weight 100 maxconn 16384 check inter 1 fall 2 rise 2

frontend frontend_btg
bind 0.0.0.0:8085
mode http
option httplog
reqidel ^X-Forwarded-For:.*
option forwardfor except 192.168.97.11
log 127.0.0.1 local0
capture request header Host len 192
timeout client 1m

acl request_btgdomain hdr_reg(host) -i (^|\.)foo\.de

acl redirect1   url_beg /1
acl redirect2   url_beg /2
acl redirect3   url_beg /3
acl redirect4   url_beg /4
acl redirect5   url_beg /5
acl forum_request   hdr_dom(host)   -i forum.foo.de

acl forum_allow_bt1 src 193.17.232.0/24
acl forum_allow_bt2 src 193.17.236.0/24
acl forum_allow_bt3 src 193.17.243.0/24
acl forum_allow_bt4 src 193.17.244.0/24

redirect location https://www.foo.de/1 if redirect1 request_btgdomain
redirect location https://www.foo.de/2 if redirect2 request_btgdomain
redirect location https://www.foo.de/3 if redirect3 request_btgdomain
redirect location https://www.foo.de/4 if redirect4 request_btgdomain
redirect location https://www.foo.de/5 if redirect5 request_btgdomain

default_backend backend_btg



## sysctl -a output:

kernel.sched_rt_period_us = 100
kernel.sched_rt_runtime_us = 95
kernel.sched_compat_yield = 0
kernel.panic = 0
kernel.core_uses_pid = 0
kernel.core_pattern = core
kernel.tainted = 0
kernel.print-fatal-signals = 0
kernel.ctrl-alt-del = 0
kernel.modprobe = /sbin/modprobe
kernel.hotplug = 
kernel.sg-big-buff = 32768
kernel.cad_pid = 1
kernel.threads-max = 274432
kernel.random.poolsize = 4096
kernel.random.entropy_avail = 130
kernel.random.read_wakeup_threshold = 64
kernel.random.write_wakeup_threshold = 128
kernel.overflowuid = 65534
kernel.overflowgid = 65534
kernel.pid_max = 32768
kernel.panic_on_oops = 0
kernel.printk = 1   4   1   7
kernel.printk_ratelimit = 5
kernel.printk_ratelimit_burst = 10
kernel.ngroups_max = 65536
kernel.unknown_nmi_panic = 0
kernel.nmi_watchdog = 0
kernel.panic_on_unrecovered_nmi = 0
kernel.bootloader_type = 113
kernel.kstack_depth_to_print = 12
kernel.io_delay_type = 0
kernel.randomize_va_space = 1
kernel.acpi_video_flags = 0
kernel.compat-log = 1
kernel.max_lock_depth = 1024
kernel.poweroff_cmd = /sbin/poweroff
kernel.scan_unevictable_pages = 0
kernel.vsyscall64 = 1
kernel.ostype = Linux
kernel.osrelease = 2.6.29-gentoo-r3
kernel.version = #2 SMP Tue May 11 19:55:13 CEST 2010
kernel.hostname = N111
kernel.domainname = (none)
kernel.shmma

Re: stick-table question

2010-05-11 Thread Craig Craig
Hi,

you need to add the header in stunnel, not in haproxy. Have a look at the
X-Forwarded-For patches at http://haproxy.1wt.eu/download/patches/. There are at
least two people using them in a production environment, so you should give them
a try, too. ;)
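
For illustration only - the exact option name depends on the patch you apply, so
treat this as an assumption rather than a reference - a patched stunnel service
section would look roughly like this:

[https]
accept  = 443
connect = 127.0.0.1:8085
; option added by the X-Forwarded-For patch (name may differ per patch version)
xforwardedfor = yes

With that in place, haproxy actually sees the client's address in the
X-Forwarded-For header instead of every SSL connection appearing to come from
localhost.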

Best regards,

Craig

---- original message ----

Subject: stick-table question
Sent: Tue, 11 May 2010
From: Shannon Lee

Hey guys,
The stick-table feature is great, but it turns out I'm not smart enough to use 
it :  
We got the stick-table feature working ok, and 'src' seems to be exactly what 
we're looking for, except that the majority of our connections come in via ssl 
-- via stunnel -- which means that 'src' is 'localhost.'  It seems like we 
ought to be able to use the X-Forwarded-For header as the key for the table, 
but I don't understand how, what am I missing?
Thanks,
--S

---- end of original message ----


Location rewriting

2010-04-06 Thread Craig Craig
Hi,

I'm currently using two frontends on different ports for SSL/non-SSL traffic. I
run stunnel for SSL termination; it forwards to one of the frontends.

Unfortunately, it seems that haproxy on the non-SSL side can't redirect a
request for /$foo to https://mydomain.com/$foo ($foo is meant to be a
variable...). You can already guess why I want this: it would be a nice option
for fixing websites/webapps with lots of hardcoded URLs to http://mydomain.com
so that they always get redirected to the secure version. Not a very nice
solution - I know.
I guess the best option would be to be able to use external scripts for
rewriting, what do you think?
Or is it possible to do this cleanly with some option I haven't seen? I thought
about somehow rewriting headers, inserting a Location header etc. - but could
that work?
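
To make it concrete, what I'm after is something along these lines (a sketch
only - I haven't tested whether "redirect prefix" behaves this way or which
versions support it, and in practice the rule would be limited by an ACL):

frontend fe_http
bind 0.0.0.0:80
mode http
# send plain-http requests to the https version, keeping the URI (/$foo) intact
redirect prefix https://mydomain.com code 301

i.e. a request for http://mydomain.com/$foo would get a 301 to
https://mydomain.com/$foo without touching the application.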

Best regards,

Craig