I like the default message. If you want to suppress it, then you can use -q.
Having some standard output that can be suppressed with -q is also
fairly standard for UNIX commands.
On Mon, Nov 13, 2023 at 4:07 AM William Lallemand
wrote:
>
> On Mon, Nov 13, 2023 at 09:52:57AM +0100, Baptiste wro
I agree with defaulting to alpn h2,http/1.1 sooner (don't wait for 2.9);
even 2.6 would be fine IMO. It wouldn't be a new feature for 2.6,
only a non-breaking (AFAIK) default change...
I would have concerns making QUIC default for 443 ssl (especially
prior to 2.8), but you are not suggesting that any
Assuming no direct access to apache servers, does anyone know if
haproxy would by default protect against these vulnerabilities?
What exactly is needed to reproduce the poor performance issue with openssl
3? I was able to test 20k req/sec with it using k6 to simulate 16k users
over a wan. The k6 box did have openssl1. Probably could have sustained
more, but that's all I need right now. Openssl v1 tested a little faster,
The SYN-ACK tracking works in transparent mode with haproxy. I have set up
haproxy to rebind all connections before and basically proxy the internet
(and use NAT for udp). That said, I assume the point of DSR is that it's
not always going to take the same path and that is where the real issue
is.
That's what, 50s? You are probably doing connection pooling and it's using
LRU instead of actually cycling through connections. At least that is what I
have seen node typically do.
Instead of 50 seconds, try:
timeout client 12h
timeout server 12h
You might want to enable logging
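As a config fragment, the suggestion above might look like this (section placement is a guess; adjust to your own layout):

```haproxy
defaults
    log global
    # Long-lived connections (websockets, idle keep-alives) need generous
    # timeouts so haproxy does not cut them off around the 50s mark:
    timeout client 12h
    timeout server 12h
```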
Not positive that's the only use case, but I have a number of udp ports also
open, so I ran tcpdump on them and they are all talking to syslog. Seems to
line up at about 1 per cpu on a couple of machines I checked.
On Fri, Aug 5, 2022 at 7:19 PM Shawn Heisey wrote:
> I am running haproxy in a couple of place
Here is your answer:
Layer7 wrong status, code: 401, info: "Unauthorized"
Your health check is not providing the required credentials and is failing.
You can either fix that, or, as you only have one backend, you might want to
remove the check since it's not gaining you much with only one backend.
On
http-request deny deny_status 405 if { url_sub -i "\$\{jndi:" or
hdr_sub(user-agent) -i "\$\{jndi:" }
was not catching the bad traffic. I think the escapes were causing issues
in the matching.
The following did work:
http-request deny deny_status 405 if { url_sub -i -f
/etc/haprox
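The working rule is truncated above; a hedged reconstruction of the pattern-file approach, with a hypothetical file path, might look like:

```haproxy
# Hypothetical pattern file /etc/haproxy/jndi.pat, holding the raw
# (unescaped) substrings one per line, e.g.:
#   ${jndi:
http-request deny deny_status 405 if { url_sub -i -f /etc/haproxy/jndi.pat } or { hdr_sub(user-agent) -i -f /etc/haproxy/jndi.pat }
```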
If you want them to all use the same outgoing IP, you could place them
behind a NAT router instead of using an outgoing proxy server.
That said, if you do want to use haproxy, I think you will want to use the
"usesrc client" on the haproxy config and the haproxy server will also need
the prerouting a
Sounds like the biggest part of hot restarts is the cost of leaving the old
process running as they have a lot of long running TCP connections, and if
you do a lot of restarts the memory requirements build up. Not much of an
issue for short lived http requests (although it would be nice if keep
al
A couple of possible options...
You could use tcp-request inspect-delay to delay the response a number of
seconds (and accept it quickly if it's legitimate traffic).
You could use redirects which will have the clients do more requests
(Possibly with the inspect delays).
That said, it would be useful to f
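A sketch of the inspect-delay idea, assuming illustrative names and paths:

```haproxy
frontend web
    bind :80
    # Hold new connections up to 5s before forwarding...
    tcp-request inspect-delay 5s
    # ...but let known-good sources through immediately
    # (the address list path is illustrative):
    tcp-request content accept if { src -f /etc/haproxy/trusted.lst }
    tcp-request content accept if WAIT_END
```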
CentOS 6 isn't EOL until the end of the month, so there are a couple more
weeks left.
There is at least one place to pay for support through 2024.
($3/month/server)
Might be good to keep it for a bit past EOL, as I know when migrating
services sometimes I'll throw a proxy server on the old serve
I could be wrong, but I think he is stating that if you have that
allowed, it can be used to get a direct connection to the backend
bypassing any routing or acls you have in the load balancer, so if
some endpoints are blocked, or internal only, they could potentially
be accessed this way.
For e
Yes, that is a bug in your configuration. You need to tell haproxy the
connections need to go to that same server if that is what the servers
need. When I require session affinity, I personally prefer to use cookies
to make that happen, but they might not work in some situations.
Add/adjust
Look into module rpaf for apache along with "option forwardfor" in haproxy
and no need for routing changes, or you can setup haproxy as a transparent
proxy (source usesrc client) and not change apache but would require
routing changes on the apache servers.
> -Original Message-
> From: Sim
This isn't tested, just a sample idea... obviously parts missing...
Create your acls something like:
frontend web
acl is_bot hdr_sub(User-Agent) -i bot
...
use_backend botq if is_bot
default_backend normalq
backend normalq
You could set up the acls so they all go to one backend, and thus limit the
number of connections on that backend to something low like 1. Not exactly a
rate limit, but at most 1 connection to serve them all...
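Filling in the missing parts of the sketch above (all names and addresses are illustrative):

```haproxy
frontend web
    bind :80
    acl is_bot hdr_sub(User-Agent) -i bot
    use_backend botq if is_bot
    default_backend normalq

backend botq
    # One connection at a time; additional bot requests queue up.
    server s1 10.0.0.10:80 maxconn 1

backend normalq
    server s1 10.0.0.10:80
```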
> -Original Message-
> From: hapr...@serverphorums.com [mailto:hapr...@server
I tend to have a really large rise and a small fall, like fall 2 and rise 9
(rise 99 or higher would be good if you want to ensure it stays down long
enough to trigger). That way they stay dead for awhile, but can go down
quickly.
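As a server line, that rise/fall combination might look like this (a sketch; the address and check interval are assumptions):

```haproxy
# Goes down after 2 failed checks, needs 9 good ones to come back up:
server app1 10.0.0.11:80 check inter 2s rise 9 fall 2
```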
Anyways, so that it shows in my monitoring system I have this in my zabbix
cfg
There are all sorts of kernel tuning parameters under /proc that can make
a big difference, not to mention what type of virtual NIC you have in the
VM. Are they running the same kernel version and Gentoo release? Have
you compared sysctl.conf (or whatever Gentoo uses to customize settings in
/proc
There is a brief time during the switchover from the old process to the
new where new connections cannot be accepted. Better to mark the backend
servers down without switching processes. (Several ways to do that.)
If the refused connections concern you, and you can't avoid starting
haproxy,
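One of the "several ways" might be the runtime stats socket, assuming a version that supports admin level (the socket path and server name here are illustrative):

```haproxy
global
    # An admin-level runtime socket lets you mark servers down before a
    # restart, e.g.:
    #   echo "disable server bk/app1" | socat stdio /var/run/haproxy.sock
    stats socket /var/run/haproxy.sock mode 600 level admin
```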
, newer versions of stunnel probably perform better.
> -Original Message-
> From: "Brane F. Gračnar" [mailto:brane.grac...@tsmedia.si]
> Sent: Tuesday, December 13, 2011 5:21 PM
> To: David Prothero
> Cc: John Lauro; haproxy@formilux.org
> Subject: Re: SSL bes
Been using haproxy for some time, but have not used it with SSL yet.
What is the best option to implement SSL? There seems to be several
options, some requiring 1.5 (which isn't exactly ideal as 1.5 isn't
considered stable yet).
I do need to route based on the incoming request, so decode
Also, how large is large? >4GB?
> -Original Message-
> From: Baptiste [mailto:bed...@gmail.com]
> Sent: Friday, October 28, 2011 5:48 PM
> To: Justin Rice
> Cc: haproxy@formilux.org
> Subject: Re: HAProxy and Downloading Large Files
>
> hi,
>
> What do HAProxy logs report you when the
I suggest you use balance leastconn instead of roundrobin. That way the
weights affect the ratios, but they are not locked in. If a server clears
connections faster than the others, it will get more requests... if it
falls behind it will get fewer...
Given that multiple factors impact how many r
e more complex setup, but can
be done.
> -Original Message-
> From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
> Sent: Tuesday, September 27, 2011 8:03 PM
> To: John Lauro
> Subject: Re: TPROXY + Hearbeat
>
> Hey John,
>
> Thank you for the giving me
Thanks, that worked.
> -Original Message-
> From: Baptiste [mailto:bed...@gmail.com]
> Sent: Tuesday, September 27, 2011 6:02 PM
> To: John Lauro
> Cc: haproxy@formilux.org
> Subject: Re: Log host info with uri
>
> You might want to use "capture request h
light load...
> -Original Message-
> From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
> Sent: Tuesday, September 27, 2011 6:13 PM
> To: John Lauro
> Cc: haproxy@formilux.org
> Subject: Re: TPROXY + Hearbeat
>
> Hey John,
>
> Thanks for the quick response. That
Works great. I have several pairs of vm haproxy servers in transparent mode
and running heartbeat to take over the shared IP.
> -Original Message-
> From: Jason J. W. Williams [mailto:jasonjwwilli...@gmail.com]
> Sent: Tuesday, September 27, 2011 3:46 PM
> To: haproxy@formilux.org
> Sub
Is there an easy way to have haproxy log the host with the uri instead of
just the relative uri? I have some 503 errors, and they are going to
virtual hosts on the backend, and I am having some trouble tracking them
down, as the uri isn't specific enough; it is common among multiple
hosts. I'm
Reserving memory is critical, especially if you overcommit. Haproxy degrades
extremely quickly when things swap. Don't allow swapping inside or outside of
the vm. Reserving a small amount of CPU is good. That said, I doubt either
of these is your problem. It's probably more related to the
Are you using connection tracking with iptables? If so, you might want to
consider using a more basic configuration without connection tracking.
What does your iptables configuration look like?
From: Joe Torsitano [mailto:jtorsit...@weatherforyou.com]
Sent: Saturday, December 19, 20
> Is there some simple configuration option(s) staring me in the face
> that I'm missing, or is this more complex than it seems on the surface?
>
In terms of some simple configuration option...
Why not just have 3 active? If one is down, its load will automatically be
routed to the other two.
I think haproxy can only do header manipulation with HTTP. In other words,
rspadd will not work with mode tcp.
You should be able to have your PHP script add the custom header.
Postfix can handle a lot of outgoing mail. If you don't mind me asking, I'm
just curious how much mail are you sending
Not saying whether this is a good way or not, but one method is to do something like
the following on the servers:
iptables -A INPUT -p tcp --dport 3306 --syn -j REJECT
when it wants to mark itself down. (replace 3306 with whatever port you
want to flag down). Only matching on syn packets, so existing c
s and limit on the number
of connections / sec based on ip addresses...
> -Original Message-
> From: Wout Mertens [mailto:wout.mert...@gmail.com]
> Sent: Monday, November 16, 2009 9:19 AM
> To: John Lauro
> Cc: haproxy@formilux.org
> Subject: Re: Preventing bots from starvi
me ways to tune apache. As others have mentioned, telling
the crawlers to behave themselves or totally ignore the wiki with a robots
file is probably best.
> -Original Message-
> From: Wout Mertens [mailto:wout.mert...@gmail.com]
> Sent: Monday, November 16, 2009 7:31 AM
> To: Jo
I would probably do that sort of throttling at the OS level with iptables,
etc...
That said, before that I would investigate why the wiki is so slow...
Something probably isn't configured right if it chokes with only a few
simultaneous accesses. I mean, unless it's an embedded server with under 32MB
have full logging in the firewall that
forwards to haproxy... and can easily merge the two logs...
From: XANi [mailto:xani...@gmail.com]
Sent: Wednesday, November 04, 2009 8:30 AM
To: John Lauro
Cc: 'Dave'; haproxy@formilux.org
Subject: Re: Using HAProxy In Place of WCCP
On Wed, 4 No
I see two potential issues (which may or may not be important for you).
1. Non http 1.1 clients may have trouble (ie: they don't send the host
on the URL request, or if they are not really http but using port 80).
2. Back tracking if you get a complaint from some website (ie: RIAA
esent the client's
IP with "source haproxyinterfaceip usesrc client"
Might be good if the transparent mode had a reference to usesrc..
From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com]
Sent: Wednesday, October 28, 2009 9:48 AM
To: John Lauro
Subject: Re: Backen
You could run mode tcp if you setup haproxy in transparent mode .
From: Dirk Taggesell [mailto:dirk.tagges...@googlemail.com]
Sent: Wednesday, October 28, 2009 9:03 AM
To: haproxy@formilux.org
Subject: Backend sends 204, haproxy sends 502
Hi all,
I want to load balance a new server applic
...@gmail.com]
> Sent: Wednesday, October 21, 2009 7:07 AM
> To: John Lauro
> Cc: haproxy
> Subject: Re: slow tcp handshake
>
> On Wed, Oct 21, 2009 at 3:51 AM, John Lauro
> wrote:
> > You mention loopback interface. You could be running out of port
> numbers to
> >
nge under "netstat -s"
> -Original Message-
> From: David Birdsong [mailto:david.birds...@gmail.com]
> Sent: Wednesday, October 21, 2009 7:07 AM
> To: John Lauro
> Cc: haproxy
> Subject: Re: slow tcp handshake
>
> On Wed, Oct 21, 2009 at 3:51 AM, John Lau
You mention loopback interface. You could be running out of port numbers
for the connections.
What's your /proc/sys/net/ipv4/ip_local_port_range?
What does netstat -s | grep -i listen show on the server?
> -Original Message-
> From: David Birdsong [mailto:david.birds...@gmail.com]
>
I am starting to have problems with one of our servers behind haproxy. It's
busy, and it has the resources to handle more connections, but it is having a
bit of trouble with the incoming rate and is getting flagged down. I am
having trouble finding which /proc (linux 2.6) settings to tweak.
(These are fr
Original Message-
> From: Krzysztof Olędzki [mailto:o...@ans.pl]
> Sent: Wednesday, October 14, 2009 5:54 AM
> To: John Lauro
> Cc: haproxy@formilux.org
> Subject: Re: [ANNOUNCE] haproxy 1.4-dev4 and 1.3.21
>
> On 2009-10-14 10:47, John Lauro wrote:
> > Sorry to report,
Sorry to report, from 1.3.21:
Oct 13 23:36:43 haf1a kernel: haproxy[25428]: segfault at 19 ip
0041620f sp 7381ef60 error 4 in haproxy[40+3d000]
(I know, kind of old, as we were running 1.3.18 on this box, so not sure
which version the problem started)
Compiled with:
make T
freym...@gmail.com]
Sent: Friday, September 25, 2009 10:17 PM
To: John Lauro; geoffreym...@gmail.com
Cc: haproxy@formilux.org
Subject: Re: RE: HAProxy - Virtual Server + CentOS/RHEL 5.3
I have no idea what "divider=10" is... but I'm sure I'll figure it out as I
start getting mys
It works well. Don't forget divider=10 for even better performance.
From: geoffreym...@gmail.com [mailto:geoffreym...@gmail.com]
Sent: Friday, September 25, 2009 9:45 PM
To: haproxy@formilux.org
Subject: HAProxy - Virtual Server + CentOS/RHEL 5.3
Hello,
Does anyone know of any reason wh
other thing running is keepalived to manage the ip address for
> haproxy.
>
> On 9/3/09 10:16 AM, John Lauro wrote:
> > service iptables stop
> > should take care of it in Centos.
> >
> >
> > Although your lsmod doesn't make sense. It should be showing
service iptables stop
should take care of it in Centos.
Although your lsmod doesn't make sense. It should be showing ip_conntrack
and ip_tables and iptable_filter with a standard Centos and iptables. Even
dm_multipath and others that you are not interested in would be expected...
> -Orig
I don't think you can easily have two health checks. You could also do port
forwarding with iptables or inetd/xinetd and run the health check on a
different port. Stop the forwarding when you want maintenance mode.
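In haproxy terms, the forwarded-port check might look like this (addresses and ports are illustrative):

```haproxy
backend app
    # Traffic goes to port 80, but the health check hits 8080, where the
    # forwarded/custom responder listens; stop it to enter maintenance.
    server web1 10.0.0.11:80 check port 8080 inter 2s
```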
> -Original Message-
> From: Matt [mailto:mattmora...@gmail.com]
> Sen
> Is there something I need to change in my config. I set it as leastconn
> to balance traffic but it isn't. Can haproxy knows to transfer request
> to other back end when it knows it has a much more traffic compare to
> other server?
>
It can not transfer to a different server when you tie the
The biggest issue probably is that you are using cookies that will tie a
client to a server. For me, I noticed it often takes over 3 days to get the
first 80% of the traffic off if I mark a server as soft down as people never
reboot or close the browser. If you have more random traffic and less
r
Do you have haproxy between your web servers and the 3rd party? If not (ie:
only to your servers), perhaps that is what you should do. Trying to throttle
the maximum connections to your web servers sounds pointless given that it's
not a very good correlation to the traffic to the third party s
(Ignore the previous message that had this response replying to the wrong
message.)
I set mine to alert if the queue is ever non-zero, and for my graphs I just
use current sessions, and also total connections (graphed as delta / sec) for
connection rate.
I assume you normally have a queue during busy times if you want to graph it?
From: Evgeniy Sudyr [mailto:eject.in...@gmail.c
Nearly an extra .1 seems high, but to be fair it doesn’t appear you did much of
a test:
Number of clients running queries: 1
Average number of queries per client: 0
Simulating only 1 client, I wouldn’t expect any performance improvement, and
without doing any queries, you are
[mailto:dani...@chegg.com]
Sent: Thursday, July 23, 2009 6:30 PM
Cc: haproxy@formilux.org
Subject: Re: HAProxy and FreeBSD CARP failover
Good idea except ... that HAProxy server load-balances for a couple different
sites :(
- Original Message -
From: "John Lauro"
To: "D
Only bind to the port so it doesn’t matter if additional addresses are added or
removed.
From: Daniel Gentleman [mailto:dani...@chegg.com]
Sent: Thursday, July 23, 2009 6:13 PM
To: haproxy@formilux.org
Subject: HAProxy and FreeBSD CARP failover
Hi list.
I'd like to set up a redundant H
Are you certain there is no issue with the web server? I have seen (years
ago, prior to my use of haproxy) apache produce strange problems like this
for IE that Firefox was able to cope with, once its access_log file reached
2GB. On a busy server, that is easily reached in days or sooner, and
typ
I think there might be a better way, but you could run the check against a
different port. On that other port, you could have it run your custom check
and return an OK response if your check passes and fail if it doesn't.
From: Sanjeev Kumar [mailto:replysku...@gmail.com]
Sent: Friday,
This is what I use to reload:
haproxy -D -f /etc/lb-transparent-slave.cfg -sf $(pidof haproxy)
(Which uses pidof to look up the process id instead of reading it from a
file, but that shouldn't matter.)
The main problem is you are (-st) terminating (aborting) existing
connections instead of (-sf) finishing them
>
> And no request were found into webserver (netstat -ntap | grep :80)
>
> After few seconds: "503 Service Unavailable No server is available to
> handle
> this request. "
>
Can you ping your webserver from the haproxy box ok?
What does the following show from your webserver:
netstat -rn
Does
It's a little different config than I have, but it looks ok to me.
What's haproxy -vv give?
I have:
[r...@haf1 etc]# haproxy -vv
HA-Proxy version 1.3.15.7 2008/12/04
Copyright 2000-2008 Willy Tarreau
Build options :
TARGET = linux26
CPU = generic
CC = gcc
CFLAGS
You can reduce it by changing your check frequency. You can also use
iptables on Haproxy or the node in question to reject connections which will
then be sent elsewhere.
From: Brian Gupta [mailto:brian.gu...@gmail.com]
Sent: Sunday, April 26, 2009 7:20 AM
To: HAProxy User List
Subject: Confir
That would be nice, but I don’t think so (at least not completely). Using
“balance leastconn” will give the faster servers a little more as they will
clear their connections quicker.
From: Sihem [mailto:stfle...@yahoo.fr]
Sent: Wednesday, March 18, 2009 6:26 AM
To: haproxy@formilux.org
Sub
You need to explain a little more, as I am not understanding something.
Perhaps explain what you mean by VIP?
If they share the same single VIP at the same time, then why would you use
round-robin DNS? Round-robin is for multiple IP addresses...?
Also, if you do a virtual IP like Microsoft Windows does f
Not built into Haproxy, but you can use heartbeat or keepalived along with
haproxy for IP takeover on a pair of physical boxes (or VMs).
From: Scott Pinhorne [mailto:scott.pinho...@voxit.co.uk]
Sent: Tuesday, March 17, 2009 10:52 AM
To: haproxy@formilux.org
Subject: Multiple Proxies
Hi All
Mine don't appear to have that much difference. Are any of the servers
down, or maybe reaching their session limits? What do your retr and redis
look like?
From: Sun Yijiang [mailto:sunyiji...@gmail.com]
Sent: Tuesday, March 17, 2009 3:18 AM
To: kuan...@mail.51.com
Cc: haproxy@formilux.org
Su
Sorry for the off topic question, so feel free to reply directly. Can
anyone recommend a BGP package for linux. I have little experience with
BGP, and on the plus side I mainly just need to advertise a net (so a
simple default route for outgoing is all I need in the local routing table).
There
> - net.netfilter.nf_conntrack_max = 265535
> - net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
> => this proves that netfiler is indeed running on this machine
>and might be responsible for session drops. 265k sessions is
>very low for the large time_wait. It limits
> I still don't understand why people stick to heartbeat for things
> as simple as moving an IP address. Heartbeat is more of a clustering
> solution, with abilities to perform complex tasks.
>
> When it comes to just move an IP address between two machines an do
> nothing else, the VRRP protocol
A combination of weight and maxconn works well along with balance leastconn
to keep the ratios in sync, assuming they scale the same.
There are more options if weight/maxconn isn't good enough.
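A sketch of the weight/maxconn/leastconn combination (all names and numbers are illustrative, assuming the second server has roughly a quarter of the capacity):

```haproxy
backend app
    balance leastconn
    # Weights keep the ratio; maxconn caps each server:
    server big   10.0.0.11:80 weight 100 maxconn 200 check
    server small 10.0.0.12:80 weight 25  maxconn 50  check
```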
> -Original Message-
> From: Michal Krajcirovic [mailto:mic...@krajcirovic.cz]
> Sent: Wednes
How about if you just use a really large weight on the main servers, and a
really low one on the fallback, along with balance leastconn.
From: Karl Pietri [mailto:k...@slideshare.com]
Sent: Friday, February 20, 2009 5:24 PM
To: haproxy@formilux.org
Subject: priority servers in an instance
He
Put a - in front of the path in syslogd.conf.
Ie:
local0.*    -/mnt/log/haproxy_0.log
local1.*    -/mnt/log/haproxy_1.log
local2.*    -/mnt/log/haproxy_2.log
local3.*    -/mnt/log/haproxy_3.log
local4.*    -/mnt/log/haproxy_4.log
local5.*    -/mnt/log/haproxy_5.log
That will help a lot with your load. Without
Actually, the high sys time was probably from the lsof. I should have had
that run prior to vmstat, so it didn't get counted on the vmstat output.
The grep on /var/messages completed too quick to really catch much. That
said, your SYS time is a little high, especially after it finished. F
n't have a 32-bit kernel, I am out of ideas that would explain the
problem.
From: Michael Fortson [mailto:mfort...@gmail.com]
Sent: Thursday, February 12, 2009 11:23 PM
To: John Lauro
Cc: haproxy@formilux.org
Subject: Re: Reducing I/O load of logging
Sorry, forgot to answer the disk qu
> server webapp01-101 10.2.0.3:8101 minconn 1 maxconn 5 check
> inter 1s fastinter 200 rise 1 fall 1
> server webapp01-102 10.2.0.3:8102 minconn 1 maxconn 5 check
> inter 1s fastinter 200 rise 1 fall 1
> ( etc, for 80 instances over 5 servers)
I have had strange problems when minc
> I stopped logging so much in haproxy, but I get the same thing if I
> grep the nginx logs on this server: haproxy's mongrel backend checks
> start failing. I've noticed it only happens when using httpchk (or at
> least it happens much, much more quickly).
>
> Here's an iostat I ran -- the first
urce
contention problem.
> -Original Message-
> From: Michael Fortson [mailto:mfort...@gmail.com]
> Sent: Wednesday, February 11, 2009 8:38 PM
> To: John Lauro
> Cc: James Brady; haproxy@formilux.org
> Subject: Re: Reducing I/O load of logging
>
> Hi John,
>
>
It shouldn't be too hard for a machine to keep up with logging. How are you
logging? Standard syslog? Make sure you have a - in front of the filename.
How many connections per second are you logging?
Haven't done it with Haproxy, but have with other things that generate tons
of logs.
what you
Hello,
I am using version 1.3.15.7 with kernel 2.6.28.2 in TPROXY mode. (I think
getting transparent proxy working took 3 times as long as simply getting
Haproxy working).
I am having problems with loopback type definition now.
Under "defaults application TCP" I have: source ww.
Right, in order to have Haproxy check every request you have to use httpclose.
From the manual:
HAProxy does not yet support the HTTP keep-alive mode. So by default, if a
client communicates with a server in this mode, it will only analyze, log, and
process the first request of each connection.
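As a config fragment (placement in defaults is a guess; it can also go in a listen section):

```haproxy
defaults
    mode http
    # Without keep-alive support, force a close after each response so
    # every request is analyzed, logged, and processed:
    option httpclose
```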
If you put something like the following in global:
stats socket /var/lib/haproxy-stat mode 777
Here are some quick items I hacked together for zabbix (should be easy
enough to put into nagios, or mrtg, or whatever):
UserParameter=proxyconn,echo "show info" | /usr/local/bin/socat
/var/lib/hapr
e-
> From: Patrick Viet [mailto:patrick.v...@gmail.com]
> Sent: Saturday, January 31, 2009 10:39 PM
> To: John Lauro
> Cc: Haproxy
> Subject: Re: Client IPs logging and/or transparent
>
> I would rather say, patch haproxy so that it not only sends
> x-forwarded-fo
Hello,
Running mode tcp, in case that makes a difference for any comments, as I
know there are other options for http.
I need to preserve for auditing the IP address of the clients and be able to
associate it with a session. One problem, it appears the client IP and port
are logged, howeve
> If your CPU goes high, I suspect you're on a system which does not
> support
> a fast poller or that you have not enabled a scalable polling mechanism
> at haproxy build time. Could you please run "haproxy -vv" so that we
> try
> to find what is missing here ?
Sorry, wasn't able to look this u
Hello,
I am considering using haproxy with mysql. Basically one server, and one
backup server. Has anyone used haproxy with mysql? What were your
experiences (good and bad)? What values do you use for timeouts, etc.?
Thank you.
> -Original Message-
> From: Willy Tarreau [mailto:w...@1wt.eu]
>
> Hi John,
>
> On Wed, Jan 28, 2009 at 10:57:40AM -0500, John Lauro wrote:
> However, there's a workaround for this. You can tell haproxy that
> you want the connection to the server to be clo
Hello,
This is a relatively new setup (under a week), but had problems yesterday as
load increased.
This problem was reproducible in both the latest 1.2 (tried 1.2 after
problems) and 1.3.15.7. With large values for timeout for clitimeout (also
srvrtimeout, but to a lesser extent), I ran i
You must enable syslog to listen via IP (default is socket only). On
centos/redhat, modify /etc/sysconfig/syslog to include -r option, such as:
SYSLOGD_OPTIONS="-m 0 -r"
From: vaibhav pol [mailto:vaibhav4...@gmail.com]
Sent: Tuesday, January 27, 2009 6:34 AM
To: haproxy@formilux.org
Su
I am not sure it would be called a bad idea, just not an effective one...
don't expect it to help much when an ISP is down for only an hour. Most
clients do not honor low TTL values, especially if they are revisiting the
site without closing the browser.
I would like to hear anyone using anycast