Re: cookie-less sessions

2011-08-05 Thread Hank A. Paulson

On 8/5/11 3:01 PM, Baptiste wrote:

Hi Hank

Actually stick on URL param should work with clients which do not
support cookies.
Is the first reply a 30[12]?


So you are saying that stick on URL param reads the outgoing 302 and saves the 
URL param from it in the stick table on 1.5? If so, great, then the problem is 
solved. If it doesn't save it on the way out from the initial redirect, then it 
won't help.


Is the same supposed to happen with balance url_param on 1.4?
If not, I will switch to 1.5. If it is supposed to, it doesn't, afaict.



How is the user made aware of the jsid, or how is he supposed to send his
jsid to the server?


302 to the URL with the jsid URL param.

Thanks



Do you have an X-Forwarded-For header on your proxy, or can you set one up?

cheers




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Willy Tarreau
On Sat, Aug 06, 2011 at 02:42:45AM +0300, Piavlo wrote:
>  Well, certainly aws has its limitations, which force you to design a 
> very different infrastructure than you would in a normal datacenter 
> environment.
> IMHO this is the great thing about those limitations, as you are forced 
> to start thinking differently and end up using a set of well-known and 
> established tools to overcome those limitations.

Surely, getting rid of everything which had worked fine for ages and
limiting oneself to use lazy and naive approaches like DNS because
"it's probably good enough to offer high availability" is a way to think
differently. But it's not the way I conceive reliable infrastructures.

> I'm talking mainly about 
> monitoring/automation/deployment tools & centralized coordination 
> service tools - so that you can automatically react to any change in the 
> infrastructure.

Changes should not happen often, so you can expect that they come with
a minor cost. Or you have something which makes your servers die several
times a minute and you need to fix that before considering adding servers.

> With those tools you don't really care if some server ip changes - the 
> ip only changes if you stop and then start an ec2 instance.
> If you reboot an ec2 instance the ip does not change. But normally you 
> would not really stop/start an instance - that really happens only when 
> something bad happens to the instance, so that you need to reboot it, 
> but a reboot does not always work since there might be a hardware problem 
> on the server hosting this ec2 instance.
> So you need to stop it and then start it - when you start it, it will 
> start on a different hardware server.

Fine. In the real world, when a server is dead, one guy comes with a
master, reinstalls it on another hardware and restores its configuration.
The IP is taken back and everything magically works again. In the VPC you
should be able to do that too when you decide to replace a faulty instance.

> But you don't really need to do all this stuff manually. If some ec2 
> instance is sick, this is detected and propagated through the centralized 
> coordination service to the relevant parties.

Here I think you need to define "sick". For me, a "sick" server is one
that needs a stop/start or reboot sequence to be fine again. Otherwise
it's considered dead and needs at least repair, at worst replacement.
Repair is covered by high availability. In case of replacement, you can
keep the IP, so seen from the LB it's just a repair.

> Then you can decide to 
> start a service from a failed instance on another already running ec2 
> instance, or start a new instance that configures itself and starts the 
> service.

If the already running instance was OK, why was it not integrated in the
LB farm then ?

> The old failed instance can be just killed or suspended. (So a VPC or 
> normal datacenter will not help here - since the service will be running 
> on a different instance/server with a different ip - yes, you could use a 
> floating ip in a normal datacenter, but you would not want to do that for 
> every backend, especially when backends are automatically added/removed.

No but here you're already describing corner cases, I see a lot of "if"
here to reach that case, and at this point I think that a simple reload
is the smallest operation to complete the process !

> You would normally use a floating ip for the frontend). Then the service 
> is active again on another/new instance - this is again propagated through 
> the centralized coordination service. Then you automatically update the 
> needed stuff on the relevant instances - like, in this specific case, 
> updating /etc/hosts and restarting/reloading haproxy. (All I wanted was 
> to avoid the haproxy restart/reload - there is no technical problem at 
> all with doing the restart.) And of course all this is done automatically 
> without human intervention.

So you realize that you're saying you lose a server, you look for another
compatible server, you find one which is doing nothing useful, you decide
to install the service on it, you start it, you update all /etc/hosts and
the only thing you don't want to do is to reload a process, which represents
less than 0.01% of all the operations that have been performed automatically
for you ! I don't buy that, that does not make any sense to me, I'm sorry.
For me, it's comparable to the guy who would absolutely want to be able to
power his servers on batteries before moving the rack to another city, so
that he can avoid a shutdown+restart sequence which would kill his impressive
uptimes.

> From where I stand I see no unreliability problem with aws - a normal 
> datacenter is just as unreliable for me as aws.
> I don't need the normal datacenter or the VPC. The usage of those tools 
> and the other aws features makes aws much more attractive and reliable 
> than a normal datacenter.

Quite frankly, given the way you consider reliability, I fail to understand
why you insist on using a load balancer. Why not advertise all your servers
with th

Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Piavlo
Well, certainly aws has its limitations, which force you to design a 
very different infrastructure than you would in a normal datacenter 
environment.
IMHO this is the great thing about those limitations, as you are forced 
to start thinking differently and end up using a set of well-known and 
established tools to overcome those limitations. I'm talking mainly about 
monitoring/automation/deployment tools & centralized coordination 
service tools - so that you can automatically react to any change in the 
infrastructure.


With those tools you don't really care if some server ip changes - the 
ip only changes if you stop and then start an ec2 instance.
If you reboot an ec2 instance the ip does not change. But normally you 
would not really stop/start an instance - that really happens only when 
something bad happens to the instance, so that you need to reboot it, 
but a reboot does not always work since there might be a hardware problem 
on the server hosting this ec2 instance.
So you need to stop it and then start it - when you start it, it will 
start on a different hardware server.


But you don't really need to do all this stuff manually. If some ec2 
instance is sick, this is detected and propagated through the centralized 
coordination service to the relevant parties. Then you can decide to 
start a service from a failed instance on another already running ec2 
instance, or start a new instance that configures itself and starts the 
service. 
The old failed instance can be just killed or suspended. (So a VPC or 
normal datacenter will not help here - since the service will be running 
on a different instance/server with a different ip - yes, you could use a 
floating ip in a normal datacenter, but you would not want to do that for 
every backend, especially when backends are automatically added/removed. 
You would normally use a floating ip for the frontend). Then the service is 
active again on another/new instance - this is again propagated through 
the centralized coordination service. Then you automatically update the 
needed stuff on the relevant instances - like, in this specific case, 
updating /etc/hosts and restarting/reloading haproxy. (All I wanted was 
to avoid the haproxy restart/reload - there is no technical problem at 
all with doing the restart.) And of course all this is done automatically 
without human intervention.


From where I stand I see no unreliability problem with aws - a normal 
datacenter is just as unreliable for me as aws.
I don't need the normal datacenter or the VPC. The usage of those tools 
and the other aws features makes aws much more attractive and reliable 
than a normal datacenter.


The only really annoying thing about ec2 is that you can have only one 
ip per instance - this makes the HA stuff more difficult to implement 
and you have to design it differently than in a normal datacenter. AFAIU 
the aws VPC would not help there either - since VPC instances can still 
have only one ip and/or you can't reassign it to another ec2 instance.


Alex

On 08/05/2011 11:53 PM, Hank A. Paulson wrote:
I think the problem here is that the EC2 way of doing automatic server 
replacement is directly opposite to the normal and sane patterns of doing 
server changes in other environments. So someone on EC2 only is 
thinking this is a process to hook into and use, and others, like 
Willy, are thinking "wtf? why would you do this" - I don't think there 
will be much common ground to be found.


Did someone already mention the idea of a soft restart after some 
external process notices a dns/ip mapping change? Does a soft restart 
(-sf) re-read the hosts file or redo server dns name lookups? 
Presumably, your instances should not restart so frequently that 
simple soft restarts would become a problem - afaik.


On 8/5/11 1:42 PM, Willy Tarreau wrote:

On Fri, Aug 05, 2011 at 11:11:50PM +0300, Piavlo wrote:

It's not a matter of config option. You're supposed to run haproxy
inside a chroot. It will then not have access to the resolver.

There are simple ways to make the resolver work inside chroot without
making the chroot less secure.


I don't know any such simple way. If you're in a chroot, you have no
FS access so you can't use resolv.conf, nsswitch.conf, nor even load
the dynamic libs that are needed for that. The only thing you can do
then is to implement your own resolver and maintain a second config
for this one. This is not what I call a simple way.


  I could ask the question the other direction : why try to resolve a
name to IP when a check fails, there is no reason why a server would
have its address changed without the admin being responsible for it.

I don't agree that admin is supposed to be responsible for it directly
at all.


So you're saying that you find it normal that a *server* changes its IP
address without the admin's consent ? I'm sorry but we'll never reach
an agreement there.


Say a backend server crashes/enters a bad state - this is detected and a new
ec2 instance is automatically spawned and autoconfigured to
replace the failed backend ec2 instance - which 

Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Baptiste
On Fri, Aug 5, 2011 at 11:58 PM, Willy Tarreau  wrote:
> Hi Baptiste,
>
> On Fri, Aug 05, 2011 at 11:53:40PM +0200, Baptiste wrote:
>> Or using some kind of haproxy conf template with some keyword you
>> replace using sed with IPs you would get from the hosts file?
>> with inotify, you can get updated each time hosts file change, then
>> you generate a new haproxy conf from your template and you ask haproxy
>> to reload it :)
>
> Once again, if the host is in /etc/hosts, then you don't need to touch
> the config anymore. Simply reload it so that it resolves the hosts
> again.
>
> cheers,
> Willy
>
>

Why make things easy when you can make them complicated

cheers



Re: cookie-less sessions

2011-08-05 Thread Baptiste
Hi Hank

Actually stick on URL param should work with clients which do not
support cookies.
Is the first reply a 30[12]?
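
A minimal sketch of what that could look like on 1.5 (server names,
addresses and table sizes are invented here; the jsid parameter comes from
this thread, and whether the table actually gets populated from the
server's redirect response is worth verifying against the 1.5 docs):

```haproxy
backend app
    balance roundrobin
    # one stick-table entry per jsid value seen in the URL
    stick-table type string len 32 size 200k expire 30m
    stick on url_param(jsid)
    server s1 10.0.0.1:8080 check
    server s2 10.0.0.2:8080 check
```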

How is the user made aware of the jsid, or how is he supposed to send his
jsid to the server?

Do you have an X-Forwarded-For header on your proxy, or can you set one up?

cheers



Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Willy Tarreau
Hi Baptiste,

On Fri, Aug 05, 2011 at 11:53:40PM +0200, Baptiste wrote:
> Or using some kind of haproxy conf template with some keyword you
> replace using sed with IPs you would get from the hosts file?
> with inotify, you can get updated each time hosts file change, then
> you generate a new haproxy conf from your template and you ask haproxy
> to reload it :)

Once again, if the host is in /etc/hosts, then you don't need to touch
the config anymore. Simply reload it so that it resolves the hosts
again.
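
As a sketch of that approach (host and file names invented for
illustration), the config simply references the servers by name, which is
resolved through /etc/hosts each time the configuration is loaded:

```haproxy
backend app
    # "app1"/"app2" resolve via /etc/hosts at (re)load time, not at run time
    server app1 app1:8080 check
    server app2 app2:8080 check
```

The reload itself would then be something like
`haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)`,
where -sf asks the old process to finish its existing sessions and exit.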

cheers,
Willy




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Willy Tarreau
Hi Julien,

On Fri, Aug 05, 2011 at 05:07:36PM -0400, Julien Vehent wrote:
> Willy: EC2 gives a different kind of flexibility but requires to think 
> differently. So, yeah, they do crazy stuffs like randomizing the LAN IPs 
> allocations. But people have been complaining about that so much that 
> AWS created the Virtual Private Cloud.
> That's basically a VLAN that behaves like your own piece of network. 
> You control the IPs in the subnet of your choice.

I'm aware of that, and that's why I consider that, today, accepting
that a server runs a random IP address does not make any sense.

> It's not perfect yet 
> (you can't reassign an IP from one host to another, so no keepalived), but 
> it gives better control over the infrastructure.

They made simple things complicated in my opinion, but probably they
got surprised by their success. At least at VPS.net you can move your
addresses between machines and keepalived works.

> EC2 was initially designed to be used by developers via APIs, and only 
> after they realized how big this was did they need to implement more 
> classic & reliable features.
> 
> All in all, I like it, it has tons of advantages, but you waste a sh** 
> lot of time dealing with basic problems made complicated. Like this one.

Exactly, but they're not the only ones and I'm still amazed that people
spend their time and money where they think they're not well served. Some
users' needs are very well suited there, so that's fine. If some are not
happy with the limitations, there is no point jailing themselves in an
environment which is not compatible with basic networking principles.

> >>>Also, in your case it would not fix the issue : resolving when the
> >>>server goes down will bring you the old address, and only after
> >>>caches expires it would bring the new one.
> >>If /etc/hosts is updated locally there is no need to wait for cache
> >>expiration.
> >
> >1) /etc/hosts is out of reach in a chroot
> >2) it's out of question to re-read /etc/hosts before every 
> >connection.
> >3) if you don't recheck before every connection, you can connect to 
> >the
> >   wrong place due to the time it takes to propagate changes.
> >
> 
> Why don't you edit the haproxy conf directly and reload it ? If you 
> have the new IP and are going to update the /etc/hosts, what is stopping 
> you from doing a sed on the backend's ip in haproxy.cfg ?

I'd say that if the conf is reloaded, there is no need for doing a sed on
it, as the address is supposed to be in /etc/hosts already. So basically
we're just trying to find how to avoid an automatic haproxy reload every
time a server dies... Makes me wonder how many times a second a server
dies...

> Or, you could just run in a VPC and stop doing weird stuff with your 
> networking ;)

That's exactly my point when I'm saying "configure the servers so that
they don't get a random address" :-)

Cheers,
Willy




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Baptiste
> Why don't you edit the haproxy conf directly and reload it ? If you have the
> new IP and are going to update the /etc/hosts, what is stopping you from
> doing a sed on the backend's ip in haproxy.cfg ?
>
>
> Or, you could just run in a VPC and stop doing weird stuff with your
> networking ;)
>
>
> Julien
>


Or using some kind of haproxy conf template with some keyword you
replace using sed with IPs you would get from the hosts file?
with inotify, you can get updated each time hosts file change, then
you generate a new haproxy conf from your template and you ask haproxy
to reload it :)
brilliant !
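
That render step might look like this minimal sketch (the host name "app1",
the @APP1_IP@ placeholder and the inline data are all invented; in practice
the hosts data would come from /etc/hosts and the template from a copy of
haproxy.cfg):

```shell
# Inline stand-ins for /etc/hosts and the config template.
hosts='10.0.1.23 app1
10.0.1.24 app2'
tpl='server app1 @APP1_IP@:8080 check'

# Pull app1's current address out of the hosts data,
ip=$(printf '%s\n' "$hosts" | awk '$2 == "app1" { print $1; exit }')
# then substitute it into the template to produce the live config line.
printf '%s\n' "$tpl" | sed "s/@APP1_IP@/${ip}/"
# prints: server app1 10.0.1.23:8080 check
```

Wrapped in an `inotifywait -e modify /etc/hosts` loop and followed by a
`haproxy -f haproxy.cfg -sf $(cat /var/run/haproxy.pid)` soft reload, that
would be the whole pipeline described above.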

cheers



Re: cookie-less sessions

2011-08-05 Thread Hank A. Paulson

Sorry, I meant working with balance url_param hashing

On 8/5/11 2:13 PM, Hank A. Paulson wrote:

I am going around again about cookie-less sessions and just want to double
check that nothing works for them :)

In 1.5 there is the stick on url param option, but afaict this and everything
else won't work in a situation where you have two things:
1 - clients that don't support cookies.
2 - servers that don't share session info.

The problem is a user goes to a server via round robin or some other algo,
and they arrive at server #3. Server #3 creates session X and redirs (somehow)
that user to a page with a url param of jsid=X. The user goes to that url, but
haproxy has no way to remember which server served that session and gave it
the session id of X, so it decides to send the request to server #5; of course,
server #5 has no idea about session X, creates session Y, and replies with a
url param of jsid=Y.

I have this "working" on 1.4 with appsession hashing, only because haproxy
spins the user through several sessions (as described above) until one of them
finally hashes to the same server as the previous session id; then it
sticks - for a while, and then when there is a long url param before the
session id it switches; not sure why yet. Not perfect, but better than hashing
by source, since the bulk of the traffic comes from a single-ip upstream proxy
and causes one server to be massively overloaded.

Thanks for any ideas or confirmation that this is not solvable.





cookie-less sessions

2011-08-05 Thread Hank A. Paulson
I am going around again about cookie-less sessions and just want to double 
check that nothing works for them :)


In 1.5 there is the stick on url param option, but afaict this and everything 
else won't work in a situation where you have two things:

1 - clients that don't support cookies.
2 - servers that don't share session info.

The problem is a user goes to a server via round robin or some other algo, 
and they arrive at server #3. Server #3 creates session X and redirs (somehow) 
that user to a page with a url param of jsid=X. The user goes to that url, but 
haproxy has no way to remember which server served that session and gave it 
the session id of X, so it decides to send the request to server #5; of course, 
server #5 has no idea about session X, creates session Y, and replies with a 
url param of jsid=Y.


I have this "working" on 1.4 with appsession hashing, only because haproxy 
spins the user through several sessions (as described above) until one of them 
finally hashes to the same server as the previous session id; then it 
sticks - for a while, and then when there is a long url param before the 
session id it switches; not sure why yet. Not perfect, but better than hashing 
by source, since the bulk of the traffic comes from a single-ip upstream proxy 
and causes one server to be massively overloaded.
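
For reference, that 1.4 setup would be along these lines (a sketch only:
the server names and the len/timeout values are placeholders, and whether
`mode query-string` behaves as hoped for your 1.4 version should be
checked against the docs):

```haproxy
backend app
    # hash on the jsid URL parameter when present, else round robin
    balance url_param jsid
    # appsession sticks on the jsid value; "mode query-string" makes it
    # look in the URL's query string rather than in a cookie, and
    # "request-learn" lets it pick the value up from requests as well
    appsession jsid len 32 timeout 30m request-learn mode query-string
    server s1 10.0.0.1:8080 check
    server s2 10.0.0.2:8080 check
```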


Thanks for any ideas or confirmation that this is not solvable.



Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Julien Vehent


On Fri, 5 Aug 2011 22:42:08 +0200, Willy Tarreau wrote:

On Fri, Aug 05, 2011 at 11:11:50PM +0300, Piavlo wrote:

> >It's not a matter of config option. You're supposed to run haproxy
> >inside a chroot. It will then not have access to the resolver.
> There are simple ways to make the resolver work inside chroot without
> making the chroot less secure.

I don't know any such simple way. If you're in a chroot, you have no
FS access so you can't use resolv.conf, nsswitch.conf, nor even load
the dynamic libs that are needed for that. The only thing you can do
then is to implement your own resolver and maintain a second config
for this one. This is not what I call a simple way.

> >  I could ask the question the other direction : why try to resolve a
> >name to IP when a check fails, there is no reason why a server would
> >have its address changed without the admin being responsible for it.
> I don't agree that admin is supposed to be responsible for it directly
> at all.

So you're saying that you find it normal that a *server* changes its IP
address without the admin's consent ? I'm sorry but we'll never reach
an agreement there.



Willy: EC2 gives a different kind of flexibility but requires thinking 
differently. So, yeah, they do crazy stuff like randomizing the LAN IP 
allocations. But people have been complaining about that so much that 
AWS created the Virtual Private Cloud.
That's basically a VLAN that behaves like your own piece of network. 
You control the IPs in the subnet of your choice. It's not perfect yet 
(you can't reassign an IP from one host to another, so no keepalived), but 
it gives better control over the infrastructure.


EC2 was initially designed to be used by developers via APIs, and only 
after they realized how big this was did they need to implement more 
classic & reliable features.


All in all, I like it, it has tons of advantages, but you waste a sh** 
lot of time dealing with basic problems made complicated. Like this one.




> >Also, in your case it would not fix the issue : resolving when the
> >server goes down will bring you the old address, and only after
> >caches expires it would bring the new one.
> If /etc/hosts is updated locally there is no need to wait for cache
> expiration.

1) /etc/hosts is out of reach in a chroot
2) it's out of question to re-read /etc/hosts before every connection.
3) if you don't recheck before every connection, you can connect to the
   wrong place due to the time it takes to propagate changes.



Why don't you edit the haproxy conf directly and reload it ? If you 
have the new IP and are going to update the /etc/hosts, what is stopping 
you from doing a sed on the backend's ip in haproxy.cfg ?



Or, you could just run in a VPC and stop doing weird stuff with your 
networking ;)



Julien





Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Hank A. Paulson
I think the problem here is that the EC2 way of doing automatic server 
replacement is directly opposite to the normal and sane patterns of doing 
server changes in other environments. So someone on EC2 only is thinking 
this is a process to hook into and use, and others, like Willy, are thinking 
"wtf? why would you do this" - I don't think there will be much common 
ground to be found.


Did someone already mention the idea of a soft restart after some external 
process notices a dns/ip mapping change? Does a soft restart (-sf) re-read the 
hosts file or redo server dns name lookups? Presumably, your instances should 
not restart so frequently that simple soft restarts would become a problem - 
afaik.


On 8/5/11 1:42 PM, Willy Tarreau wrote:

On Fri, Aug 05, 2011 at 11:11:50PM +0300, Piavlo wrote:

It's not a matter of config option. You're supposed to run haproxy
inside a chroot. It will then not have access to the resolver.

There are simple ways to make the resolver work inside chroot without
making the chroot less secure.


I don't know any such simple way. If you're in a chroot, you have no
FS access so you can't use resolv.conf, nsswitch.conf, nor even load
the dynamic libs that are needed for that. The only thing you can do
then is to implement your own resolver and maintain a second config
for this one. This is not what I call a simple way.


  I could ask the question the other direction : why try to resolve a
name to IP when a check fails, there is no reason why a server would
have its address changed without the admin being responsible for it.

I don't agree that admin is supposed to be responsible for it directly
at all.


So you're saying that you find it normal that a *server* changes its IP
address without the admin's consent ? I'm sorry but we'll never reach
an agreement there.


Say a backend server crashes/enters a bad state - this is detected and a new
ec2 instance is automatically spawned and autoconfigured to
replace the failed backend ec2 instance - which is optionally terminated.
The /etc/hosts of all relevant ec2 instances is auto-updated (or DNS
with a 60-second ttl is updated - by the way, the 60-second ttl works
great within ec2). There is no admin person involved - all is done
automatically.


That's what I'm explaining from the beginning : this *process* is totally
broken and does not fit in any way in what I'd call common practices :

   - a failed server is replaced with another server with a different IP
 address. It could very well have kept the same IP address. If servers
 in datacenters had their IP address randomly changed upon every reboot
 it would require many more men to handle them.

   - you're not even shocked that something changes the /etc/hosts of all of
 your servers when any server crashes. That's something I would never
 accept either. Of course, the only reason for this stupidity is the
 point above.

   - on top of that the DNS is updated every 60 seconds. That means that
 any process detecting the failure faster than the DNS updates will
 act based on the old IP address and possibly never refresh it. Once
 again, this is an ugly design imposed by the first point.

I'm sorry Piavlo, but I can't accept such mechanisms. They are broken
from scratch, there is no other word. A server's admin should be the
only person who decides to change the server's address. Once you decide
to let stupid process change everything below you, you can't expect
some software to guess things for you and to automagically recover from
the mess.


Also, in your case it would not fix the issue : resolving when the
server goes down will bring you the old address, and only after
caches expires it would bring the new one.

If /etc/hosts is updated locally there is no need to wait for cache
expiration.


1) /etc/hosts is out of reach in a chroot
2) it's out of question to re-read /etc/hosts before every connection.
3) if you don't recheck before every connection, you can connect to the
wrong place due to the time it takes to propagate changes.


And if /etc/hosts is auto updated by appropriate tool - going one more
step of restarting/reloading haproxy is not a problem at all - but this
is what I want to avoid.


If you want to avoid this mess, simply configure your servers not to
change address with the phases of the moon.


If instead, for example, I could send a command to the haproxy control socket
to re-resolve all the names (or better, just a specific name) configured in
haproxy - it would be much better - since /etc/hosts is already
updated, it would resolve to the correct ip address.


It could not because it's not supposed to be present in the empty chroot.


BTW afaiu adding/removing backends/frontends dynamically on the fly
through some api / socket - is not something that is ever planned to be
supported in haproxy?


At the moment it's not planned because it requires to dynamically change
limits that are set upon startup, such as the max memory and max FD number.
M

Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Willy Tarreau
On Fri, Aug 05, 2011 at 11:11:50PM +0300, Piavlo wrote:
> >It's not a matter of config option. You're supposed to run haproxy
> >inside a chroot. It will then not have access to the resolver.
> There are simple ways to make the resolver work inside chroot without 
> making the chroot less secure.

I don't know any such simple way. If you're in a chroot, you have no
FS access so you can't use resolv.conf, nsswitch.conf, nor even load
the dynamic libs that are needed for that. The only thing you can do
then is to implement your own resolver and maintain a second config
for this one. This is not what I call a simple way.

> >  I could ask the question the other direction : why try to resolve a
> >name to IP when a check fails, there is no reason why a server would
> >have its address changed without the admin being responsible for it.
> I don't agree that admin is supposed to be responsible for it directly 
> at all.

So you're saying that you find it normal that a *server* changes its IP
address without the admin's consent ? I'm sorry but we'll never reach
an agreement there.

> Say a backend server crashes/enters a bad state - this is detected and a new 
> ec2 instance is automatically spawned and autoconfigured to
> replace the failed backend ec2 instance - which is optionally terminated.
> The /etc/hosts of all relevant ec2 instances is auto-updated (or DNS 
> with a 60-second ttl is updated - by the way, the 60-second ttl works 
> great within ec2). There is no admin person involved - all is done 
> automatically.

That's what I'm explaining from the beginning : this *process* is totally
broken and does not fit in any way in what I'd call common practices :

  - a failed server is replaced with another server with a different IP
address. It could very well have kept the same IP address. If servers
in datacenters had their IP address randomly changed upon every reboot
it would require many more men to handle them.

  - you're not even shocked that something changes the /etc/hosts of all of
your servers when any server crashes. That's something I would never
accept either. Of course, the only reason for this stupidity is the
point above.

  - on top of that the DNS is updated every 60 seconds. That means that
any process detecting the failure faster than the DNS updates will
act based on the old IP address and possibly never refresh it. Once
again, this is an ugly design imposed by the first point.

I'm sorry Piavlo, but I can't accept such mechanisms. They are broken
from scratch, there is no other word. A server's admin should be the
only person who decides to change the server's address. Once you decide
to let stupid process change everything below you, you can't expect
some software to guess things for you and to automagically recover from
the mess.

> >Also, in your case it would not fix the issue : resolving when the
> >server goes down will bring you the old address, and only after
> >caches expires it would bring the new one.
> If /etc/hosts is updated locally there is no need to wait for cache 
> expiration.

1) /etc/hosts is out of reach in a chroot
2) it's out of question to re-read /etc/hosts before every connection.
3) if you don't recheck before every connection, you can connect to the
   wrong place due to the time it takes to propagate changes.

> And if /etc/hosts is auto updated by appropriate tool - going one more 
> step of restarting/reloading haproxy is not a problem at all - but this 
> is what I want to avoid.

If you want to avoid this mess, simply configure your servers not to
change address with the phases of the moon.

> If instead, for example, I could send a command to the haproxy control socket 
> to re-resolve all the names (or better, just a specific name) configured in 
> haproxy - it would be much better - since /etc/hosts is already 
> updated, it would resolve to the correct ip address.

It could not because it's not supposed to be present in the empty chroot.

> BTW afaiu adding/removing backends/frontends dynamically on the fly 
> through some api / socket - is not something that is ever planned to be 
> supported in haproxy?

At the moment it's not planned because it requires to dynamically change
limits that are set upon startup, such as the max memory and max FD number.
Maybe in the future we'll be able to start with a configurable margin to
add some servers, but that's not planned right now. Changing a server's
address by hand might be much easier to implement though, even though it
will obviously break some protocols (e.g. RDP). But it could fit your
use case.
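To make the "changing a server's address by hand" idea concrete, here is a
minimal sketch, assuming an admin-level stats socket is configured. The
runtime command shown did not exist at the time of writing and was only
added in later haproxy releases, so treat the exact syntax, backend and
server names as illustrative:

```
# haproxy.cfg: expose an admin-level control socket
global
    stats socket /var/run/haproxy.sock level admin

# Later haproxy releases (1.6+) added a runtime command along these lines
# (illustrative backend/server names):
#   echo "set server bk_app/srv1 addr 10.0.1.23" | socat stdio /var/run/haproxy.sock
```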

Regards,
Willy




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Piavlo

On 08/05/2011 06:53 PM, Willy Tarreau wrote:

On Fri, Aug 05, 2011 at 11:17:16AM +0300, Piavlo wrote:

But why do a reload of haproxy in other situations (much more common in
my use case, and lose statistics and possibly some connections) if there
could be a config option that tells haproxy to re-resolve name to ip
when a backend health check fails?

It's not a matter of config option. You're supposed to run haproxy
inside a chroot. It will then not have access to the resolver.

There are simple ways to make the resolver work inside a chroot without
making the chroot less secure.

  I could ask the question the other direction : why try to resolve a
name to IP when a check fails? There is no reason why a server would
have its address changed without the admin being responsible for it.

I don't agree that the admin is supposed to be responsible for it directly 
at all.
Say a backend server crashes/enters a bad state - this is detected and a new 
ec2 instance is automatically spawned and autoconfigured to
replace the failed backend ec2 instance, which is optionally terminated. 
The /etc/hosts of all relevant ec2 instances is auto-updated (or DNS 
with a 60-second TTL is updated - by the way, the 60-second TTL works 
great within ec2). There is no admin person involved - all is done 
automatically.
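As a concrete illustration of the auto-update flow described above, here is a
minimal sketch of an /etc/hosts rewriter, assuming the usual
"IP name [aliases...]" one-entry-per-line format. The function name and
behaviour are illustrative, not taken from any real tool:

```python
def update_hosts(lines, hostname, new_ip):
    """Return hosts-file lines with `hostname` remapped to `new_ip`.

    Lines that do not mention `hostname` are left untouched; if the
    name is absent, a new entry is appended at the end.
    """
    out, found = [], False
    for line in lines:
        fields = line.split()
        # fields[0] is the IP, the remaining fields are names/aliases for it
        if len(fields) >= 2 and hostname in fields[1:]:
            out.append(new_ip + " " + " ".join(fields[1:]))
            found = True
        else:
            out.append(line)
    if not found:
        out.append(new_ip + " " + hostname)
    return out
```

After rewriting the file, the tooling would still need to get each consumer
to pick up the change (e.g. reload haproxy), which is exactly the step under
discussion here.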



Also, in your case it would not fix the issue : resolving when the
server goes down will bring you the old address, and only after
caches expire it would bring the new one.
If /etc/hosts is updated locally there is no need to wait for cache 
expiration.
And if /etc/hosts is auto-updated by an appropriate tool - going one more 
step of restarting/reloading haproxy is not a problem at all - but this 
is what I want to avoid.
If instead, for example, I could send a command to the haproxy control socket 
to re-resolve all the names (or better, just a specific name) configured in 
haproxy - it would be much better, since /etc/hosts is already 
updated and it would resolve to the correct ip address.


BTW, afaiu, adding/removing backends/frontends dynamically on the fly 
through some api/socket is not something that is ever planned to be 
supported in haproxy?


Thanks
Alex

  In the meantime, someone
else might have got the old address before you have a new one, so
this means that it is still possible that only the old address is
used.



There is no way to reach a reliable behaviour with unreliable configuration
processes.

Regards,
Willy







Re: unknown keyword 'userlist' in '****' section

2011-08-05 Thread James Bardin
On Fri, Aug 5, 2011 at 1:10 PM, Tom Sztur  wrote:
> correction,
> Version is HA-Proxy version 1.3.15.2

Userlist is not an option in 1.3.
See your version's documentation:
http://haproxy.1wt.eu/download/1.3/doc/configuration.txt
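On versions that do support it (1.4 and later), userlist is a top-level
section, which also explains the "out of section" errors when it is placed
inside frontend/backend/defaults. A minimal sketch, with illustrative names
and an illustrative password:

```
# top-level section, outside any frontend/backend/defaults block
userlist admins
    user tom insecure-password changeme

backend app
    # require HTTP auth against the list above
    acl auth_ok http_auth(admins)
    http-request auth realm MyApp if !auth_ok
    server web1 192.168.0.10:80 check
```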



Re: unknown keyword 'userlist' in '****' section

2011-08-05 Thread Tom Sztur
correction,
Version is HA-Proxy version 1.3.15.2

On Fri, Aug 5, 2011 at 1:09 PM, Tom Sztur  wrote:

> Hello,
> So I'm trying to set up a userlist in haproxy.cfg using the following
> instructions:
> http://code.google.com/p/haproxy-docs/wiki/Userlists
>
> however, no matter where in the .cfg file I put the userlist directive, it
> keeps giving me errors:
> unknown keyword 'userlist' in 'frontend' section
> unknown keyword 'userlist' out of section.
> unknown keyword 'userlist' in 'backend' section
> unknown keyword 'userlist' in 'defaults' section
> unknown keyword 'userlist' in 'global' section
>
> Version of Pound is 2.4.3
> Can someone help?
>
> TIA
>


unknown keyword 'userlist' in '****' section

2011-08-05 Thread Tom Sztur
Hello,
So I'm trying to set up a userlist in haproxy.cfg using the following
instructions:
http://code.google.com/p/haproxy-docs/wiki/Userlists

however, no matter where in the .cfg file I put the userlist directive, it
keeps giving me errors:
unknown keyword 'userlist' in 'frontend' section
unknown keyword 'userlist' out of section.
unknown keyword 'userlist' in 'backend' section
unknown keyword 'userlist' in 'defaults' section
unknown keyword 'userlist' in 'global' section

Version of Pound is 2.4.3
Can someone help?

TIA


[ANNOUNCE] haproxy 1.4.16

2011-08-05 Thread Willy Tarreau
Hi all,

Since 1.4.15 was released 2 months ago, very few minor bugs were detected.
They were so minor that it was worth waiting for other ones to be found,
but after some time, there wasn't any point making users wait any more,
so I released 1.4.16.

A few minor improvements were also made based on feedback from users. Among
the changes, MySQL checks now support Mysqld versions after 5.5, health
check support for multi-packet responses has been fixed, the HTTP 200 status
can be configured for monitor responses, a new http-no-delay option has been
added to work around buggy HTTP implementations that assume packet-based
transfers, chunked-encoded transfers have been optimised a bit, the stats
interface now supports URL-encoded forms, and halog correctly handles
truncated files.

Quite honestly, there is no real emergency to upgrade, but it makes sense
for new deployments and for packagers.

The usual links apply and I have even built both the Linux and Solaris
binaries :

site index : http://haproxy.1wt.eu/ 
sources: http://haproxy.1wt.eu/download/1.4/src/
changelog  : http://haproxy.1wt.eu/download/1.4/src/CHANGELOG   
binaries   : http://haproxy.1wt.eu/download/1.4/bin/

I'll check if some backports are needed and will issue a 1.3 soon too.

Have fun,
Willy




Re: Fwd: erratic X-Forwarded-For patch for stunnel

2011-08-05 Thread Willy Tarreau
On Fri, Aug 05, 2011 at 03:21:31PM +0200, Damien Hardy wrote:
> (this ML needs a reply-to header :)

No, because reply-to makes it harder to reply to individual people,
and incites responders to reply only to the list, which is the best
way to lose track of threads, as most of us are not constantly watching
the list.

Thanks for the forward anyway ;-)

Willy




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Willy Tarreau
On Fri, Aug 05, 2011 at 11:17:16AM +0300, Piavlo wrote:
> But why do a reload of haproxy in other situations (much more common in 
> my use case, and lose statistics and possibly some connections) if there 
> could be a config option that tells haproxy to re-resolve name to ip 
> when a backend health check fails?

It's not a matter of config option. You're supposed to run haproxy
inside a chroot. It will then not have access to the resolver. I
could ask the question the other direction : why try to resolve a
name to IP when a check fails? There is no reason why a server would
have its address changed without the admin being responsible for it.

Also, in your case it would not fix the issue : resolving when the
server goes down will bring you the old address, and only after
caches expire will it bring the new one. In the meantime, someone
else might have got the old address before you have a new one, so
this means that it is still possible that only the old address is
used.

There is no way to reach a reliable behaviour with unreliable configuration
processes.

Regards,
Willy




Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Julien Vehent

On Fri, 05 Aug 2011 11:17:16 +0300, Piavlo wrote:

On 08/05/2011 06:51 AM, Julien Vehent wrote:


On Fri, 05 Aug 2011 01:08:22 +0300, Piavlo wrote:

Hi Jens,

I'm using names which resolve to internal EC2 addresses in haproxy
configs - the /etc/hosts of all instances are auto-updated when a new
instance is added/removed.
But the problem manifests when the instance is stopped and then
started - this makes the internal ip change.
I can also use DNS CNAMEs to the public instance ip with very low TTL -
which get auto-updated when the instance boots, by using route53 - but
it's still the same problem - the ip changes in DNS and not in /etc/hosts
(getnameinfo does not really care where the name is resolved from) -
in both cases haproxy will not know it has changed, since it probably
uses getnameinfo only once on startup/reload and never later rechecks
if the ip has changed.



Hi,

Haproxy resolves names into addresses at startup, so using names is 
just an ugly, and probably confusing, way to define an ip address.

Names are less confusing and less ugly for me.  I understand I can
use a combination of an automation tool like puppet/chef with a
zookeeper-like tool to rebuild the haproxy configuration and reload
haproxy - in some situations, like add/removal of a backend server,
that's unavoidable.
But why do a reload of haproxy in other situations (much more common
in my use case, and lose statistics and possibly some connections) if
there could be a config option that tells haproxy to re-resolve name
to ip when a backend health check fails?


The only reliable way to work with IP addresses in EC2, without 
messing with the whole /etc/hosts and applications reload, is to use 
static LAN addresses, like in a real network. And you can do that just 
fine in a VPC environment.


DO NOT use elastic IPs for internal traffic. Elastic IPs are routed 
through the external network of EC2, so you will get charged $$$, your 
interconnections will be slower and you don't even know where the hell 
your packets are going.



I never said that I'm using elastic IPs. But I don't think it matters
if it's an elastic/static ip or just a normal public ip which can/will
change on stop/start of an ec2 instance.
There is a well-known trick in EC2 - if you do a dns lookup on a public
ec2 hostname from within ec2, you will get an internal ip instead of
a public ip.
So you are not charged, because you are effectively working with
internal ip's - if you have a CNAME to public A records, it ends up
resolving to an internal ec2 ip from within ec2 and to the public ip
from outside of ec2.



That's the thing: if you run your EC2 environment in a VPC and not in 
the regular EC2 infrastructure, you have full control of the subnet, you 
choose the private IP of each of your instances, and the IPs are 
preserved across reboots.



Julien




Fwd: erratic X-Forwarded-For patch for stunnel

2011-08-05 Thread Damien Hardy
(this ML needs a reply-to header :)

For the conclusion :

-- Forwarded message --
From: Damien Hardy 
Date: 2011/8/5
Subject: Re: erratic X-Forwarded-For patch for stunnel
To: Guillaume Bourque 


Good point for you.

I was running with option http-server-close as global configuration.
Now with option httpclose it gets the X-Forwarded-For for every request.

Thanks a lot.

-- 
Damien


2011/8/5 Guillaume Bourque 

> Hi,
>
> are you using httpclose in haproxy in the frontend for the ssl portion of
> haproxy ?   Willy has talked about other ways to solve this yesterday, but just
> to do a test you could put option httpclose in this frontend.
>
> "most of the time there is only 192.168.134.222 the IP of haproxy)" It's
> the ip of stunnel too, I imagine ?
>
> You can see more than one X-Forwarded-For in the log, it's cumulative...  But
> you can tell haproxy not to include X-Forwarded-For when stunnel already put
> the client ip, with an option like this:
>
> option forwardfor except 10.222.0.0/27
>
> for me this is the subnet of the ssl offloader 10.222.0.0/27.
>
> So the way I understand it:
>
> Client -> stunnel (adds the client ip to X-For) -> haproxy (will not add X-For) -> apache1
>                                                                                 -> apache2
>
> Client ------------------------------------------> haproxy (will add X-For) -----> apache1
>                                                                                 -> apache2
>
> Then you need to decide if you will be using option httpclose or what was
> discussed yesterday.
>
> From Willy:
>
> So if you need stunnel to provide the IP to haproxy, you have two
> solutions :
>   - either disable keep-alive using "option httpclose" on haproxy so that
> it forces stunnel to reopen a new connection for each request and to
> add the header to each of them ;
>
>   - or make use of the "send-proxy" patch for stunnel, which is compatible
> with the "accept-proxy" feature of haproxy. This is the preferred solution
> because instead of mangling the beginning of the HTTP request, stunnel
> then informs haproxy about the source address in an out-of-band fashion,
> which makes it compatible with keep-alive.
>
>
> Bye.
>
> On 2011-08-05 05:45, Damien Hardy wrote:
>
> Hello,
>
> I patched the debian stunnel4 package for squeeze
>
> # aptitude install devscripts build-essential fakeroot
> # apt-get build-dep stunnel4
> # apt-get source stunnel4
> # wget
> http://haproxy.1wt.eu/download/patches/stunnel-4.29-xforwarded-for.diff
> # cd stunnel4-4.29/
> # patch -p1 -i ../stunnel-4.29-xforwarded-for.diff
> # debuild -us -uc
> # dpkg -i ../stunnel4_4.29-1_amd64.deb
>
> change my conf /etc/stunnel/stunnel.conf as :
> [...]
> [https]
> accept  = 192.168.134.222:443
> connect = 192.168.134.222:4430
> TIMEOUTclose = 0
> xforwardedfor = yes
>
> change my conf /etc/haproxy/haproxy.conf as :
> listen sslsite
> bind 192.168.134.222:4430
> balance roundrobin
> cookie SRV insert indirect nocache
> capture request header X-Forwarded-For len 256
> rspirep ^Location:\ http://(.*) Location:\ https://\1
> server vexft04  192.168.16.55:80 cookie ahD2Fiel check inter 5000 fall
> 3
> server vexft05  192.168.16.50:80 cookie ifaop7Ge check inter 5000 fall
> 3
> server vexft06  192.168.128.52:80 cookie aina1oRo check inter 5000
> fall 3
> server vexft07  192.168.128.53:80 cookie ohQuai5g check inter 5000
> fall 3
>
> But the X-Forwarded-For header is inconsistently set in logs as:
>
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.218] sslsite sslsite/vexft04 
> 0/0/0/250/250 200 3865 -
> - --NI 1/1/0/1/0 0/0 {10.147.28.20} "GET /admin/AdmInscriptionPro.shtml
> HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.468] sslsite sslsite/vexft04 
> 31/0/1/1/33 200 471 - -
> --VN 1/1/0/1/0 0/0 {} "GET /css/admin/master.css HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.502] sslsite sslsite/vexft04 
> 173/0/0/5/178 200 2018 -
> - --VN 1/1/0/1/0 0/0 {} "GET /css/lightwindow.css HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.680] sslsite sslsite/vexft04 
> 56/0/1/1/58 200 573 - -
> --VN 1/1/0/1/0 0/0 {} "GET /css/sIFR-screen.css HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.739] sslsite sslsite/vexft04 
> 64/0/1/1/66 200 722 - -
> --VN 1/1/0/1/0 0/0 {} "GET /css/niftyCorners.css HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43889[05/Aug/2011:11:23:54.805] sslsite sslsite/vexft04 
> 3/0/1/11/16 200 28961 - -
> --VN 1/1/0/1/0 0/0 {} "GET /script/aculous/prototype.js HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43922[05/Aug/2011:11:23:54.832] sslsite sslsite/vexft04 
> 0/0/0/1/1 200 2071 - -
> --VN 4/4/3/4/0 0/0 {10.147.28.20} "GET /script/espace-pro.js HTTP/1.1"
> Aug  5 11:23:54 haproxy[8423]: 
> 192.168.134.222:43920[05/Aug/2011:11:23:54.831] sslsite sslsi

Re: erratic X-Forwarded-For patch for stunnel

2011-08-05 Thread Guillaume Bourque
Hi,

are you using httpclose in haproxy in the frontend for the ssl portion of
haproxy ?   Willy has talked about other ways to solve this yesterday, but just
to do a test you could put option httpclose in this frontend.

"most of the time there is only 192.168.134.222 the IP of haproxy)" It's the
ip of stunnel too, I imagine ?

You can see more than one X-Forwarded-For in the log, it's cumulative...  But
you can tell haproxy not to include X-Forwarded-For when stunnel already put
the client ip, with an option like this:

option forwardfor except 10.222.0.0/27

for me this is the subnet of the ssl offloader 10.222.0.0/27.
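A sketch of how that can look on the haproxy side, assuming stunnel connects
from the 10.222.0.0/27 subnet (listener and backend names are illustrative):

```
frontend fe_https
    bind 10.222.0.1:4430
    # stunnel already set X-Forwarded-For, so do not add another
    # one for connections coming from the ssl offloader's subnet
    option forwardfor except 10.222.0.0/27
    default_backend bk_apache
```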

So the way I understand it:

Client -> stunnel (adds the client ip to X-For) -> haproxy (will not add X-For) -> apache1
                                                                                -> apache2

Client ------------------------------------------> haproxy (will add X-For) -----> apache1
                                                                                -> apache2

Then you need to decide if you will be using option httpclose or what was
discussed yesterday.

From Willy:

So if you need stunnel to provide the IP to haproxy, you have two
solutions :
  - either disable keep-alive using "option httpclose" on haproxy so that
it forces stunnel to reopen a new connection for each request and to
add the header to each of them ;

  - or make use of the "send-proxy" patch for stunnel, which is compatible
with the "accept-proxy" feature of haproxy. This is the preferred solution
because instead of mangling the beginning of the HTTP request, stunnel
then informs haproxy about the source address in an out-of-band fashion,
which makes it compatible with keep-alive.
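For the second option, a rough sketch of the two sides; directive names
differ between the stunnel patch and later stunnel releases, so treat them
as illustrative:

```
# stunnel.conf: send the client address in-band via the PROXY protocol
[https]
accept   = 192.168.134.222:443
connect  = 192.168.134.222:4430
protocol = proxy        # illustrative; the 4.29 patch used its own option name

# haproxy.cfg: accept the PROXY protocol on the matching bind
listen sslsite
    bind 192.168.134.222:4430 accept-proxy
```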


Bye.

On 2011-08-05 05:45, Damien Hardy wrote:

Hello,

I patched the debian stunnel4 package for squeeze

# aptitude install devscripts build-essential fakeroot
# apt-get build-dep stunnel4
# apt-get source stunnel4
# wget
http://haproxy.1wt.eu/download/patches/stunnel-4.29-xforwarded-for.diff
# cd stunnel4-4.29/
# patch -p1 -i ../stunnel-4.29-xforwarded-for.diff
# debuild -us -uc
# dpkg -i ../stunnel4_4.29-1_amd64.deb

change my conf /etc/stunnel/stunnel.conf as :
[...]
[https]
accept  = 192.168.134.222:443
connect = 192.168.134.222:4430
TIMEOUTclose = 0
xforwardedfor = yes

change my conf /etc/haproxy/haproxy.conf as :
listen sslsite
bind 192.168.134.222:4430
balance roundrobin
cookie SRV insert indirect nocache
capture request header X-Forwarded-For len 256
rspirep ^Location:\ http://(.*) Location:\ https://\1
server vexft04  192.168.16.55:80 cookie ahD2Fiel check inter 5000 fall 3
server vexft05  192.168.16.50:80 cookie ifaop7Ge check inter 5000 fall 3
server vexft06  192.168.128.52:80 cookie aina1oRo check inter 5000 fall
3
server vexft07  192.168.128.53:80 cookie ohQuai5g check inter 5000 fall
3

But the X-Forwarded-For header is inconsistently set in logs as:

Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.218] sslsite
sslsite/vexft04 0/0/0/250/250 200 3865 -
- --NI 1/1/0/1/0 0/0 {10.147.28.20} "GET /admin/AdmInscriptionPro.shtml
HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.468] sslsite
sslsite/vexft04 31/0/1/1/33 200 471 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/admin/master.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.502] sslsite
sslsite/vexft04 173/0/0/5/178 200 2018 -
- --VN 1/1/0/1/0 0/0 {} "GET /css/lightwindow.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.680] sslsite
sslsite/vexft04 56/0/1/1/58 200 573 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/sIFR-screen.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.739] sslsite
sslsite/vexft04 64/0/1/1/66 200 722 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/niftyCorners.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.805] sslsite
sslsite/vexft04 3/0/1/11/16 200 28961 - -
--VN 1/1/0/1/0 0/0 {} "GET /script/aculous/prototype.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43922[05/Aug/2011:11:23:54.832] sslsite
sslsite/vexft04 0/0/0/1/1 200 2071 - -
--VN 4/4/3/4/0 0/0 {10.147.28.20} "GET /script/espace-pro.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43920[05/Aug/2011:11:23:54.831] sslsite
sslsite/vexft04 0/0/0/2/2 200 1811 - -
--VN 4/4/2/3/0 0/0 {10.147.28.20} "GET /script/niftyCorners.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43924[05/Aug/2011:11:23:54.832] sslsite
sslsite/vexft04 0/0/0/2/2 200 739 - -
--VN 6/6/3/4/0 0/0 {10.147.28.20} "GET /script/niftyDeclare.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43928[05/Aug/2011:11:23:54.834] sslsite
sslsite/vexft04 0/0/0/1/1 200 604 - -
--VN 6/6/2/3/0 0/0 {10.147.28.20} "GET /script/admin/menu_admin.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.821] sslsite
sslsite/vexft04 7/0/0/7/14 200 13798 - -
--VN 6/6/2/3/0 0/0 {} "GET /script/acul

erratic X-Forwarded-For patch for stunnel

2011-08-05 Thread Damien Hardy
Hello,

I patched the debian stunnel4 package for squeeze

# aptitude install devscripts build-essential fakeroot
# apt-get build-dep stunnel4
# apt-get source stunnel4
# wget
http://haproxy.1wt.eu/download/patches/stunnel-4.29-xforwarded-for.diff
# cd stunnel4-4.29/
# patch -p1 -i ../stunnel-4.29-xforwarded-for.diff
# debuild -us -uc
# dpkg -i ../stunnel4_4.29-1_amd64.deb

change my conf /etc/stunnel/stunnel.conf as :
[...]
[https]
accept  = 192.168.134.222:443
connect = 192.168.134.222:4430
TIMEOUTclose = 0
xforwardedfor = yes

change my conf /etc/haproxy/haproxy.conf as :
listen sslsite
bind 192.168.134.222:4430
balance roundrobin
cookie SRV insert indirect nocache
capture request header X-Forwarded-For len 256
rspirep ^Location:\ http://(.*) Location:\ https://\1
server vexft04  192.168.16.55:80 cookie ahD2Fiel check inter 5000 fall 3
server vexft05  192.168.16.50:80 cookie ifaop7Ge check inter 5000 fall 3
server vexft06  192.168.128.52:80 cookie aina1oRo check inter 5000 fall
3
server vexft07  192.168.128.53:80 cookie ohQuai5g check inter 5000 fall
3

But the X-Forwarded-For header is inconsistently set in logs as:

Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.218] sslsite
sslsite/vexft04 0/0/0/250/250 200 3865 -
- --NI 1/1/0/1/0 0/0 {10.147.28.20} "GET /admin/AdmInscriptionPro.shtml
HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.468] sslsite
sslsite/vexft04 31/0/1/1/33 200 471 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/admin/master.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.502] sslsite
sslsite/vexft04 173/0/0/5/178 200 2018 -
- --VN 1/1/0/1/0 0/0 {} "GET /css/lightwindow.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.680] sslsite
sslsite/vexft04 56/0/1/1/58 200 573 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/sIFR-screen.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.739] sslsite
sslsite/vexft04 64/0/1/1/66 200 722 - -
--VN 1/1/0/1/0 0/0 {} "GET /css/niftyCorners.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.805] sslsite
sslsite/vexft04 3/0/1/11/16 200 28961 - -
--VN 1/1/0/1/0 0/0 {} "GET /script/aculous/prototype.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43922[05/Aug/2011:11:23:54.832] sslsite
sslsite/vexft04 0/0/0/1/1 200 2071 - -
--VN 4/4/3/4/0 0/0 {10.147.28.20} "GET /script/espace-pro.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43920[05/Aug/2011:11:23:54.831] sslsite
sslsite/vexft04 0/0/0/2/2 200 1811 - -
--VN 4/4/2/3/0 0/0 {10.147.28.20} "GET /script/niftyCorners.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43924[05/Aug/2011:11:23:54.832] sslsite
sslsite/vexft04 0/0/0/2/2 200 739 - -
--VN 6/6/3/4/0 0/0 {10.147.28.20} "GET /script/niftyDeclare.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43928[05/Aug/2011:11:23:54.834] sslsite
sslsite/vexft04 0/0/0/1/1 200 604 - -
--VN 6/6/2/3/0 0/0 {10.147.28.20} "GET /script/admin/menu_admin.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.821] sslsite
sslsite/vexft04 7/0/0/7/14 200 13798 - -
--VN 6/6/2/3/0 0/0 {} "GET /script/aculous/lightwindow.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43926[05/Aug/2011:11:23:54.833] sslsite
sslsite/vexft04 0/0/0/3/3 200 2640 - -
--VN 6/6/1/2/0 0/0 {10.147.28.20} "GET /script/espace-admin.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43922[05/Aug/2011:11:23:54.833] sslsite
sslsite/vexft04 2/0/0/1/3 200 945 - -
--VN 6/6/2/3/0 0/0 {} "GET /script/recherche/SearchLightWindow.js HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43924[05/Aug/2011:11:23:54.835] sslsite
sslsite/vexft04 2/0/1/1/4 200 810 - -
--VN 6/6/2/3/0 0/0 {} "GET /css/admin/typo.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43928[05/Aug/2011:11:23:54.835] sslsite
sslsite/vexft04 2/0/0/1/3 200 1138 - -
--VN 6/6/2/3/0 0/0 {} "GET /css/admin/lists.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43920[05/Aug/2011:11:23:54.833] sslsite
sslsite/vexft04 3/0/1/1/5 200 1617 - -
--VN 6/6/2/3/0 0/0 {} "GET /css/admin/layout.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43926[05/Aug/2011:11:23:54.837] sslsite
sslsite/vexft04 2/0/0/1/3 200 2914 - -
--VN 6/6/2/3/0 0/0 {} "GET /css/admin/navbar.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43922[05/Aug/2011:11:23:54.837] sslsite
sslsite/vexft04 2/0/0/1/3 200 1726 - -
--VN 6/6/1/2/0 0/0 {} "GET /css/admin/forms.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43924[05/Aug/2011:11:23:54.839] sslsite
sslsite/vexft04 2/0/0/1/3 200 669 - -
--VN 6/6/3/4/0 0/0 {} "GET /css/niftyDeclare.css HTTP/1.1"
Aug  5 11:23:54 haproxy[8423]:
192.168.134.222:43889[05/Aug/2011:11:23:54.836] sslsite
sslsite/vexft04 4/0/1/1/6 200 1740 - -
--VN 6/6/3/4/0 0/0 {} "GET /css/admin/ventre_general.

Re: make haproxy notice that backend server ip has changed

2011-08-05 Thread Piavlo

On 08/05/2011 06:51 AM, Julien Vehent wrote:


On Fri, 05 Aug 2011 01:08:22 +0300, Piavlo wrote:

Hi Jens,

I'm using names which resolve to internal EC2 addresses in haproxy
configs - the /etc/hosts of all instances are auto-updated when a new
instance is added/removed.
But the problem manifests when the instance is stopped and then
started - this makes the internal ip change.
I can also use DNS CNAMEs to the public instance ip with very low TTL -
which get auto-updated when the instance boots, by using route53 - but it's
still the same problem - the ip changes in DNS and not in /etc/hosts
(getnameinfo does not really care where the name is resolved from) -
in both cases haproxy will not know it has changed, since it probably
uses getnameinfo only once on startup/reload and never later rechecks
if the ip has changed.



Hi,

Haproxy resolves names into addresses at startup, so using names is 
just an ugly, and probably confusing, way to define an ip address.

Names are less confusing and less ugly for me.  I understand I can use a 
combination of an automation tool like puppet/chef with a zookeeper-like 
tool to rebuild the haproxy configuration and reload haproxy - in some 
situations, like add/removal of a backend server, that's unavoidable.
But why do a reload of haproxy in other situations (much more common in 
my use case, and lose statistics and possibly some connections) if there 
could be a config option that tells haproxy to re-resolve name to ip 
when a backend health check fails?


The only reliable way to work with IP addresses in EC2, without 
messing with the whole /etc/hosts and applications reload, is to use 
static LAN addresses, like in a real network. And you can do that just 
fine in a VPC environment.


DO NOT use elastic IPs for internal traffic. Elastic IPs are routed 
through the external network of EC2, so you will get charged $$$, your 
interconnections will be slower and you don't even know where the hell 
your packets are going.


I never said that I'm using elastic IPs. But I don't think it matters if 
it's an elastic/static ip or just a normal public ip which can/will 
change on stop/start of an ec2 instance.
There is a well-known trick in EC2 - if you do a dns lookup on a public ec2 
hostname from within ec2, you will get an internal ip instead of a 
public ip.
So you are not charged, because you are effectively working with internal 
ip's - if you have a CNAME to public A records, it ends up resolving 
to an internal ec2 ip from within ec2 and to the public ip from outside of ec2.


Thanks
Alex


Just the result of my personal experience... :)

Julien