Re: Re: How to select a server based on client ip ?

2012-03-15 Thread wsq003

Thanks, Carlo. It works.

But could it be simpler? Something like:

frontend http
  bind :80
  mode http
  default_backend pool

backend pool
  server s01 2.3.4.1:80
  server s02 2.3.4.2:80
  server s03 2.3.4.3:80
  use_server s01 if { src 217.192.7.0/24 }

We have many servers for different developers; the rules can be complex and change often.
It is not elegant to define hundreds of backends.
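If your HAProxy is recent enough, a rule very much like this does exist: if I recall correctly, 1.5 and later allow a `use-server` directive inside a backend (note the hyphen, unlike `use_backend`). A sketch reusing the addresses above:

```haproxy
frontend http
  bind :80
  mode http
  default_backend pool

backend pool
  # direct this /24 to s01; everything else is load-balanced as usual
  acl from_dev_net src 217.192.7.0/24
  use-server s01 if from_dev_net
  server s01 2.3.4.1:80
  server s02 2.3.4.2:80
  server s03 2.3.4.3:80
```

This keeps all the servers in one backend, so health checks and stats stay in one place.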


From: Carlo Flores
Date: 2012-03-15 14:45
To: wsq003
CC: haproxy
Subject: Re: How to select a server based on client ip ?
See the src entry under section 7.5.1 of the HAProxy docs. There are actually
many examples of this ACL throughout the doc. You'd use something
like this:


frontend http
  bind :80
  mode http
  acl always_s01 src 217.192.7.0/24
  use_backend s01 if always_s01
  default_backend pool 


backend s01
  server s01 2.3.4.1:80 


backend pool
  server s01 2.3.4.1:80
  server s02 2.3.4.2:80
  server s03 2.3.4.3:80




On Wed, Mar 14, 2012 at 11:09 PM, wsq003  wrote:

Hi,

If we have 5 servers within a back-end, how can we direct certain requests to a
certain server based on the client IP?

For example:

backend
   server s01
   server s02
   server s03
   server s04
   server s05

How can we make all requests coming from 217.192.7.* go to server s01?

Thanks.

How to select a server based on client ip ?

2012-03-14 Thread wsq003
Hi,

If we have 5 servers within a back-end, how can we direct certain requests to a
certain server based on the client IP?

For example:

backend
server s01
server s02
server s03
server s04
server s05

How can we make all requests coming from 217.192.7.* go to server s01?

Thanks.


Re: Grouping servers for failover within a backend

2012-02-22 Thread wsq003

You can try 'use_backend' with an ACL.

For example, you can configure two backends named be1 and be2, then:

acl a1 path_reg "some_regex1"
acl a2 path_reg "some_regex2"
use_backend be1 if a1
use_backend be2 if a2

If you carefully design regex1 and regex2, it will work fine.
Note that if be1 is down, half of the requests will fail (they can't switch to
be2).
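For the "retry on the same machine first" goal specifically, a hedged sketch (machine addresses, ports, and the X-Session-Key header are all invented here): put the two webservers of each physical machine in their own backend with `option redispatch`, so a failed server's requests are re-dispatched to its same-machine peer:

```haproxy
frontend fe
  bind :80
  mode http
  # assumed stickiness header and split rule
  acl to_A hdr_reg(X-Session-Key) ^[0-7]
  use_backend machineA if to_A
  default_backend machineB

# machine A runs w1+w2, machine B runs w3+w4
backend machineA
  option redispatch        # on failure, retry the other server on the same box
  server w1 10.0.0.1:8001 check
  server w2 10.0.0.1:8002 check

backend machineB
  option redispatch
  server w3 10.0.0.2:8001 check
  server w4 10.0.0.2:8002 check
```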


From: Sachin Shetty
Date: 2012-02-22 18:49
To: haproxy@formilux.org
Subject: Grouping servers for failover within a backend
Hi,
 
We have four web servers in a single backend. Physically these four servers are
on two different machines. A new session is made sticky by hashing on one of
the headers.

The regular flow is OK, but when one of the webservers is down for an in-flight
session, the request should be re-dispatched to the webserver on the same
machine, if available. I looked at various options in the config, but couldn't
figure out a way to do it. Has anybody achieved anything similar with some
config tweaks?
 
 
Thanks
Sachin

Does haproxy support cronolog?

2012-01-31 Thread wsq003
Hi

Here we want haproxy to write its logs to separate log files (e.g.
/home/admin/haproxy/var/logs/haproxy_20120131.log), and we want to rotate the
log files. cronolog seems to be a good candidate for this.

We don't want to change /etc/syslog.conf or /etc/syslog-ng.conf, because we
don't want to make this machine different from the others.
It would be better if haproxy could take care of this itself, so we want to
specify the log file name and log rotation method in haproxy.conf.

Any recommendation is welcome.
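One workaround that might satisfy this without touching the system syslog config (untested, and the listener command is just an assumption on my part): point haproxy's `log` directive at a private local UDP port and run your own small listener piped into cronolog:

```haproxy
# haproxy.conf: send logs to a private UDP port instead of the system syslog
global
    log 127.0.0.1:10514 local0

# then, outside haproxy, something like (hypothetical):
#   socat -u UDP4-RECV:10514 STDOUT | cronolog /home/admin/haproxy/var/logs/haproxy_%Y%m%d.log
```

The machine's /etc/syslog.conf stays untouched, and cronolog handles the daily rotation via its filename template.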

Thanks,

Re: Re: hashing + roundrobin algorithm

2011-11-29 Thread wsq003

My modification is based on version 1.4.16.

=== in struct server, add the following ===

	char vgroup_name[100];
	struct proxy *vgroup;	/* if not NULL, this server is a virtual group */

=== in function process_chk(), add at line 1198 ===

	if (s->vgroup) {
		if ((s->vgroup->lbprm.tot_weight > 0) && !(s->state & SRV_RUNNING)) {
			s->health = s->rise;
			set_server_check_status(s, HCHK_STATUS_L4OK, "vgroup ok");
			set_server_up(s);
		}
		else if (!(s->vgroup->lbprm.tot_weight > 0) && (s->state & SRV_RUNNING)) {
			s->health = s->rise;
			set_server_check_status(s, HCHK_STATUS_HANA, "vgroup has no available server");
			set_server_down(s);
		}

		if (s->state & SRV_RUNNING) {
			s->health = s->rise + s->fall - 1;
			set_server_check_status(s, HCHK_STATUS_L4OK, "vgroup ok");
		}

		while (tick_is_expired(t->expire, now_ms))
			t->expire = tick_add(t->expire, MS_TO_TICKS(s->inter));
		return t;
	}

=== in function assign_server(), add at line 622 ===

	if (s->srv->vgroup) {
		struct proxy *old = s->be;
		int ret;

		s->be = s->srv->vgroup;
		ret = assign_server(s);
		s->be = old;
		return ret;
	}

=== in function cfg_parse_listen(), add at line 3949 ===

	else if (!defsrv && !strcmp(args[cur_arg], "vgroup")) {
		if (!args[cur_arg + 1]) {
			Alert("parsing [%s:%d] : '%s' : missing virtual_group name.\n",
			      file, linenum, newsrv->id);
			err_code |= ERR_ALERT | ERR_FATAL;
			goto out;
		}
		if (newsrv->addr.sin_addr.s_addr) {
			/* for ease of identification */
			Alert("parsing [%s:%d] : '%s' : virtual_group requires the server address to be 0.0.0.0\n",
			      file, linenum, newsrv->id);
			err_code |= ERR_ALERT | ERR_FATAL;
			goto out;
		}
		newsrv->check_port = 1;
		strlcpy2(newsrv->vgroup_name, args[cur_arg + 1], sizeof(newsrv->vgroup_name));
		cur_arg += 2;
	}

=== in function check_config_validity(), add at line 5680 ===

	/*
	 * set vgroup if necessary
	 */
	newsrv = curproxy->srv;
	while (newsrv != NULL) {
		if (newsrv->vgroup_name[0] != '\0') {
			struct proxy *px = findproxy(newsrv->vgroup_name, PR_CAP_BE);
			if (px == NULL) {
				Alert("[%s][%s] : vgroup '%s' does not exist.\n",
				      curproxy->id, newsrv->id, newsrv->vgroup_name);
				err_code |= ERR_ALERT | ERR_FATAL;
				break;
			}
			newsrv->vgroup = px;
		}
		newsrv = newsrv->next;
	}

==

and some minor changes in function stats_dump_proxy() that are not important.

==

A sample config file looks like:

backend internallighttpd
    option httpchk /monitor/ok.htm
    server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3
    server wsqb 0.0.0.0 vgroup subproxy2 weight 32 check inter 4000 rise 3 fall 3
    balance uri
    hash-type consistent
    option redispatch
    retries 3

backend subproxy1
    option httpchk /monitor/ok.htm
    server wsq01 1.1.1.1:8001 weight 32 check inter 4000 rise 3 fall 3
    server wsq02 1.1.1.2:8001 weight 32 check inter 4000 rise 3 fall 3
    balance roundrobin
    option redispatch
    retries 3

backend subproxy2
    option httpchk /monitor/ok.htm
    server wsq03 1.1.1.1:8002 weight 32 check inter 4000 rise 3 fall 3
    server wsq04 1.1.1.2:8002 weight 32 check inter 4000 rise 3 fall 3
    balance roundrobin
    option redispatch
    retries 3

==

Sorry I can't provide a clean patch, because vgroup is just one of several
changes.
I did not consider the rewrite rules at that time. Maybe we could add a function
call before calling assign_server()?


From: Willy Tarreau
Date: 2011-11-30 01:47
To: wsq003
CC: Rerngvit Yanggratoke; haproxy; Baptiste
Subject: Re: Re: hashing + roundrobin algorithm
On Tue, Nov 29, 2011 at 02:56:49PM +0800, wsq003 wrote:
> 
> Backend proxies may span multiple layers, and every layer can have its own LB
> parameters.
> Logically this is a tree-like structure: every real server is a leaf, and every
> non-leaf node is a backend proxy that may have LB parameters.

I clearly understand what it looks like from the outside. It's still not very
clear how you *concretely* implemented it. Maybe you basically did what I've
been planning for a long time (the internal server) and then your code could
save us some time.

A feature I found important there was to be able to apply backend rewrite rules
ag

Re: Re: hashing + roundrobin algorithm

2011-11-28 Thread wsq003

Backend proxies may span multiple layers, and every layer can have its own LB
parameters.
Logically this is a tree-like structure: every real server is a leaf, and every
non-leaf node is a backend proxy that may have LB parameters.
When an HTTP request arrives, it goes through this tree-like structure to find
the proper real server.

It would be better if the official version could provide this feature.
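Until then, the closest approximation with a stock haproxy that I know of is to chain proxies through the loopback, paying one extra TCP hop per layer (the loopback ports below are invented; the real servers reuse the earlier sample config):

```haproxy
# layer 1: consistent-hash across two "virtual servers" that are
# really second-layer haproxy listeners on the loopback
backend layer1
  balance uri
  hash-type consistent
  server g1 127.0.0.1:9001
  server g2 127.0.0.1:9002

# layer 2: each group round-robins across its real servers
listen group1 127.0.0.1:9001
  balance roundrobin
  server wsq01 1.1.1.1:8001 check
  server wsq02 1.1.1.2:8001 check

listen group2 127.0.0.1:9002
  balance roundrobin
  server wsq03 1.1.1.1:8002 check
  server wsq04 1.1.1.2:8002 check
```

The extra loopback connection per request is exactly what the vgroup patch avoids by re-running assign_server() internally.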


From: Willy Tarreau
Date: 2011-11-29 14:24
To: wsq003
CC: Rerngvit Yanggratoke; haproxy; Baptiste
Subject: Re: Re: hashing + roundrobin algorithm
Hi,

On Tue, Nov 29, 2011 at 01:52:31PM +0800, wsq003 wrote:
> 
> We added a new keyword, 'vgroup', under the 'server' keyword.
> server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3
> means that a request assigned to this server will be treated as if backend
> 'subproxy1' had been selected. Then in backend 'subproxy1' you can configure
> any load-balancing strategy. This can be recursive.
> 
> In the source code:
> At the end of assign_server(), if we find that a server has the 'vgroup'
> property, we set the backend of cur_proxy and call assign_server() again.

Your trick sounds interesting but I'm not sure I completely understand
how it works.

There was a feature I wanted to implement some time ago, it would be sort
of an internal server which would directly map to a frontend (or maybe just
a backend) without passing via a TCP connection. It looks like your trick
does something similar but I just fail to understand how the LB params are
assigned to multiple backends for a given server.

Regards,
Willy

Re: Re: hashing + roundrobin algorithm

2011-11-28 Thread wsq003

We added a new keyword, 'vgroup', under the 'server' keyword.

server wsqa 0.0.0.0 vgroup subproxy1 weight 32 check inter 4000 rise 3 fall 3

means that a request assigned to this server will be treated as if backend
'subproxy1' had been selected. Then in backend 'subproxy1' you can configure
any load-balancing strategy. This can be recursive.

In the source code:
At the end of assign_server(), if we find that a server has the 'vgroup'
property, we set the backend of cur_proxy and call assign_server() again.


From: Rerngvit Yanggratoke
Date: 2011-11-26 08:33
To: wsq003
CC: Willy Tarreau; haproxy; Baptiste
Subject: Re: Re: hashing + roundrobin algorithm
Hello wsq003,
   That sounds very interesting. It would be great if you could share your 
patch. If that is not possible, providing guideline on how to implement that 
would be helpful as well. Thank you!


2011/11/23 wsq003 


I've made a private patch to haproxy (just a few lines of code, but not
elegant) which supports this feature.

My situation is just like what you imagine: consistent hashing to a group, then
round-robin within the group.

Our design is that several 'servers' share a physical machine, and the 'servers'
of one group are distributed across several physical machines.
So, if one physical machine goes down, nothing falls through the cache layer,
because every group still works. That gives us a chance to recover the
cluster as we want.


From: Willy Tarreau
Date: 2011-11-23 15:15
To: Rerngvit Yanggratoke
CC: haproxy; Baptiste
Subject: Re: hashing + roundrobin algorithm
Hi,

On Fri, Nov 18, 2011 at 05:48:54PM +0100, Rerngvit Yanggratoke wrote:
> Hello All,
> First of all, pardon me if I'm not communicating very well. English
> is not my native language. We are running a static file distribution
> cluster. The cluster consists of many web servers serving static files over
> HTTP.  We have very large number of files such that a single server simply
> can not keep all files (don't have enough disk space). In particular, a
> file can be served only from a subset of servers. Each file is uniquely
> identified by a file's URI. I would refer to this URI later as a key.
> I am investigating deploying HAProxy as a front end to this
> cluster. We want HAProxy to provide load balancing and automatic fail over.
> In other words, a request comes first to HAProxy and HAProxy should forward
> the request to appropriate backend server. More precisely, for a particular
> key, there should be at least two servers being forwarded to from HAProxy
> for the sake of load balancing. My question is what load
> balancing strategy should I use?
> I could use hashing(based on key) or consistent hashing. However,
> each file would end up being served by a single server on a particular
> moment. That means I wouldn't have load balancing and fail over for a
> particular key.

This question is much more a question of architecture than of configuration.
What is important is not what you can do with haproxy, but how you want your
service to run. I suspect that if you acquired hardware and bandwidth to build
your service, you have pretty clear ideas of how your files will be distributed
and/or replicated between your servers. You also know whether you'll serve
millions of files or just a few tens, which means in the first case that you
can safely have one server per URL, and in the latter that you would risk
overloading a server if everybody downloads the same file at a time. Maybe
you have installed caches to avoid overloading some servers. You have probably
planned what will happen when you add new servers, and what is supposed to
happen when a server temporarily fails.

All of these are very important questions, they determine whether your site
will work or fail.

Once you're able to respond to these questions, it becomes much more obvious
what the LB strategy can be, if you want to dedicate server farms to some
URLs, or load-balance each hash among a few servers because you have a
particular replication strategy. And once you know what you need, then we
can study how haproxy can respond to this need. Maybe it can't at all, maybe
it's easy to modify it to respond to your needs, maybe it does respond pretty
well.

My guess from what you describe is that it could make a lot of sense to
have one layer of haproxy in front of Varnish caches. The first layer of
haproxy chooses a cache based on a consistent hash of the URL, and each
varnish is then configured to address a small bunch of servers in round
robin. But this means that you need to assign servers to farms, and that
if you lose a varnish, all the servers behind it are lost too.

If your files are present on all servers, it might make sense to use
varnish as explained above but which would round-robin across all servers.
That way you make 

Re: Executing Script between Failover

2011-11-24 Thread wsq003

Another way would be:
Use crontab to start a script; this script can get the status of the servers
with `curl 'http://your.haproxy.com:8080/admin_status;csv'`.
Then you can send messages to anywhere you like.
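As a sketch of such a script (the host, port, and stats URI are from my setup; field 18 being the status column follows HAProxy's CSV stats format, as far as I remember — here a canned sample stands in for the actual curl fetch):

```shell
# Hypothetical cron-driven status check. In a real deployment the CSV
# would come from the stats page, e.g.:
#   csv=$(curl -s 'http://your.haproxy.com:8080/admin_status;csv')
# In HAProxy's CSV stats format, field 18 is the server status (UP/DOWN/...).
csv='pool,s01,0,0,1,2,,10,100,200,,0,,0,0,0,0,UP
pool,s02,0,0,1,2,,10,100,200,,0,,0,0,0,0,DOWN'

# collect one alert line per server that is not UP, skipping the header row
alerts=$(printf '%s\n' "$csv" | awk -F, '$1 != "# pxname" && $18 != "" && $18 != "UP" {
	printf "ALERT: %s/%s is %s\n", $1, $2, $18
}')
echo "$alerts"   # here you would mail/notify instead of echoing
```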


From: Prasad Wani
Date: 2011-11-24 19:12
To: haproxy
Subject: Executing Script between Failover
Hi,


While configuring failover between two machines, does HAProxy have any feature
to execute a script just after the failover, before the 2nd server starts
serving requests?


What I need here: whenever a failover happens, I want a monitoring URL to be
called, every time. The URL sends an alert to indicate that the failover
happened.



-- 
Prasad S. Wani

Re: Re: hashing + roundrobin algorithm

2011-11-23 Thread wsq003

I've made a private patch to haproxy (just a few lines of code, but not
elegant) which supports this feature.

My situation is just like what you imagine: consistent hashing to a group, then
round-robin within the group.

Our design is that several 'servers' share a physical machine, and the 'servers'
of one group are distributed across several physical machines.
So, if one physical machine goes down, nothing falls through the cache layer,
because every group still works. That gives us a chance to recover the
cluster as we want.


From: Willy Tarreau
Date: 2011-11-23 15:15
To: Rerngvit Yanggratoke
CC: haproxy; Baptiste
Subject: Re: hashing + roundrobin algorithm
Hi,

On Fri, Nov 18, 2011 at 05:48:54PM +0100, Rerngvit Yanggratoke wrote:
> Hello All,
> First of all, pardon me if I'm not communicating very well. English
> is not my native language. We are running a static file distribution
> cluster. The cluster consists of many web servers serving static files over
> HTTP.  We have very large number of files such that a single server simply
> can not keep all files (don't have enough disk space). In particular, a
> file can be served only from a subset of servers. Each file is uniquely
> identified by a file's URI. I would refer to this URI later as a key.
> I am investigating deploying HAProxy as a front end to this
> cluster. We want HAProxy to provide load balancing and automatic fail over.
> In other words, a request comes first to HAProxy and HAProxy should forward
> the request to appropriate backend server. More precisely, for a particular
> key, there should be at least two servers being forwarded to from HAProxy
> for the sake of load balancing. My question is what load
> balancing strategy should I use?
> I could use hashing(based on key) or consistent hashing. However,
> each file would end up being served by a single server on a particular
> moment. That means I wouldn't have load balancing and fail over for a
> particular key.

This question is much more a question of architecture than of configuration.
What is important is not what you can do with haproxy, but how you want your
service to run. I suspect that if you acquired hardware and bandwidth to build
your service, you have pretty clear ideas of how your files will be distributed
and/or replicated between your servers. You also know whether you'll serve
millions of files or just a few tens, which means in the first case that you
can safely have one server per URL, and in the latter that you would risk
overloading a server if everybody downloads the same file at a time. Maybe
you have installed caches to avoid overloading some servers. You have probably
planned what will happen when you add new servers, and what is supposed to
happen when a server temporarily fails.

All of these are very important questions, they determine whether your site
will work or fail.

Once you're able to respond to these questions, it becomes much more obvious
what the LB strategy can be, if you want to dedicate server farms to some
URLs, or load-balance each hash among a few servers because you have a
particular replication strategy. And once you know what you need, then we
can study how haproxy can respond to this need. Maybe it can't at all, maybe
it's easy to modify it to respond to your needs, maybe it does respond pretty
well.

My guess from what you describe is that it could make a lot of sense to
have one layer of haproxy in front of Varnish caches. The first layer of
haproxy chooses a cache based on a consistent hash of the URL, and each
varnish is then configured to address a small bunch of servers in round
robin. But this means that you need to assign servers to farms, and that
if you lose a varnish, all the servers behind it are lost too.

If your files are present on all servers, it might make sense to use
varnish as explained above but which would round-robin across all servers.
That way you make the cache layer and the server layer independent of each
other. But this can imply complex replication strategies.
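A rough sketch of the first of those two layouts, with all names and addresses invented:

```haproxy
# layer 1: haproxy consistent-hashes each URL onto one varnish cache
frontend fe
  bind :80
  mode http
  default_backend caches

backend caches
  balance uri
  hash-type consistent
  server varnish1 10.0.1.1:6081 check
  server varnish2 10.0.1.2:6081 check
  server varnish3 10.0.1.3:6081 check

# each varnish is then configured (in its own VCL) to round-robin
# over its small set of file servers
```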

As you see, there is no single response, you really need to define how you
want your architecture to work and to scale first.

Regards,
Willy

how http-server-close work?

2011-11-21 Thread wsq003
Hi,

In my setup, I set the http-server-close option for client-side keep-alive
(you know this saves the time of establishing connections).
My question is: will haproxy re-assign the backend server for every HTTP request
in such a connection? I also configure 'balance uri' and 'hash-type consistent'.

E.g. I hope /a/b.jpg and /c/d.jpg will be assigned to different backend servers
based on consistent hashing, even when they arrive on the same client-side
connection.
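For reference, the config in question boils down to something like this (addresses assumed):

```haproxy
listen imgfarm
  bind :80
  mode http
  option http-server-close   # client side kept alive, server side closed per request
  balance uri
  hash-type consistent
  server img1 10.0.0.1:80 check
  server img2 10.0.0.2:80 check
```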

Thanks in advance.

Re: Re: trying to use ebtree to store key-value paires

2011-11-01 Thread wsq003

Thank you Willy. It works.

The following is an example of how to use ebtree to store key-value pairs:

/* The key lives in the embedded eb64_node; the value is the rest of
 * the containing struct, recovered via eb64_entry(). */
struct relation {
	struct eb64_node cbm;	/* tree node; cbm.key holds the key */
	long long key;
	struct proxy *be;	/* the "value" */
};

void test()
{
	struct eb_root root;
	struct relation *rel;
	struct eb64_node *nd;

	memset(&root, 0, sizeof(root));

	rel = calloc(1, sizeof(struct relation));
	rel->cbm.key = 123;
	rel->key = 123;
	rel->be = NULL;
	eb64i_insert(&root, &rel->cbm);

	nd = eb64i_lookup(&root, 123);
	if (nd) {
		/* recover the containing struct from the node pointer */
		rel = eb64_entry(nd, struct relation, cbm);
	}
}


From: Willy Tarreau
Date: 2011-11-02 06:34
To: wsq003
CC: haproxy
Subject: Re: trying to use ebtree to store key-value paires
Hi,

On Tue, Nov 01, 2011 at 03:25:05PM +0800, wsq003 wrote:
> hi,
> 
> I found that ebtree works in a good manner, so I want to use it in other places.
> Does ebtree support key-value pairs? I would like to use it to replace
> std::map under some specific conditions.

it's mainly used for that. The principle is that you store a key in a
node and you store this node into a larger struct which contains all
your values. Then when you lookup the key, you find the node so you
know the struct holding the key. That's how it's used in haproxy. Look
at task.c for instance, the timer queue is scanned to wake up all tasks
with an expired timer. The key here is the timer and the "value" is the
task.

Willy

How about server side keep-alive in v1.5?

2011-10-08 Thread wsq003
Hi Willy,

On the main page I saw the following: "1.5 will bring keep-alive to the server, but it
will probably make sense only with static servers."

But in the change log and source code I did not find this feature (server-side
keep-alive).

Am I missing something, or is server-side keep-alive still in progress?

Thanks,