Re: pf load balancing and failover

2006-10-29 Thread Sylwester S. Biernacki
On Sunday, October 29, 2006, at 15:43:09, Berk D. Demir wrote:

>> We are rdr'ing all traffic across the 3 servers in the farm: 10.0.0.13, .14, .15,
>> so we are using -k 0.0.0.0/0 :-)

> If you're not using sticky addresses, you don't need the patch.
> If you're using them, you should use the patch and kill the lingering 
> src-track entries with pfctl option '-K' (capital K)
huh - you're right... our application running in the www farm is a clever one
and doesn't need the sticky-address option in rdr rules :)

-- 
Sylwester S. Biernacki <[EMAIL PROTECTED]>
X-NET, http://www.xnet.com.pl/



Re: pf load balancing and failover

2006-10-29 Thread Berk D. Demir

Sylwester S. Biernacki wrote:

On Friday, October 27, 2006, at 12:23:24, Pete Vickers wrote:


Hi Berk,


I'm really interested in this. I have a load of legacy TCP session-based
load balancing which I'd love to migrate to an OpenBSD/pf based
solution. Do you have a patch which applies cleanly to 4.0?


AFAIR this patch is applied in the -current tree; we have been using it for a
few weeks now and it works pretty well.

We are rdr'ing all traffic across the 3 servers in the farm: 10.0.0.13, .14, .15,
so we are using -k 0.0.0.0/0 :-)


If you're not using sticky addresses, you don't need the patch.
If you're using them, you should use the patch and kill the lingering 
src-track entries with pfctl option '-K' (capital K)


i.e.:
removeweb() (
# Remove from backend pool
pfctl -t "$1" -T delete "$2"
# Kill states destined to it
pfctl -k 0.0.0.0/0 -k "$2"
# Kill sticky src-track entries destined to it
pfctl -K 0.0.0.0/0 -K "$2"
)



Re: pf load balancing and failover

2006-10-29 Thread Sylwester S. Biernacki
On Friday, October 27, 2006, at 12:23:24, Pete Vickers wrote:

> Hi Berk,

> I'm really interested in this. I have a load of legacy TCP session-based
> load balancing which I'd love to migrate to an OpenBSD/pf based
> solution. Do you have a patch which applies cleanly to 4.0?

AFAIR this patch is applied in the -current tree; we have been using it for a
few weeks now and it works pretty well.

We are rdr'ing all traffic across the 3 servers in the farm: 10.0.0.13, .14, .15,
so we are using -k 0.0.0.0/0 :-)


#!/bin/sh

# backend web servers in the wwwfarm pool
webserver1="10.0.0.13"
webserver2="10.0.0.14"
webserver3="10.0.0.15"

removeweb() (
# removeweb table ip
  pfctl -t "$1" -T delete "$2"
  pfctl -k 0.0.0.0/0 -k "$2"
)

addweb() (
# addweb table ip
  pfctl -t "$1" -T add "$2"
)

while true; do
  # each health-check page is expected to return the string "OK"
  webstatus1=$(curl --connect-timeout 10 "$webserver1" 2>/dev/null)
  webstatus2=$(curl --connect-timeout 10 "$webserver2" 2>/dev/null)
  webstatus3=$(curl --connect-timeout 10 "$webserver3" 2>/dev/null)

  if [ X"$webstatus1" != X"OK" ]; then
    removeweb wwwfarm "$webserver1"
  else
    addweb wwwfarm "$webserver1"
  fi

  if [ X"$webstatus2" != X"OK" ]; then
    removeweb wwwfarm "$webserver2"
  else
    addweb wwwfarm "$webserver2"
  fi

  if [ X"$webstatus3" != X"OK" ]; then
    removeweb wwwfarm "$webserver3"
  else
    addweb wwwfarm "$webserver3"
  fi

  sleep 5
done





-- 
Sylwester S. Biernacki <[EMAIL PROTECTED]>
X-NET, http://www.xnet.com.pl/



Re: pf load balancing and failover

2006-10-27 Thread Berk D. Demir

Pete Vickers wrote:

Hi Berk,

I'm really interested in this. I have a load of legacy TCP session-based 
load balancing which I'd love to migrate to an OpenBSD/pf based solution. 
Do you have a patch which applies cleanly to 4.0?


/Pete



Anyone interested in the patch, please see my recent post to tech@ with the 
subject "kill src nodes for pf(4) and pfctl(8)".


I'm impressed with the number of private mails requesting the patch for 
4.0 or even for unsupported 3.7. I'm sorry for not replying in private.


Success or error reports go to tech@ or directly to me, please.



Re: pf load balancing and failover

2006-10-27 Thread Pete Vickers

Hi Berk,

I'm really interested in this. I have a load of legacy TCP session-based
load balancing which I'd love to migrate to an OpenBSD/pf based
solution. Do you have a patch which applies cleanly to 4.0?


/Pete


On 26. okt. 2006, at 22.16, Berk D. Demir wrote:


Pete Vickers wrote:

 1) When using sticky-address in the rdr rules client-server
associations are added to the internal Sources table.
It is impossible to remove entries for a single backend from this
table. If a backend fails and is removed from the rdr destination
table this table will have to be flushed, making all clients end up on
new backends, which is unacceptable in many configurations.
If this table is not cleared then the rdr destination table is not
inspected for client IPs found in the Sources table. These clients
will still be sent to the failed and removed backend.
Preferably entries could be removed from this table based on
source-IP and backend-IP:backend-port, and maybe even the virtual
service IP:port or a pf rule number.

 2) TCP sessions to a failed backend will continue to exist after the
backend is removed from the rdr destination table. As of today these
sessions can be removed with pfctl by specifying the source and
destination IP addresses. Since different services can run on
different port numbers on the same machines it should be possible to
specify a destination port number as well.
I guess that if a backend dies then the client is notified about this
just as if it had been speaking directly to the backend, so it might
not be necessary to clean out these sessions at all, and maybe even
the tcpdrop tool will do the trick?

Anyway, the main issue is with removing single sessions from the
internal Sources table (as it is called in pfctl(8)).


I've submitted a patch, adding a new ioctl to pf and an implementation
to clear src-track entries just like states (-k 1.1.1.1 -k 2.3.5.0/23).


A patched build (somewhere between 4.0 and -current) is running in many
DCs in my country right now.


pfctl.c changed after my submission. I have to fix the patches and
post them here in case it helps.


It needs to get OKs from developers to get into the tree. The last
contact with a developer about this patch was with dhartmei on Jul 25.


(I'll post it tomorrow)




Re: pf load balancing and failover

2006-10-26 Thread Pete Vickers

Hi Per-Olav,

If you are dealing with HTTP-based services, rather than generic TCP,
then you could take a look at 'pound'. I did a port of it a while
back, and use it in a pretty large-scale environment here; it supports
sticky backends etc. Works well for me, YMMV.


http://marc.theaimsgroup.com/?l=openbsd-ports&m=115513682623098

/Pete


On 26. okt. 2006, at 23.26, Per-Olov Sjöholm wrote:


On Thursday 26 October 2006 22:28, Kevin Reay wrote:

Hey,

On 10/26/06, Pete Vickers <[EMAIL PROTECTED]> wrote:

If I recall correctly,


You don't. :o)


slbd adds new rules to pf for each incoming
tcp session. Since I couldn't get it to work (old version) I do not
know what the session and Sources tables will look like, but I
suspect there will be no problems with them in slbd. Client-server
association is maintained by slbd and implemented with separate rules
for each tcp session.


slbd doesn't maintain separate rules for each tcp session. Client-server
association is NOT maintained by slbd.


This seems a bit ineffective and rather pointless since pf has the
load balancing functionality built in.


Which slbd relies on. Slbd just inserts the load balancing rules into
pf based on its own config. Then it does the job of health-checking
the servers listed in its config file, and removing them from the
server list if they go down.

The problems with using pf and a health checking script are related to
removal of failed backends. There are two separate issues:

  1) When using sticky-address in the rdr rules client-server
 associations are added to the internal Sources table.
 It is impossible to remove entries for a single backend from this
 table. If a backend fails and is removed from the rdr destination
 table this table will have to be flushed, making all clients end up on
 new backends, which is unacceptable in many configurations.
 If this table is not cleared then the rdr destination table is not
 inspected for client IPs found in the Sources table. These clients
 will still be sent to the failed and removed backend.
 Preferably entries could be removed from this table based on
 source-IP and backend-IP:backend-port, and maybe even the virtual
 service IP:port or a pf rule number.


Which is what slbd avoids. slbd doesn't use sticky-address for this reason.

slbd seems mostly geared for web servers where the web application
is written well enough to not need each request to go back to the same
server.

Kevin


Hi Kevin

I can come up with 100 reasons for using the same web target server over a
whole session and very few for not doing it. I can't see how we can use
slbd for the ordering system as intended if requests go to just any server
in the pool.

Or did I miss anything?

Regards
/Per-Olov




Re: pf load balancing and failover

2006-10-26 Thread Per-Olov Sjöholm
On Thursday 26 October 2006 22:28, Kevin Reay wrote:
> Hey,
>
> On 10/26/06, Pete Vickers <[EMAIL PROTECTED]> wrote:
> > If I recall correctly,
>
> You don't. :o)
>
> > slbd adds new rules to pf for each incoming
> > tcp session. Since I couldn't get it to work (old version) I do not
> > know what the session and Sources tables will look like, but I
> > suspect there will be no problems with them in slbd. Client-server
> > association is maintained by slbd and implemented with separate rules
> > for each tcp session.
>
> slbd doesn't maintain separate rules for each tcp session. Client-server
> association is NOT maintained by slbd.
>
> > This seems a bit ineffective and rather pointless since pf has the
> > load balancing functionality built in.
>
> Which slbd relies on. Slbd just inserts the load balancing rules into
> pf based on its own config. Then it does the job of health-checking
> the servers listed in its config file, and removing them from the
> server list if they go down.
>
> > The problems with using pf and a health checking script is related to
> > removal of failed backends. There are two separate issues:
> >
> >   1) When using sticky-address in the rdr rules client-server
> >  associations are added to the internal Sources table.
> >  It is impossible to remove entries for a single backend from this
> >  table. If a backend fails and is removed from the rdr destination
> >  table this table will have to be flushed, making all clients end up on
> >  new backends, which is unacceptable in many configurations.
> >  If this table is not cleared then the rdr destination table is not
> >  inspected for client IP's found in the Sources table. These clients
> >  will still be sent to the failed and removed backend.
> >  Preferably entries could be removed from this table based on
> >  source-IP and backend-IP:backend-port, and maybe even the virtual
> >  service IP:port or a pf rule number.
>
> Which is what slbd avoids. slbd doesn't use sticky-address for this reason.
> slbd seems mostly geared for web servers where the web application
> is written well enough to not need each request to go back to the same
> server.
>
> Kevin

Hi Kevin

I can come up with 100 reasons for using the same web target server over a 
whole session and very few for not doing it. I can't see how we can use slbd 
for the ordering system as intended if requests go to just any server in the 
pool.

Or did I miss anything?

Regards
/Per-Olov



pf load balancing and failover

2006-10-26 Thread Kevin Reay

Hey,

On 10/26/06, Pete Vickers <[EMAIL PROTECTED]> wrote:

If I recall correctly,


You don't. :o)


slbd adds new rules to pf for each incoming
tcp session. Since I couldn't get it to work (old version) I do not
know what the session and Sources tables will look like, but I
suspect there will be no problems with them in slbd. Client-server
association is maintained by slbd and implemented with separate rules
for each tcp session.


slbd doesn't maintain separate rules for each tcp session. Client-server
association is NOT maintained by slbd.


This seems a bit ineffective and rather pointless since pf has the
load balancing functionality built in.


Which slbd relies on. Slbd just inserts the load balancing rules into
pf based on its own config. Then it does the job of health-checking
the servers listed in its config file, and removing them from the
server list if they go down.


The problems with using pf and a health checking script are related to
removal of failed backends. There are two separate issues:

  1) When using sticky-address in the rdr rules client-server
 associations are added to the internal Sources table.
 It is impossible to remove entries for a single backend from this
 table. If a backend fails and is removed from the rdr destination
 table this table will have to be flushed, making all clients end up on
 new backends, which is unacceptable in many configurations.
 If this table is not cleared then the rdr destination table is not
 inspected for client IP's found in the Sources table. These clients
 will still be sent to the failed and removed backend.
 Preferably entries could be removed from this table based on
 source-IP and backend-IP:backend-port, and maybe even the virtual
 service IP:port or a pf rule number.


Which is what slbd avoids. slbd doesn't use sticky-address for this reason.
slbd seems mostly geared for web servers where the web application
is written well enough to not need each request to go back to the same
server.

Kevin



Re: pf load balancing and failover

2006-10-26 Thread Berk D. Demir

Pete Vickers wrote:

 1) When using sticky-address in the rdr rules client-server
associations are added to the internal Sources table.
It is impossible to remove entries for a single backend from this
table. If a backend fails and is removed from the rdr destination
table this table will have to be flushed, making all clients end up on
new backends, which is unacceptable in many configurations.
If this table is not cleared then the rdr destination table is not
inspected for client IP's found in the Sources table. These clients
will still be sent to the failed and removed backend.
Preferably entries could be removed from this table based on
source-IP and backend-IP:backend-port, and maybe even the virtual
service IP:port or a pf rule number.

 2) TCP sessions to a failed backend will continue to exist after the
backend is removed from the rdr destination table. As of today these
sessions can be removed with pfctl by specifying the source and
destination IP addresses. Since different services can run on
different port numbers on the same machines it should be possible to
specify a destination port number as well.
I guess that if a backend dies then the client is notified about this
just as if it had been speaking directly to the backend, so it might
not be necessary to clean out these sessions at all, and maybe even
the tcpdrop tool will do the trick?

Anyway, the main issue is with removing single sessions from the internal 
Sources table (as it is called in pfctl(8)).


I've submitted a patch, adding a new ioctl to pf and an implementation 
to clear src-track entries just like states (-k 1.1.1.1 -k 2.3.5.0/23).


A patched build (somewhere between 4.0 and -current) is running in many DCs 
in my country right now.


pfctl.c changed after my submission. I have to fix the patches and post 
them here in case it helps.


It needs to get OKs from developers to get into the tree. The last contact 
with a developer about this patch was with dhartmei on Jul 25.


(I'll post it tomorrow)



Re: pf load balancing and failover

2006-10-26 Thread Pete Vickers

Hi,


If I recall correctly, slbd adds new rules to pf for each incoming  
tcp session. Since I couldn't get it to work (old version) I do not  
know what the session and Sources tables will look like, but I  
suspect there will be no problems with them in slbd. Client-server  
association is maintained by slbd and implemented with separate rules  
for each tcp session.


This seems a bit ineffective and rather pointless since pf has the  
load balancing functionality built in.


The problems with using pf and a health checking script are related to
removal of failed backends. There are two separate issues:


 1) When using sticky-address in the rdr rules client-server
associations are added to the internal Sources table.
It is impossible to remove entries for a single backend from this
table. If a backend fails and is removed from the rdr destination
table this table will have to be flushed, making all clients end up on
new backends, which is unacceptable in many configurations.
If this table is not cleared then the rdr destination table is not
inspected for client IPs found in the Sources table. These clients
will still be sent to the failed and removed backend.
Preferably entries could be removed from this table based on
source-IP and backend-IP:backend-port, and maybe even the virtual
service IP:port or a pf rule number.

 2) TCP sessions to a failed backend will continue to exist after the
backend is removed from the rdr destination table. As of today these
sessions can be removed with pfctl by specifying the source and
destination IP addresses. Since different services can run on
different port numbers on the same machines it should be possible to
specify a destination port number as well.
I guess that if a backend dies then the client is notified about this
just as if it had been speaking directly to the backend, so it might
not be necessary to clean out these sessions at all, and maybe even
the tcpdrop tool will do the trick?

Anyway, the main issue is with removing single sessions from the internal
Sources table (as it is called in pfctl(8)).
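For readers following along, a minimal pf.conf sketch of the kind of setup being discussed might look like the following. The table name, interface macro, and VIP are illustrative assumptions, not taken from the original posts (syntax is the pre-4.7 rdr form current at the time).

```
# wwwfarm holds the currently healthy backends; a health-check script
# can add or remove entries at runtime:
#   pfctl -t wwwfarm -T add 10.0.0.16
#   pfctl -t wwwfarm -T delete 10.0.0.14
table <wwwfarm> persist { 10.0.0.13, 10.0.0.14, 10.0.0.15 }

# Round-robin incoming web traffic across the pool. sticky-address
# pins each client to one backend via the src-track (Sources) table,
# which is exactly the table that is hard to clean per-backend.
rdr on $ext_if proto tcp from any to $web_vip port 80 \
        -> <wwwfarm> round-robin sticky-address
```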



/Pete




On 22. okt. 2006, at 21.13, Kevin Reay wrote:


On 10/22/06, Per-Olov Sjöholm <[EMAIL PROTECTED]> wrote:

Hi again

I am looking at the CVS. I can't see that it's possible, out of the box, to
remove addresses from a round robin scheme in PF against a faulty web
server. Am I missing something?

But maybe I misunderstood Kevin Reay, who in this thread said: "and it would
automatically remove the address from a pf pool (and optionally run a
command) when a host failed."

Maybe I have to do some scripting after all...


It can be a little confusing at first, but it makes a lot of sense
once you understand it. The way I remember it, a person creates a
config file for slbd that defines the various pools and their polling
methods, and slbd creates the load balancing pools in pf at start-up
automatically (in an anchored ruleset). Then it removes entries from
those pools when a server goes down. So... no scripting required.

Of course, Bill Marquette will probably have more knowledge/details
about this than me...

Kevin




Re: pf load balancing and failover

2006-10-22 Thread Per-Olov Sjöholm
On Sunday 22 October 2006 21:13, Kevin Reay wrote:
> On 10/22/06, Per-Olov Sjöholm <[EMAIL PROTECTED]> wrote:
> > Hi again
> >
> > I am looking at the CVS. I can't see that it's possible, out of the box,
> > to remove addresses from a round robin scheme in PF against a faulty web
> > server. Am I missing something?
> >
> > But maybe I misunderstood Kevin Reay, who in this thread said: "and it
> > would automatically remove the address from a pf pool (and optionally
> > run a command) when a host failed."
> >
> > Maybe I have to do some scripting after all...
>
> It can be a little confusing at first, but it makes a lot of sense
> once you understand it. The way I remember it, a person creates a
> config file for slbd that defines the various pools and their polling
> methods, and slbd creates the load balancing pools in pf at start-up
> automatically (in an anchored ruleset). Then it removes entries from
> those pools when a server goes down. So... no scripting required.
>
> Of course, Bill Marquette will probably have more knowledge/details
> about this than me...
>
> Kevin

REALLY nice ;-)

Just have to wait for the download site to be ok then...

Thanks
/Per-Olov



Re: pf load balancing and failover

2006-10-22 Thread Per-Olov Sjöholm
On Sunday 22 October 2006 17:29, Bill Marquette wrote:
> On 10/22/06, Per-Olov Sjöholm <[EMAIL PROTECTED]> wrote:
> > Hi
> >
> > I have followed this thread. Can anyone point out a working download
> > link? Sourceforge does not have any working mirrors for this
> > slbd-1.3.tar.gz file.. Probably a misconfiguration somewhere.
>
> Hmm, didn't notice that they didn't mirror it properly when I posted
> it last night.  You can try pulling it down from CVS @
> http://sourceforge.net/cvs/?group_id=96331
> I'll see what I can do to whip sourceforge into shape and get the
> mirroring fixed.  Thanks
>
> --Bill

Hi again

I am looking at the CVS. I can't see that it's possible, out of the box, to 
remove addresses from a round robin scheme in PF against a faulty web server. 
Am I missing something?

But maybe I misunderstood Kevin Reay, who in this thread said: "and it would 
automatically remove the address from a pf pool (and optionally run a 
command) when a host failed."

Maybe I have to do some scripting after all...

Regards
/Per-Olov



Re: pf load balancing and failover

2006-10-22 Thread Kevin Reay

On 10/22/06, Per-Olov Sjöholm <[EMAIL PROTECTED]> wrote:

Hi again

I am looking at the CVS. I can't see that it's possible, out of the box, to
remove addresses from a round robin scheme in PF against a faulty web server.
Am I missing something?

But maybe I misunderstood Kevin Reay, who in this thread said: "and it would
automatically remove the address from a pf pool (and optionally run a
command) when a host failed."

Maybe I have to do some scripting after all...


It can be a little confusing at first, but it makes a lot of sense
once you understand it. The way I remember it, a person creates a
config file for slbd that defines the various pools and their polling
methods, and slbd creates the load balancing pools in pf at start-up
automatically (in an anchored ruleset). Then it removes entries from
those pools when a server goes down. So... no scripting required.

Of course, Bill Marquette will probably have more knowledge/details
about this than me...

Kevin



Re: pf load balancing and failover

2006-10-22 Thread Bill Marquette

On 10/22/06, Per-Olov Sjöholm <[EMAIL PROTECTED]> wrote:

Hi

I have followed this thread. Can anyone point out a working download link?
Sourceforge does not have any working mirrors for this slbd-1.3.tar.gz file..
Probably a misconfiguration somewhere.


Hmm, didn't notice that they didn't mirror it properly when I posted
it last night.  You can try pulling it down from CVS @
http://sourceforge.net/cvs/?group_id=96331
I'll see what I can do to whip sourceforge into shape and get the
mirroring fixed.  Thanks

--Bill



Re: pf load balancing and failover

2006-10-22 Thread Per-Olov Sjöholm
On Sunday 22 October 2006 01:44, Kevin Reay wrote:
> > Point of correction, slbd didn't have the ability to ping IP addresses.
>
> Good call.
>
> > You might check the code in CVS, it should compile and work on 3.9.
>
> You're right, I didn't notice it was being maintained. Thanks for the
> pointer, and thanks so much for keeping it maintained (I just noticed
> you were the one who updated it in CVS).
>
> Back to the original question; it looks like slbd would be a good and
> elegant way to achieve what you're looking to do. Just grab it from the
> sourceforge CVS repository.
>
> Kevin

Hi

I have followed this thread. Can anyone point out a working download link? 
Sourceforge does not have any working mirrors for this slbd-1.3.tar.gz file.. 
Probably a misconfiguration somewhere.

Thanks
Per-Olov



Re: pf load balancing and failover

2006-10-21 Thread Kevin Reay

Point of correction, slbd didn't have the ability to ping IP addresses.


Good call.



You might check the code in CVS, it should compile and work on 3.9.


You're right, I didn't notice it was being maintained. Thanks for the
pointer, and thanks so much for keeping it maintained (I just noticed
you were the one who updated it in CVS).

Back to the original question; it looks like slbd would be a good and
elegant way to achieve what you're looking to do. Just grab it from the
sourceforge CVS repository.

Kevin



Re: pf load balancing and failover

2006-10-21 Thread Bill Marquette

On 10/21/06, Kevin Reay <[EMAIL PROTECTED]> wrote:

> there should be a userland process doing these checks and removing the
> offending address from the pool on failure. unfortunately, to my
> knowledge, still nobody has written something which does it.
>

A while ago I used this with great success:
http://slbd.sourceforge.net/

It's open source (bsd!) and written for OpenBSD and pf. Unfortunately it
seems to have become outdated (won't compile on recent versions
of OpenBSD) because of the changed pf interface. (updating it
probably wouldn't be too much work)

It had the ability to query webservers (http), ping ip addresses, and connect


Point of correction, slbd didn't have the ability to ping IP addresses.


to specific tcp ports for heartbeat; and it would automatically remove
the address from a pf pool (and optionally run a command) when a
host failed.

It really would be cool if someone updated it (maybe me if I get some
time in the future)


You might check the code in CVS, it should compile and work on 3.9.

--Bill



Re: pf load balancing and failover

2006-10-21 Thread Kevin Reay

there should be a userland process doing these checks and removing the
offending address from the pool on failure. unfortunately, to my
knowledge, still nobody has written something which does it.



A while ago I used this with great success:
http://slbd.sourceforge.net/

It's open source (bsd!) and written for OpenBSD and pf. Unfortunately it
seems to have become outdated (won't compile on recent versions
of OpenBSD) because of the changed pf interface. (updating it
probably wouldn't be too much work)

It had the ability to query webservers (http), ping ip addresses, and connect
to specific tcp ports for heartbeat; and it would automatically remove
the address from a pf pool (and optionally run a command) when a
host failed.

It really would be cool if someone updated it (maybe me if I get some
time in the future)

Kevin



Re: pf load balancing and failover

2006-10-21 Thread Henning Brauer
* Alexander Lind <[EMAIL PROTECTED]> [2006-10-20 19:18]:
> OpenBSD's PF load balancing functionality does not support any sort of 
> failover rule rewriting, or conditional rulesets, does it?
> 
> For example, if I have PF round-robin to 4 webservers, and one goes 
> down, is there any way to make PF notice this and remove the downed host 
> from the pool, based on something as simple as missing ping replies?

there should be a userland process doing these checks and removing the 
offending address from the pool on failure. unfortunately, to my 
knowledge, still nobody has written something which does it.

you might be able to achieve the same by using redirects to a table, 
some generic monitoring package and a little scripting.

-- 
Henning Brauer, [EMAIL PROTECTED], [EMAIL PROTECTED]
BS Web Services, http://bsws.de
Full-Service ISP - Secure Hosting, Mail and DNS Services
Dedicated Servers, Rootservers, Application Hosting - Hamburg & Amsterdam



Re: pf load balancing and failover

2006-10-20 Thread Jason Dixon

On Oct 20, 2006, at 12:19 PM, Alexander Lind wrote:

OpenBSD's PF load balancing functionality does not support any sort
of failover rule rewriting, or conditional rulesets, does it?


For example, if I have PF round-robin to 4 webservers, and one goes
down, is there any way to make PF notice this and remove the downed
host from the pool, based on something as simple as missing ping
replies?


Even cooler if it could interface with some SNMP service, like nagios.

If not supported natively, does anyone know of any other software I  
could use to achieve something like this?


http://marc.theaimsgroup.com/?l=openbsd-misc&m=115456485605757&w=2

--
Jason Dixon
DixonGroup Consulting
http://www.dixongroup.net



Re: pf load balancing and failover

2006-10-20 Thread Stuart Henderson
On 2006/10/20 17:19, Alexander Lind wrote:
> For example, if I have PF round-robin to 4 webservers, and one goes 
> down, is there any way to make PF notice this and remove the downed 
> host from the pool, based on something as simple as missing ping 
> replies?

carp is good for this. run it on the backends, and load-balance to
protected addresses. even if you have other mechanisms to take failed
servers out of the pool (which you may want e.g. in case httpd is dead
but the box is alive) this is a useful backup mechanism.
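As a sketch of that suggestion: each backend protects its pool address with carp, so a surviving backend takes the address over if the box dies. The interface, vhid, and password below are illustrative assumptions, e.g. in /etc/hostname.carp0 on the host normally serving 10.0.0.13:

```
# carp-protected service address (all values are examples);
# vhid and pass must match on every host sharing this address,
# and the lowest advskew wins mastership
inet 10.0.0.13 255.255.255.0 NONE vhid 13 pass examplepass carpdev em0 advskew 0
```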

> Even cooler if it could interface with some SNMP service, like nagios.
> 
> If not supported natively, does anyone know of any other software I 
> could use to achieve something like this?

monit (in ports) has a reasonable range of checks and is designed
so that it can take corrective action itself - as well as a bunch of
on-host checks (checks processes, file changes, cpu%) it can check
other machines too,

   check host ...
   if failed url ...
   then exec ...

main thing I don't like is it's a bit over-fond of pid files,
but that doesn't affect checking other hosts obviously.

nagios has a better range of options for bothering you so they're
somewhat complementary.
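Fleshing out that fragment, a monit check along these lines could pull a dead backend out of a pf table; the host name, URL, table name, and helper script are hypothetical:

```
# monitrc sketch: watch a backend's health URL and, on failure,
# run a (hypothetical) script that removes it from the wwwfarm table
check host web1 with address 10.0.0.13
    if failed url http://10.0.0.13/check timeout 10 seconds
       then exec "/usr/local/sbin/removeweb wwwfarm 10.0.0.13"
```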



pf load balancing and failover

2006-10-20 Thread Alexander Lind
OpenBSD's PF load balancing functionality does not support any sort of 
failover rule rewriting, or conditional rulesets, does it?


For example, if I have PF round-robin to 4 webservers, and one goes 
down, is there any way to make PF notice this and remove the downed host 
from the pool, based on something as simple as missing ping replies?


Even cooler if it could interface with some SNMP service, like nagios.

If not supported natively, does anyone know of any other software I 
could use to achieve something like this?


Alec