Re: Linux routing performance

2011-05-05 Thread John Marrett
James,

Maybe you could explain your network structure a little better to us.

What networks do you reach on your internal network? Is it a limited set?

I assume that you need to reach all internet addresses on the public interface?

So far nothing in your description suggests that you couldn't address
your internal network needs with routes for specific networks instead
of a default route.

-JohnF



Re: Linux routing performance

2011-05-05 Thread Willy Tarreau
On Thu, May 05, 2011 at 10:22:55AM -0400, James Bardin wrote:
> On Thu, May 5, 2011 at 7:02 AM, Willy Tarreau  wrote:
> 
> >
> > I have no idea why ip rules impact performance that much for you.
> > Anyway, since you're dealing with two interfaces, you can explicitly
> > bind haproxy to each of them and still have a default route on each
> > interface. The trick is to use a different metric so that you can have
> > two default routes.
> >
> > For instance:
> >
> >  ip route add default via 10.0.0.1 dev eth0
> >  ip route add default via 192.168.0.1 dev eth1 metric 2
> >
> 
> I hadn't tried a default with a different metric, but no, still
> doesn't work. Packets outside of the local subnets still end up
> leaving through the first default route, which is why I have to move
> the packets through another routing table with its own default. Note
> that this, and the previous suggestions, do work on most people's
> networks, but our strict reverse path checking makes this more
> complex.

My suggestion was meant to address the strict reverse-path too.
Did you have the "bind ... interface ..." statement in your haproxy
config?

Willy




Re: Linux routing performance

2011-05-05 Thread James Bardin
On Thu, May 5, 2011 at 7:02 AM, Willy Tarreau  wrote:

>
> I have no idea why ip rules impact performance that much for you.
> Anyway, since you're dealing with two interfaces, you can explicitly
> bind haproxy to each of them and still have a default route on each
> interface. The trick is to use a different metric so that you can have
> two default routes.
>
> For instance:
>
>  ip route add default via 10.0.0.1 dev eth0
>  ip route add default via 192.168.0.1 dev eth1 metric 2
>

I hadn't tried a default with a different metric, but no, still
doesn't work. Packets outside of the local subnets still end up
leaving through the first default route, which is why I have to move
the packets through another routing table with its own default. Note
that this, and the previous suggestions, do work on most people's
networks, but our strict reverse path checking makes this more
complex.


Thanks Willy,
-jim



Re: Linux routing performance

2011-05-05 Thread Willy Tarreau
Hi James,

On Wed, May 04, 2011 at 09:32:04AM -0400, James Bardin wrote:
> This isn't the end of the world if it's unsolvable, as I can request
> that all load-balancing service IPs be public for now, and spin up
> another haproxy pair for private services if there is a specific
> requirement.
> 
> I was just hoping there was some kernel sysctl or ip parameter that
> could affect routing performance. I'm kind of curious as to why this
> ip rule impacts performance so much. Maybe reassigning the outgoing
> interface is expensive?

I have no idea why ip rules impact performance that much for you.
Anyway, since you're dealing with two interfaces, you can explicitly
bind haproxy to each of them and still have a default route on each
interface. The trick is to use a different metric so that you can have
two default routes.

For instance:

  ip route add default via 10.0.0.1 dev eth0
  ip route add default via 192.168.0.1 dev eth1 metric 2

Then in haproxy :

frontend pub
 bind 10.0.0.10:80 interface eth0

frontend priv
 bind 192.168.0.10:80 interface eth1

That way incoming connections will be bound to a specific interface
and will use the default gateway associated with this interface.
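
A quick way to check which gateway the kernel will pick for each interface (a sketch; 1.2.3.4 stands in for an arbitrary external address, and the device names come from the example above). Binding a socket to an interface restricts the route lookup to that device, which the `oif` keyword simulates:

```shell
# With the two default routes above, each lookup should resolve to
# that device's own gateway despite the differing metrics:
ip route get 1.2.3.4 oif eth0
ip route get 1.2.3.4 oif eth1
```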

Regards,
Willy




Re: Linux routing performance

2011-05-04 Thread James Bardin
Thanks guys,

On Tue, May 3, 2011 at 10:50 PM, Joseph Hardeman  wrote:

>
> route add -net 192.168.1.16 netmask 255.255.255.240 gw 10.0.0.1
>

A simple route doesn't work in this case, as the packets have to leave
via the correct interface as well, or they will be dropped by the
reverse-path checking. Linux will route them correctly by default, but
they will still always leave via the interface with the default
gateway.


>
> On Tue, May 3, 2011 at 10:39 PM, Jon Watte  wrote:
>>
>> Does the internal network need a gateway at all?

The internal network is routed throughout the campus, so I may have
backend servers with private IPs, which aren't in my subnet.


This isn't the end of the world if it's unsolvable, as I can request
that all load-balancing service IPs be public for now, and spin up
another haproxy pair for private services if there is a specific
requirement.

I was just hoping there was some kernel sysctl or ip parameter that
could affect routing performance. I'm kind of curious as to why this
ip rule impacts performance so much. Maybe reassigning the outgoing
interface is expensive?
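
On a 2.6.18-era RHEL5 kernel, one thing worth inspecting is whether the route cache is taking the slow path for this traffic (a diagnostic sketch; file locations and output format vary by kernel version, and the route cache was removed entirely in much later kernels):

```shell
# Per-CPU route cache statistics (cache hits vs. slow-path lookups
# that had to walk the FIB and the rule list):
cat /proc/net/stat/rt_cache
# Entries currently held in the route cache:
ip route show cache | head
```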

Thanks,
-jim



Re: Linux routing performance

2011-05-03 Thread Joseph Hardeman
Hi James,

I would agree with jw.  If your internal network is all on the same subnet,
you don't need the second gateway.  If you are routing to different
subnets on the internal network, you can simply add route statements
directing that traffic to the internal router instead of adding a second
gateway on the haproxy server.

For instance:

route add -net 192.168.1.16 netmask 255.255.255.240 gw 10.0.0.1
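
For reference, the same static route in iproute2 syntax (255.255.255.240 is a /28):

```shell
ip route add 192.168.1.16/28 via 10.0.0.1
```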

Joe

On Tue, May 3, 2011 at 10:39 PM, Jon Watte  wrote:

> Does the internal network need a gateway at all?
>
> We run a very similar set-up, HAProxy listening on a public network, and
> forwarding TCP connections to servers on an internal network. Because all
> the servers are on the same 10/8 subnet, no default gateway is needed.
>
> Sincerely,
>
> jw
>
>
> Jon Watte, IMVU.com
> We're looking for awesome people! http://www.imvu.com/jobs/
>
>
>
>
> On Tue, May 3, 2011 at 7:41 AM, James Bardin  wrote:
>
>> Hello,
>>
>> This isn't necessarily an haproxy question, but I'm having trouble
>> finding a good resource, so I'm hoping some of the other experienced
>> people on this list may be able to help.
>>
>> Setup:
>> I have a load balancer configuration that needs to be multi-homed
>> across a private and public network. Both networks have strict reverse
>> path checking, so packets must be routed out their corresponding
>> interface, instead of a single default (each interface essentially has
>> its own default gateway).
>>
>> The public net is eth0, so it gets the real default gateway. The
>> routing rules take any private-net packets, and send them out the
>> correct interface, to the private-net gateway.
>>
>> 
>> ip route add default via 10.0.0.1 dev eth1 table 10
>> ip rule add from 10.0.0.0/8 table 10
>> 
>>
>> Result:
>> What I've noticed is that any traffic handled by this one routing
>> decision drops the overall throughput to about 30% (it also seems to add
>> about 0.5ms to the RTT). Haproxy can handle about 1.5Gb/s of TCP
>> traffic on the public network, but only about 500Mb/s through the
>> private (there's an even greater skew when I remove haproxy, because
>> my link is close to 3Gb/s). Adding another cpu, and using interrupt
>> coalescing reduced the system cpu time, and brought down the
>> context-switches, but didn't increase performance at all.
>>
>> Any other tuning options I might try? I'm running the latest RHEL5
>> kernel at the moment (I haven't tried bringing up new machines with a
>> newer kernel yet).
>>
>>
>> Thanks,
>>
>> --
>> James Bardin 
>> Systems Engineer
>> Boston University IS&T
>>
>>
>


Re: Linux routing performance

2011-05-03 Thread Jon Watte
Does the internal network need a gateway at all?

We run a very similar set-up, HAProxy listening on a public network, and
forwarding TCP connections to servers on an internal network. Because all
the servers are on the same 10/8 subnet, no default gateway is needed.

Sincerely,

jw


Jon Watte, IMVU.com
We're looking for awesome people! http://www.imvu.com/jobs/



On Tue, May 3, 2011 at 7:41 AM, James Bardin  wrote:

> Hello,
>
> This isn't necessarily an haproxy question, but I'm having trouble
> finding a good resource, so I'm hoping some of the other experienced
> people on this list may be able to help.
>
> Setup:
> I have a load balancer configuration that needs to be multi-homed
> across a private and public network. Both networks have strict reverse
> path checking, so packets must be routed out their corresponding
> interface, instead of a single default (each interface essentially has
> its own default gateway).
>
> The public net is eth0, so it gets the real default gateway. The
> routing rules take any private-net packets, and send them out the
> correct interface, to the private-net gateway.
>
> 
> ip route add default via 10.0.0.1 dev eth1 table 10
> ip rule add from 10.0.0.0/8 table 10
> 
>
> Result:
> What I've noticed is that any traffic handled by this one routing
> decision drops the overall throughput to about 30% (it also seems to add
> about 0.5ms to the RTT). Haproxy can handle about 1.5Gb/s of TCP
> traffic on the public network, but only about 500Mb/s through the
> private (there's an even greater skew when I remove haproxy, because
> my link is close to 3Gb/s). Adding another cpu, and using interrupt
> coalescing reduced the system cpu time, and brought down the
> context-switches, but didn't increase performance at all.
>
> Any other tuning options I might try? I'm running the latest RHEL5
> kernel at the moment (I haven't tried bringing up new machines with a
> newer kernel yet).
>
>
> Thanks,
>
> --
> James Bardin 
> Systems Engineer
> Boston University IS&T
>
>


Linux routing performance

2011-05-03 Thread James Bardin
Hello,

This isn't necessarily an haproxy question, but I'm having trouble
finding a good resource, so I'm hoping some of the other experienced
people on this list may be able to help.

Setup:
I have a load balancer configuration that needs to be multi-homed
across a private and public network. Both networks have strict reverse
path checking, so packets must be routed out their corresponding
interface, instead of a single default (each interface essentially has
its own default gateway).

The public net is eth0, so it gets the real default gateway. The
routing rules take any private-net packets, and send them out the
correct interface, to the private-net gateway.


ip route add default via 10.0.0.1 dev eth1 table 10
ip rule add from 10.0.0.0/8 table 10


Result:
What I've noticed is that any traffic handled by this one routing
decision drops the overall throughput to about 30% (it also seems to add
about 0.5ms to the RTT). Haproxy can handle about 1.5Gb/s of TCP
traffic on the public network, but only about 500Mb/s through the
private (there's an even greater skew when I remove haproxy, because
my link is close to 3Gb/s). Adding another cpu, and using interrupt
coalescing reduced the system cpu time, and brought down the
context-switches, but didn't increase performance at all.
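
For anyone reproducing this, the policy database and the extra table from the setup above can be listed with:

```shell
ip rule list            # the added rule plus the default 32766/32767 entries
ip route show table 10  # the private-side default route
```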

Any other tuning options I might try? I'm running the latest RHEL5
kernel at the moment (I haven't tried bringing up new machines with a
newer kernel yet).


Thanks,

-- 
James Bardin 
Systems Engineer
Boston University IS&T