Mark Tinka wrote:

Not sure I understand why you would want to accelerate for a bandwidth manager. Unlikely you are using it as a web server?


It is to accelerate the Internet, not the bandwidth manager. We wanted to cache HTTP traffic. The cache (according to its documentation) is capable of recognizing cache hits and not applying the management rules to them. My hope is that this applies to transparent caching (port 80 interception) and not just when the browser is configured to use a non-transparent proxy (e.g. port 3128).
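For the record, the interception itself is just a NAT redirect on the cache box. A sketch of the iptables side (the interface name eth1 and Squid's default port 3128 are assumptions; the cache also has to be configured for transparent/accelerator mode for this to work):

```shell
# Redirect HTTP traffic arriving on the LAN-facing interface (eth1 is
# an assumption) to the local cache listening on port 3128.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128
```

With this in place the browsers need no proxy settings at all; whether the bandwidth manager can still tell hits from misses is a separate question, since after the redirect all it sees is the cache's traffic.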

AFAIK the bandwidth manager would have to look into the HTTP headers in order to tell the difference.

If the cache sits before the bandwidth manager, then it's hard to control its bandwidth except by using a single IP address (meaning all customers get the same QoS). If it sits after the bandwidth manager, then the cache itself is not managed ...



Hmmh, not necessarily... in my experience (bandwidth) managing cache access, I have used a bandwidth manager that has multiple Ethernet interfaces to which you can attach devices or elements you want to manage. In my case, I have 5 ports, one for backhauling all LAN traffic to/from the Internet and the other four to manage the devices I need to manage. All 5 interfaces can accept policies, so it doesn't really matter where you put the policies as traffic will have to go through the main backhaul port anyway.


The setup you describe sounds very nice. The point in the network where this script acts is the main backhaul port. Well, almost: it has to send the traffic to the router, which contacts the uplink and downlink providers. Being the slowest point in the network lets us shape the traffic, because everything else sends at full speed and so the shaping applied at the chokepoint propagates back through the connections.

This is what I meant by not being able to throttle the cache. In this case, the bandwidth manager (default gateway for two networks) throttles their requests, and sends them to the Internet. The cache is also acting as a firewall and intercepts port 80 traffic using iptables. Even when the cache wants to go at full speed, the rest of the traffic (not intercepted on port 80) still needs to get through with minimal delay. Hence the traffic shaping ...

Your case appears to be easier to manage. By policies, do you mean QoS terms of service like Least-Delay, Maximum-Throughput, etc.? The client is using a basic router, which they will (hopefully) upgrade soon, and such refinements may be possible then.

What I am trying to do here is to maximize uplink utilization (especially by sending ACKs early) so that we get better characteristics on the downlink (the far end is told to send new packets sooner). An additional complication is that the client uses one provider for the downlink and another for the uplink ...
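The "send ACKs early" part can be done with a priority qdisc and a u32 filter that picks out bare TCP ACKs, in the style popularized by wondershaper. A sketch (eth0 as the uplink interface is an assumption):

```shell
# Attach a priority qdisc to the uplink interface (eth0 is an assumption).
tc qdisc add dev eth0 root handle 1: prio

# Classify bare TCP ACKs into the highest-priority band (1:1) so they
# leave ahead of bulk upstream data:
#   - ip protocol 6        : TCP
#   - u8 0x05 at 0         : IP header length 5 words (no IP options)
#   - u16 < 64 at 2        : small total length, i.e. no payload
#   - u8 0x10 at 33        : TCP flags byte is exactly ACK
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
   match ip protocol 6 0xff \
   match u8 0x05 0x0f at 0 \
   match u16 0x0000 0xffc0 at 2 \
   match u8 0x10 0xff at 33 \
   flowid 1:1
```

Getting ACKs out promptly on a saturated uplink is usually what unlocks the downlink, since the remote sender's window only advances as fast as the ACKs arrive.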

If you apply your policies on the interface connected to the cache, for your various networks, your customers will be redirected to your cache box at whatever speed you have configured for their IP address, and the cache will respond to them at whatever speed you configured for their IP address on the same port.


In this setup, the cache responds at full speed if it has the content already, and at throttled speed if it does not. The throttling is done by the bandwidth manager. What I do is to reorder the bandwidth manager packets and cache packets to get maximum uplink utilization while achieving some decent latency characteristics. I am considering using delay pools within the cache to improve how the cache uses the bandwidth.
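If you do try delay pools, a minimal squid.conf sketch for a class-2 pool (one aggregate cap plus a per-client rate; the numbers below are placeholders, not tuned values) looks like this:

```
# squid.conf sketch: one class-2 delay pool.
# Class 2 = aggregate bucket + per-host buckets (restore/max in bytes).
delay_pools 1
delay_class 1 2
delay_access 1 allow all
# aggregate 64 KB/s, each client host 16 KB/s
delay_parameters 1 65536/65536 16384/16384
```

Note that delay pools only throttle what the cache serves; hits can still be exempted, which fits the behaviour you describe.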

On the port connected to your border router, you add a policy for your cache box and allocate it the appropriate bandwidth as necessary, in effect, managing how much bandwidth your cache uses to handle customer requests. You will also need to add your other networks on this same port to manage traffic heading straight for the router as the port connected to the cache only gets to deal with HTTP traffic.


We wanted the cache to use all available bandwidth on a dynamic basis. If nothing else is using the link, we want the cache to go as fast as it can. This is proving a bit elusive. I am not sure if I am allowing the link to burst enough, and since HTTP traffic is bursty it is hard to tell. What sort of policies have worked best for you?
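That "use everything when idle, back off when busy" behaviour is exactly what HTB's rate/ceil borrowing gives you on Linux. A sketch, where eth0, the 512kbit link speed, and the class split are all assumptions:

```shell
# HTB: 'rate' is a class's guarantee, 'ceil' is what it may borrow up to
# from idle siblings. eth0 and 512kbit link speed are assumptions.
tc qdisc add dev eth0 root handle 1: htb default 20
tc class add dev eth0 parent 1:  classid 1:1  htb rate 512kbit ceil 512kbit
# cache traffic: guaranteed 128kbit, may borrow up to the full link
tc class add dev eth0 parent 1:1 classid 1:10 htb rate 128kbit ceil 512kbit
# everything else: guaranteed 384kbit, may also borrow up to the full link
tc class add dev eth0 parent 1:1 classid 1:20 htb rate 384kbit ceil 512kbit
```

With ceil set to the link rate on both leaf classes, the cache runs flat out when nothing else is sending, and drops back to its guarantee under contention; HTB's `burst`/`cburst` parameters can then be raised if bursty HTTP still isn't filling the pipe.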

I found this to work for me very well!

Some bandwidth managers are evil: they only come with 2 ports, IN and OUT. That doesn't offer much flexibility.


We have one of those. ET/BWMGR. I hate it.

-- G.
_______________________________________________
LUG mailing list
[email protected]
http://kym.net/mailman/listinfo/lug
LUG is generously hosted by INFOCOM http://www.infocom.co.ug/
