On Thursday 14 April 2005 17:46, Guido Sohne wrote:

> It is to accelerate the Internet, not the bandwidth manager.

Hmmh, from what I remember about cache accelerators, they are used by ISPs or 
content providers to cache, and thus accelerate, locally hosted websites 
accessed by external browsers.

> We wanted to 
> cache HTTP traffic. The cache (well according to its documentation) is
> capable of recognizing cache hits and not applying the management rules
> to it.

I don't think this really matters, as a customer's bandwidth is already 
limited by the last-mile solution deployed, especially if the last mile is 
rented telco circuits.

> My hope is that applies to transparent caches (port 80 
> interception) and not just when the browser is going to a
> non-transparent proxy (e.g port 3128).

Well, what is your configuration like? It's more or less the same thing, just 
that the latter requires additional customer configuration. Your bandwidth 
policy should be applicable under both circumstances.
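For the archives, transparent interception on a Linux cache box is usually a 
single NAT rule like the one below (a sketch, assuming Squid listens on port 
3128 with interception mode enabled; the interface name is a placeholder):

```shell
# Redirect HTTP traffic arriving from the LAN side to the local
# cache port, instead of requiring browsers to be configured with
# an explicit proxy. Assumes the cache box sits in the forwarding
# path and eth0 faces the clients.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
    -j REDIRECT --to-port 3128
```

With that rule in place, the bandwidth policy sees the same port 80 flows 
whether or not the customer has configured a proxy, which is why the two 
cases should behave alike.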

>
> AFAIK the bandwidth manager would have to look into the HTTP headers in
> order to tell the difference.

Intelligent bandwidth managers can limit capacity utilization by various 
methods, including but not limited to application-layer protocols like HTTP, 
FTP, SMTP, etc. This is good if you are providing a more granular bandwidth 
management service (that customers are paying for), to achieve things like 
QoS and to enforce things like SLAs.

> The setup you describe sounds very nice. The point on the network where
> this script acts is on the main backhaul port. Well almost. It has to
> send the traffic to the router which will contact the uplink and
> downlink providers. By being the slowest point in the network, we can
> shape the traffic because everything else that sends at full speed, will
> propagate the shaping that has been applied at the chokepoint.

I'd suggest you think about placing your bandwidth manager between your border 
router and cache box, especially if it has multiple Ethernet ports rather 
than just an IN/OUT pair.

The other way to police HTTP and non-HTTP traffic equally is to get a switch, 
connect it to one port on the bandwidth manager, place the border router and 
cache box on that switch, and write your policies on the bandwidth manager 
port connecting to the switch.

>
> This is what I meant by not being able to throttle the cache. In this
> case, the bandwidth manager (default gateway for two networks) throttles
> their requests, and sends them to the Internet.

Aaah, so this is the complication. I usually like my bandwidth managers as 
very intelligent bridges, not doing any routing at all. Routing adds an extra 
hop and a point of failure that, IMHO, shouldn't really be there.

Run the bandwidth manager as a bridge, bridging your LAN to your Internet 
border network. It should still be bright enough to read the IP headers going 
through it and apply policies accordingly. 
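On a Linux box this is only a handful of commands (a sketch using the 
bridge-utils tools of the day; interface names and the shaping rate are 
assumptions, not recommendations):

```shell
# Build a two-port transparent bridge between the LAN and the
# Internet border network. The box needs no IP address of its own
# to forward and shape traffic.
brctl addbr br0
brctl addif br0 eth0      # LAN side
brctl addif br0 eth1      # border router side
ip link set br0 up

# Shaping policies still attach to the physical ports, e.g. a
# simple token bucket on the upstream-facing interface:
tc qdisc add dev eth1 root tbf rate 512kbit burst 16kb latency 50ms
```

Because the kernel still sees full IP headers on bridged frames, per-host or 
per-protocol policies keep working exactly as they would on a router.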

More advanced bandwidth managers (that aren't necessarily that expensive) 
include a failover feature that sets up the bandwidth manager as a simple 
wire that will continue to pass traffic in the event of total power loss. 
Customers don't get rate limited, but service is still up.

> The cache is also acting 
> as a firewall and intercepts port 80 traffic using iptables.

You mean as a firewall for the whole network, or just using iptables rules to 
intercept HTTP traffic?

> Even when 
> the cache wants to go at full speed, the rest of the traffic (not
> intercepted on port 80) still needs to get through with minimal delay.
> Hence the traffic shaping ...

Hmmh, does this box forward any packets to your border router?

>
> Your case appears to be easier to manage with. By policies do you mean
> QOS terms of service like Least-Delay, Maximum-Throughput etc?

Yes.

> The 
> client is using a basic router, which they will upgrade soon (hopefully)
> and such refinements may be possible there.

I'd suggest you do those on the edge router they connect to (you can never 
trust policies to be implemented and maintained on customer routers).

>
> What I am trying to do here is to maximize uplink utilization
> (especially by sending ACKs early) so that we can get better
> characteristics on the downlink (new packets are notified to be sent
> faster).

Well, your bandwidth manager, by also being a router, feels like it's in the 
way. If you could do just bridging on it and ensure it has enough memory/CPU 
not to introduce any delay besides that of bandwidth management, you will get 
good performance.
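On a Linux shaper, the send-ACKs-early idea is commonly implemented by lifting 
small TCP ACK segments into the highest-priority band, so the uplink never 
queues them behind bulk uploads. A sketch in the style of the well-known 
Wonder Shaper script (device name and band choice are assumptions):

```shell
# Three-band priority qdisc; band 1:1 is dequeued first.
tc qdisc add dev eth1 root handle 1: prio

# Match TCP packets with only a 20-byte IP header (IHL=5), total
# length under 64 bytes, and the ACK flag set (byte 33 = TCP flags
# when there are no IP options) - i.e. bare ACKs - and steer them
# into the top band.
tc filter add dev eth1 parent 1: protocol ip prio 10 u32 \
    match ip protocol 6 0xff \
    match u8 0x05 0x0f at 0 \
    match u16 0x0000 0xffc0 at 2 \
    match u8 0x10 0xff at 33 \
    flowid 1:1
```

Keeping ACKs flowing on the uplink is what lets the downlink keep clocking 
out new segments at full rate.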

> An additional complication is that the client uses one provider 
> for downlink and another for uplink ...

Never a problem - asymmetric routing is a fact of life, and many an ISP and 
customer deploy such a topology. I have one myself.

> In this setup, the cache responds at full speed if it has the content
> already, and at throttled speed if it does not.

That is very intelligent, and I am impressed by it. My only issue is whether 
it matters, especially if a customer is already on a rate-limited line, e.g. 
a 64Kbps leased line.

> The throttling is done 
> by the bandwidth manager. What I do is to reorder the bandwidth manager
> packets and cache packets to get maximum uplink utilization while
> achieving some decent latency characteristics.

How?

> I am considering using 
> delay pools within the cache to improve how the cache uses the bandwidth.

Delay pools, sounds like Squid. In my experience, delay pools will do no more 
than create a week of headaches, nightmares, sleepless nights and mugs of 
coffee.
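For anyone tempted anyway, a minimal class-1 (single aggregate bucket) pool 
in squid.conf looks roughly like this; the numbers are illustrative, not a 
recommendation:

```
# squid.conf fragment: one delay pool shared by all clients
delay_pools 1
delay_class 1 1
delay_access 1 allow all
# aggregate bucket: refill at 16000 bytes/s, burst up to 64000 bytes
delay_parameters 1 16000/64000
```

The headaches start when you try to make several pools, ACLs, and an external 
bandwidth manager agree on who is throttling whom.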

> We wanted the cache to use all available bandwidth on a dynamic basis.
> If nothing else is using the link, we want the cache to go as fast as it
> can.

This is simple: allocate a CIR as well as Bc (committed burst), and maybe 
even Be (excess burst). Base activation on a threshold or similar criteria, 
e.g. the presence of an additional x Kbps of spare capacity on the link.
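On a Linux bandwidth manager, the closest analogue is an HTB class whose 
`rate` acts as the CIR and whose `ceil` lets it borrow up to line speed when 
the link is otherwise idle. A sketch, with rates, device name, and the 
cache's IP address all assumptions:

```shell
tc qdisc add dev eth1 root handle 1: htb default 20
# Parent class pinned at the (assumed) 512kbit line rate.
tc class add dev eth1 parent 1: classid 1:1 htb rate 512kbit

# Cache traffic: 128kbit guaranteed (CIR), may borrow up to the
# full line rate when spare capacity exists.
tc class add dev eth1 parent 1:1 classid 1:10 htb rate 128kbit \
    ceil 512kbit burst 15k

# Everything else: guaranteed the remaining 384kbit, same ceiling.
tc class add dev eth1 parent 1:1 classid 1:20 htb rate 384kbit \
    ceil 512kbit

# Classify by the cache box's source address (placeholder IP).
tc filter add dev eth1 parent 1: protocol ip prio 1 u32 \
    match ip src 192.0.2.10/32 flowid 1:10
```

HTB's borrowing model gives exactly the "go as fast as it can when nothing 
else is using the link" behaviour, without any external threshold logic.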

> This is proving a bit elusive. I am not sure if I am allowing the 
> link to burst enough, and since HTTP traffic is bursty it is hard to
> tell.

It is something you'd have to teach your bandwidth manager to do.

> What sort of policies have worked best for you? 

Normal bursting techniques have been quite satisfactory, nothing too fancy. 
It was usually all in the package...

> We have one of those. ET/BWMGR. I hate it.

Hmmh, I was actually referring to another manufacturer, and talking about ET. 
These must be new models to compete with some of the legacy manufacturers, 
otherwise my first model was the one I based this thread on. Newer models can 
accept even up to 24 Ethernet/Gig-E ports.

Mark.

>
> -- G.
_______________________________________________
LUG mailing list
[email protected]
http://kym.net/mailman/listinfo/lug
LUG is generously hosted by INFOCOM http://www.infocom.co.ug/
