On Sunday 13 April 2008 05:43:50 Brian Butterworth wrote:
> On 12/04/2008, Michael <[EMAIL PROTECTED]> wrote:
> > On Saturday 12 April 2008 05:57:49 Brian Butterworth wrote:
> > > If it were all doing using HTTP it would be easily cached, of course,
...
> > Ignores the fact that most caches will not cache objects over a certain
> > size. 
..
> Every proxy server I have set-up allows you to configure this!  

Indeed. I even mentioned that myself, when I noted that large objects would 
probably need to be whitelisted (with a per-domain limit). That said, it 
undermines the purpose of the proxy. If you change the core aim from saving 
time to saving bandwidth, then whitelisting _sufficient_ numbers of large 
objects will drive the overall cache hit rate down, since that rate is 
dominated by small objects. (You can often get better bandwidth savings by 
targeting sufficiently popular large files, but that reduces the space 
available for the massively more popular small files.)
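To make that tradeoff concrete, here's a rough back-of-the-envelope sketch. All the workload numbers are invented for illustration, and the "fraction that fits" model is deliberately crude - but it shows how handing most of a cache to whitelisted large files can raise the *byte* hit rate while the *request* hit rate, which small objects dominate, collapses.

```python
# Hypothetical workload: a 10 GB cache, lots of small popular objects,
# a few large popular files. Numbers are illustrative only.

# (object count, size in GB each, requests per object)
small = (100_000, 0.0001, 50)   # 100k x 100 KB objects, 50 requests each
large = (20, 1.0, 500)          # 20 x 1 GB files, 500 requests each

def stats(cache_small_gb, cache_large_gb):
    """Return (request hit rate, byte hit rate) under a crude model
    where a class's hit rate equals the fraction of it that fits."""
    small_fit = min(1.0, cache_small_gb / (small[0] * small[1]))
    large_fit = min(1.0, cache_large_gb / (large[0] * large[1]))
    hits = small[0] * small[2] * small_fit + large[0] * large[2] * large_fit
    reqs = small[0] * small[2] + large[0] * large[2]
    hit_gb = (small[0] * small[1] * small[2] * small_fit
              + large[0] * large[1] * large[2] * large_fit)
    req_gb = small[0] * small[1] * small[2] + large[0] * large[1] * large[2]
    return hits / reqs, hit_gb / req_gb

print(stats(10, 0))  # all 10 GB for small objects: ~99.8% request hit rate
print(stats(2, 8))   # whitelist large files into 8 GB: request hits collapse
```

With these made-up numbers the byte hit rate roughly octuples, while the request hit rate falls from ~99.8% to ~20% - exactly the "bandwidth up, hit rate down" trade described above.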

If this goes down too far, then whilst your lower quartile response times will 
still be dominated by the time it takes to serve a hit, your median and upper 
quartile response times become dominated by the time it takes to serve a 
miss. The user experience becomes massively uneven - with some things served 
incredibly quickly and some things (by comparison) incredibly slowly.
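A toy illustration of the quartile effect - the service times and hit rates here are assumed, not measured from any real deployment:

```python
import statistics

HIT_MS, MISS_MS = 20, 2000   # assumed service times: fast hit, slow miss

def quartiles(hit_rate, n=1000):
    """Response-time quartiles for a synthetic mix of hits and misses."""
    hits = int(n * hit_rate)
    times = [HIT_MS] * hits + [MISS_MS] * (n - hits)
    q1, q2, q3 = statistics.quantiles(times, n=4)
    return q1, q2, q3

# Healthy hit rate: every quartile is dominated by the cost of a hit.
print(quartiles(0.8))   # (20.0, 20.0, 20.0)

# Depressed hit rate: the median and upper quartile flip to miss time.
print(quartiles(0.3))   # (20.0, 2000.0, 2000.0)
```

The lower quartile barely moves, but the median jumps two orders of magnitude - which is the uneven experience users actually notice.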

The user may actually (from an objective viewpoint) be experiencing a quicker 
response time overall, but in that scenario they will believe they are 
getting a significantly worse service.

This isn't theoretical; I've seen it in a wide range of caching deployments, 
from small companies and universities through to international ISPs, where 
I've deployed (or troubleshot) caching systems.

When that happens, the users DO complain and push for the caching system to
be turned off. In extreme cases users DO vote with their feet.

> If this is really a problem, then you could set up a server for each ISP
> with the files copied on their network with the Iplayer software being
> redirected to the fastest file when available.
>
> So, if you watch a programme on a BT (Phorm! boo, hiss) ISP line, you get
> the stream from iplayer.btinternet.com, on talktalk from
> iplayer.talktalk.com etc.

Here you're talking about deploying a content distribution network with 
servers inside the ISPs - essentially redirecting requests to the closest 
possible servers. This is precisely what many CDNs (including Akamai) do. If 
the BBC wanted to build out something similar, then something based on 
Scattercast [1] would work well. 

[1] http://www.cs.berkeley.edu/~brewer/papers/scattercast-mmsj.pdf
     http://research.chawathe.com/people/yatin/publications/thesis-single.pdf

A more bite-sized overview:
http://research.chawathe.com/people/yatin/publications/talks/stanford-netseminar.ppt
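The ISP-mirror redirect suggested above could be sketched as a simple prefix lookup. Everything here is hypothetical - the address blocks are made up, and the mirror hostnames just echo the examples from the quoted suggestion:

```python
import ipaddress

# Hypothetical mapping of ISP address blocks to in-network mirrors.
# The prefixes are invented; the hostnames follow the quoted examples.
MIRRORS = {
    ipaddress.ip_network("81.128.0.0/12"): "iplayer.btinternet.com",
    ipaddress.ip_network("62.24.128.0/17"): "iplayer.talktalk.com",
}
ORIGIN = "iplayer.bbc.co.uk"  # fallback when the client isn't on a known ISP

def pick_mirror(client_ip: str) -> str:
    """Redirect a client to its ISP's mirror, or the origin otherwise."""
    addr = ipaddress.ip_address(client_ip)
    for net, host in MIRRORS.items():
        if addr in net:
            return host
    return ORIGIN

print(pick_mirror("81.129.0.1"))   # a BT-range client -> BT mirror
print(pick_mirror("8.8.8.8"))      # unknown network -> origin
```

In practice a CDN does this with DNS or HTTP redirects and far richer network maps, but the core decision is this lookup.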

I've deployed something based on its principles in the past (at a different 
employer), integrated with a caching infrastructure. Scattercast was 
commercialised by a company called Fast Forward, which disappeared a fair few 
years back, but the approach is sound and reimplementable, given that the PhD 
thesis linked is sufficiently detailed.

Even if that approach is patented (unknown, and next to impossible to check 
whether *no* part of the system is patented), there are a multitude of other 
approaches that can be taken, since Scattercast works essentially by doing 
multicast at the application layer, with application-level units (a complete 
GOP, for example, rather than an IP packet).

The implementation I worked with essentially performed its internal routing 
based on RIP with a static network definition, but there's no reason that 
couldn't be done in a more modern, dynamic way.
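A minimal sketch of that idea - my own illustration, not the Scattercast code: overlay nodes compute next hops with a distance-vector pass (Bellman-Ford, the algorithm RIP is built on) over a static topology, then forward whole application units (e.g. a complete GOP) hop by hop. All the node names and costs are invented:

```python
INF = float("inf")

# Static overlay topology (hypothetical): node -> {neighbour: link cost}
TOPO = {
    "origin": {"isp-a": 1, "isp-b": 2},
    "isp-a":  {"origin": 1, "edge-1": 1},
    "isp-b":  {"origin": 2, "edge-2": 1},
    "edge-1": {"isp-a": 1},
    "edge-2": {"isp-b": 1},
}

def next_hops(src):
    """Bellman-Ford from src; returns {dest: first overlay hop},
    i.e. the sort of table a RIP-style exchange would converge to."""
    dist = {n: INF for n in TOPO}
    hop = {}
    dist[src] = 0
    for _ in range(len(TOPO) - 1):        # relax every edge repeatedly
        for u, links in TOPO.items():
            for v, cost in links.items():
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
                    hop[v] = v if u == src else hop[u]
    return hop

def deliver(unit, dests, src="origin"):
    """Forward one application unit (e.g. a GOP) towards each subscriber:
    look up the first overlay hop per destination."""
    table = next_hops(src)
    return {d: table[d] for d in dests}

print(deliver(b"gop-bytes", ["edge-1", "edge-2"]))
```

The point is that routing happens over application-level links and application-level units, so no network-layer multicast support is needed.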

However, there is a flip side. As people have repeatedly said here, some ISPs 
have sold people "unlimited" capacity, which they don't have, simply 
because "current common usage patterns fit a certain bandwidth level" and 
they've built their business model on that basis. 

That "common usage" pattern is changing, and that's hitting those ISPs' 
bottom line. (ISPs who already have a model that passes costs on up front, 
rather than claiming to be unlimited, will naturally tout this.)

Then there are various approaches - you either charge your real costs, you 
seek someone to blame, or you find a way of working *with* content providers 
to reduce costs for both you and them, delivering a better service to your 
customers/their audience. (It's up to those businesses to decide how to
deal with their mismarketing, though I do like the final option myself.)

Caching is part of the picture, CDNs (well, MDNs in this case) another part, 
and ISPs being clearer with their customers yet another. After all, you 
shouldn't be able to claim unlimited for something limited, should you?

I think I've said everything I've got worth saying here, so I'll leave it at 
that. I'm guessing you'll disagree with a substantial amount, so I'll agree 
to disagree with you in advance :-)

(far too nice weather out there right now! :-)


Michael.

-
Sent via the backstage.bbc.co.uk discussion group.  To unsubscribe, please 
visit http://backstage.bbc.co.uk/archives/2005/01/mailing_list.html.  
Unofficial list archive: http://www.mail-archive.com/backstage@lists.bbc.co.uk/
