I think that the technology Gavin is thinking of is more about economising on the content being sent rather than tweaking TCP parameters.
For those who aren't familiar with WAN compression technologies: they were probably made most famous by Riverbed's products (but there are many others, including many of the big names in networking). What these do is basically put a hashing and caching box at each end of the link. The sending optimiser computes some sort of hash over each block of data payload it sends towards the receiver. If the hash doesn't match a previously saved block, it sends the block on to the receiving optimiser, which saves the new block along with its hash and forwards it on to the next-hop recipient. However, if the block matches something previously sent, the sender merely needs to send the hash (or some other index). The receiving optimiser can then recover the matching block from its cache and forward it on its side of the WAN. So effectively the block of data has been transferred without actually being sent. (This is similar to rsync; the difference is that rsync operates at the file level, whereas WAN optimisation interrogates the TCP and UDP payloads directly.)

Of course the assumption here is that over the course of a day (or weeks) there is quite a bit of repetition in the traffic. Mail, web, corporate apps and management traffic often contain the same chunks of data, at the byte level, repeated over and over. (Think of a mail with an attachment being sent to multiple recipients, then being forwarded on to others, and finally the attachment being saved to a file server: the same pattern of bytes is sent over the link again and again.) There are other algorithms that involve sending just the deltas, and of course plain vanilla gzip-like compression. Some large-enterprise WANs have achieved over 90% reduction in WAN traffic using these techniques. The whole aim is for the installation of this device to be totally transparent to the applications - apart from the increase in performance and lower link utilisation.
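The hash-and-cache scheme above can be sketched in a few lines of Python. This is just a toy model (fixed-size blocks, SHA-1 hashes, an in-memory dict as the cache - real products use content-defined chunk boundaries and disk-backed stores), but it shows how a repeated block collapses to a short hash reference on the wire:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed-size blocking


class Optimiser:
    """Toy model of one hash-and-cache box. Both ends of the link keep
    equivalent hash -> block caches, so a block seen before can be
    replaced on the wire by its (much shorter) hash."""

    def __init__(self):
        self.cache = {}

    def encode(self, payload):
        """Sender side: turn a payload into a list of wire tokens."""
        tokens = []
        for i in range(0, len(payload), BLOCK_SIZE):
            block = payload[i:i + BLOCK_SIZE]
            h = hashlib.sha1(block).digest()
            if h in self.cache:
                tokens.append(("ref", h))          # seen before: hash only
            else:
                self.cache[h] = block
                tokens.append(("raw", h, block))   # new block: send in full
        return tokens

    def decode(self, tokens):
        """Receiver side: rebuild the payload, caching any new blocks."""
        out = []
        for tok in tokens:
            if tok[0] == "raw":
                _, h, block = tok
                self.cache[h] = block
                out.append(block)
            else:
                out.append(self.cache[tok[1]])     # recover from cache
        return b"".join(out)
```

Sending the same payload a second time produces nothing but `ref` tokens - the 20-byte hashes cross the link instead of the 4 KB blocks, which is where the large traffic reductions come from.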
I have also had a bit of a look around for anything like this in the open-source space, and have not found anything. I imagine the algorithms in rsync and squid could be put to play here. The only issue is that you would probably end up running into quite a few patents that might cover some of the techniques being employed. Certainly a project of value if someone has the time!

Martin Visser
Technology Consultant
Consulting & Integration
Technology Solutions Group - HP Services

410 Concord Road
Rhodes NSW 2138 Australia
Mobile: +61-411-254-513
Fax: +61-2-9022-1800
E-mail: martin.visserAThp.com

This email (including any attachments) is intended only for the use of the individual or entity named above and may contain information that is confidential, proprietary or privileged. If you are not the intended recipient, please notify HP immediately by return email and then delete the email, destroy any printed copy and do not disclose or use the information in it.

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of James Gregory
Sent: Wednesday, 25 July 2007 10:55 AM
To: Gavin Carr
Cc: SLUG
Subject: Re: [SLUG] WAN link optimisation

On Tue, 2007-07-24 at 13:37 +1000, Gavin Carr wrote:
> Hey sluggers,
>
> Anyone have any pointers to open source projects (or features of
> projects) around WAN link optimisation? I'm specifically looking for a
> way of duplicating traffic across multiple links to avoid resends on
> high latency links, but I'm interested in the whole area.

Hey Gavin,

Don't have references handy, but I expect that you would be able to improve performance substantially by tinkering with some values in /proc. Specifically I'm thinking of selecting a different retransmit algorithm for TCP, and enabling the various 'smart ACK' schemes available these days. All the usual stuff like increasing send and receive buffers and enabling window scaling applies as well.
There's some good stuff about tuning Linux for trans-continental links out there, which I've previously found with Google.

HTH,
James.

--
James Gregory -- http://codelore.com -- [EMAIL PROTECTED]

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
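The /proc tuning James mentions usually boils down to a handful of sysctls. A minimal sketch, assuming a 2.6-era kernel (the keys below are real, but sensible values depend entirely on your link's bandwidth-delay product, so treat the numbers as placeholders):

```
# /etc/sysctl.conf fragment for high-latency links (illustrative values)
net.ipv4.tcp_window_scaling = 1          # allow windows beyond 64 KB
net.ipv4.tcp_sack = 1                    # selective ACKs ("smart ACK")
net.core.rmem_max = 16777216             # max receive buffer, bytes
net.core.wmem_max = 16777216             # max send buffer, bytes
net.ipv4.tcp_rmem = 4096 87380 16777216  # min / default / max receive
net.ipv4.tcp_wmem = 4096 65536 16777216  # min / default / max send
```

Apply with `sysctl -p` and re-test throughput; on a clean low-latency LAN these changes make little difference, which is why they only show up in trans-continental tuning guides.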