How involved is the current Rx in the raw I/O performance gap (AFS vs. NFS, FTP, et al.) that people bring up?
I believe Rx is the problem, but more precisely, how Rx is used is the problem. We (NRL) had some work done to put ATM into AFS: Rx datagrams were sent directly over an ATM VC. ATM PDUs are more like UDP than TCP, but that's probably not where the gains were seen.
Gains? Can you give any insight into what the gains were/are? Linear? An order of magnitude? Related to file/transfer size?
The 'segment' size for Rx was increased. This created a 'new' type of jumbogram that wasn't a bunch of little Rx datagrams packaged together (like 'regular' jumbograms). Regular jumbograms are typically handled via scatter/gather, which (IMHO) isn't well implemented on many operating systems.
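
For illustration, here's a rough C sketch of the difference being described: a 'regular' jumbogram stitched together from small segments via sendmsg() and an iovec, versus one big contiguous segment sent in a single call. The segment size, array bound, and function names are made up for the example; they aren't actual Rx code.

    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <string.h>

    #define RX_SEG_SIZE 1412   /* hypothetical per-segment payload size */
    #define MAX_SEGS    16     /* hypothetical cap on segments per jumbogram */

    /* "Regular" jumbogram: many small segments stitched together with
     * scatter/gather.  The kernel must walk the iovec, which (as noted
     * above) isn't efficiently implemented everywhere. */
    ssize_t send_jumbogram_sg(int sock, char segs[][RX_SEG_SIZE], int nsegs)
    {
        struct iovec iov[MAX_SEGS];
        struct msghdr msg;
        int i;

        if (nsegs > MAX_SEGS)
            nsegs = MAX_SEGS;
        memset(&msg, 0, sizeof(msg));
        for (i = 0; i < nsegs; i++) {
            iov[i].iov_base = segs[i];
            iov[i].iov_len  = RX_SEG_SIZE;
        }
        msg.msg_iov    = iov;
        msg.msg_iovlen = nsegs;
        return sendmsg(sock, &msg, 0);  /* one UDP datagram, many pieces */
    }

    /* The ATM-style alternative: one big segment in one contiguous
     * buffer -- no iovec walk at all. */
    ssize_t send_big_segment(int sock, const char *buf, size_t len)
    {
        return send(sock, buf, len, 0);
    }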
NOD.
I believe (and I wish I had some time to work on this -- really) that if you just junked the current jumbogram scheme and made Rx datagrams bigger (reducing them for congestion control), you could see some pretty substantial performance increases. Apparently jumbograms are the way they are because people wanted a form of congestion control on AFS (controlling the number of Rx datagrams in a packet).
I was wondering if a dynamic approach would be a good match, because servers have to deal with 10G Ethernet as well as T1 and dialup connections, and with transfer sizes that can vary dramatically. So a scheme that starts small, dynamically grows until saturation, then levels out may be a suitable approach.
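
Something like the following could capture both ideas above (bigger datagrams with reduction for congestion control, and grow-until-saturation): a minimal sketch, assuming a per-connection probe that grows the datagram size while round trips succeed and halves it on loss. The struct and constants are hypothetical, not part of Rx.

    #include <stddef.h>

    struct seg_probe {
        size_t cur;     /* current datagram payload size */
        size_t min;     /* floor, e.g. a safe ~1400-byte payload for WAN/dialup */
        size_t max;     /* ceiling, e.g. what a 10G LAN path allows */
    };

    /* Called once per round trip with the observed outcome. */
    void seg_probe_update(struct seg_probe *p, int saw_loss)
    {
        if (saw_loss) {
            /* multiplicative decrease, like classic congestion control */
            p->cur /= 2;
            if (p->cur < p->min)
                p->cur = p->min;
        } else {
            /* grow toward the ceiling; levels out once it's reached */
            p->cur += p->cur / 2;
            if (p->cur > p->max)
                p->cur = p->max;
        }
    }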
Also, what about a hybrid (UDP/TCP) approach? Perhaps keeping UDP for most calls (for compatibility) and using TCP to _stream_ the "bulk" data (i.e., file reads/writes) as a "featured" option if both ends support it.
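
As a sketch of how that negotiation might look (the capability bit and the function are invented for the example, not an existing Rx API): keep ordinary calls on UDP, and only open a TCP stream for the bulk data when both ends advertise support, falling back to UDP otherwise.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    #define CAP_BULK_TCP  0x0001   /* hypothetical capability bit */

    /* Client side: returns a connected TCP fd for bulk data, or -1,
     * in which case the caller streams over UDP/Rx as usual. */
    int open_bulk_channel(unsigned int my_caps, unsigned int peer_caps,
                          struct sockaddr_in *server)
    {
        int fd;

        if (!(my_caps & peer_caps & CAP_BULK_TCP))
            return -1;                      /* no agreement: stay on UDP */

        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)server, sizeof(*server)) < 0) {
            close(fd);
            return -1;                      /* again, fall back to UDP */
        }
        return fd;                          /* stream file reads/writes here */
    }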
