On Fri, 4 Jun 2010, Sandon Van Ness wrote:
The problem is that just using rsync I am not getting gigabit. For me gigabit maxes out at around 930-940 megabits. With rsync alone I was only getting around 720 megabits incoming, and only when it is reading from the block device. When reading from memory (i.e., cat a few big files on the server first so they are cached) it gets ~935 megabits. The machine can easily sustain that read (and write) speed; the problem is getting it to actually do it.
TCP imposes some overhead. Rsync also chats back and forth, which adds latency. Depending on settings, rsync may read the existing file on the receiving end and send only the blocks that differ.
The only way I was able to get full gigabit (935 megabits) was with tar and mbuffer, since mbuffer acts as a read-ahead buffer. Is there any way to turn the prefetch up? There really is no reason I should only be getting 720 megabits when copying files off with rsync (or NFS) like I am seeing.
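For reference, the tar-through-a-buffer pipeline described above can be sketched locally like this (over the wire you would put ssh or nc between two mbuffer stages; the paths and buffer sizes here are illustrative, and the script falls back to cat if mbuffer is not installed):

```shell
# Sketch of tar piped through a RAM buffer: mbuffer decouples the disk
# reader from the consumer, so reads are not stalled by the far side.
# Scratch paths; -m sets the buffer size, -s the block size.
BUF="cat"
command -v mbuffer >/dev/null 2>&1 && BUF="mbuffer -q -s 128k -m 64M"
mkdir -p /tmp/mb_demo/src /tmp/mb_demo/dst
echo "hello zfs" > /tmp/mb_demo/src/file.txt
tar -C /tmp/mb_demo/src -cf - . | $BUF | tar -C /tmp/mb_demo/dst -xf -
cat /tmp/mb_demo/dst/file.txt
```

The buffer helps precisely because it lets the sending side read ahead of the network, which is the behavior being asked about for ZFS prefetch below.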
While there have been some unfortunate bugs in the prefetch algorithm, a problem when sending many smaller files is that it takes a bit of time for prefetch to ramp up on each individual file. ZFS needs to learn that prefetch is valuable for the file; for obvious reasons, it does not assume maximum prefetch immediately after the file has been opened. For a while now I have argued that ZFS should be able to learn filesystem and process behavior from recent activity and dynamically tune prefetch based on that knowledge.
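A toy illustration of that ramp-up (this is not ZFS code, just a model of the general readahead pattern where the window starts small on open and grows on each sequential hit, so short files never reach the maximum window before they are fully read):

```shell
# Toy model of per-file prefetch ramp-up: window starts at 8K and
# doubles on each detected sequential read, capped at a maximum.
# Numbers are illustrative, not ZFS tunables.
max=8192
win=8
for read in 1 2 3 4 5 6; do
  echo "sequential read $read: prefetch window ${win}K"
  win=$(( win * 2 ))
  [ "$win" -gt "$max" ] && win=$max
done
```

A file that is fully read in the first few requests gets only the small early windows, which is why a stream of many small files benefits far less from prefetch than one large sequential file.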
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss