Now that DanG has a workable vector I/O path for read and write, I'm trying again to make reading zero-copy. Man-oh-man, do we have our work cut out for us....
It seems that currently we provide a buffer to read into. Then XDR makes a new object, puts headers into it, allocates another data_val and copies the data into that, and the result is eventually passed to ntirpc, where yet another buffer is created and everything is copied into it. If GSS is in use, one more copy is made (that one cannot be avoided). So we're copying large amounts of data 4-5 times, not counting whatever the FSAL library call does internally. (A rough sketch of the scatter/gather shape I'm aiming for is at the end of this mail.)

Then there is NFS_LOOKAHEAD_READ and nfs_request_lookahead. Could somebody explain what that is doing? AFAICT, the only test is in dup_req, and it won't keep the dup_req "because large, though idempotent". Isn't a large read exactly where we'd benefit from holding onto a dup_req? (I've sketched the check as I read it below.) NFS_LOOKAHEAD_HIGH_LATENCY is never used. And there are a lot of XDR tests for x_public holding the pointer to nfs_request_lookahead, yet setting that pointer is one of the first things done in nfs_rpc_process_request() in nfs_worker_thread.c.

Finally, and this is what I'll do this weekend: my attempt to edit xdr_nfs23.c won't pass the checkpatch commit hook, because all the function headers are still pre-1989, pre-ANSI K&R style (a before/after example is below). Unfortunately, Red Hat Linux doesn't seem to package cproto, even though it has the usual man page at https://linux.die.net/man/1/cproto.
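To make the zero-copy goal concrete, here is a minimal, self-contained sketch in plain POSIX -- deliberately not the ntirpc API, whose internals I haven't mapped out yet -- of the shape I'm after: read the file data once into a buffer, then hand the reply header and that same buffer to the socket as a scatter/gather list, so the payload is never staged through an intermediate copy. The 64-byte header is just a stand-in for the real RPC/XDR encoding.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

ssize_t reply_read_gather(int sock, int file_fd, off_t off, size_t count)
{
	uint8_t hdr[64];		/* stand-in for the encoded reply header */
	uint8_t *data = malloc(count);
	ssize_t got, sent;

	if (data == NULL)
		return -1;

	got = pread(file_fd, data, count, off);	/* one read into the data buffer */
	if (got < 0) {
		free(data);
		return -1;
	}

	memset(hdr, 0, sizeof(hdr));		/* pretend this is the XDR header */

	struct iovec iov[2] = {
		{ .iov_base = hdr,  .iov_len = sizeof(hdr) },
		{ .iov_base = data, .iov_len = (size_t)got },	/* data goes out with no further copy */
	};

	sent = writev(sock, iov, 2);		/* single gather write to the socket */
	free(data);
	return sent;
}

In Ganesha terms the open question is whether the FSAL's buffer can survive, untouched, all the way into whatever ntirpc finally hands to the kernel, which is exactly what the 4-5 copies above prevent today.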
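For the dup_req question, here is a hypothetical paraphrase of the test as I read it -- made-up flag value and struct, NOT the actual nfs_dupreq.c code -- just to pin down what I'm asking about: the reply is dropped from the cache when the lookahead flags mark it as a (large) READ, even though READ is idempotent.

#include <stdbool.h>
#include <stdint.h>

#define LKHD_READ_SKETCH 0x0008		/* illustrative value, not the real flag */

struct lookahead_sketch {
	uint32_t flags;
};

/* Returns true if the duplicate-request cache should retain the reply. */
static bool drc_should_retain(const struct lookahead_sketch *lkhd)
{
	/* "because large, though idempotent": a big READ reply gets dropped
	 * to save cache memory -- yet a resent large READ is exactly the
	 * request that would be most expensive to redo. */
	if (lkhd->flags & LKHD_READ_SKETCH)
		return false;
	return true;
}

If that reading is wrong, I'd appreciate a pointer to what the lookahead is actually buying us.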
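And for the K&R headers, this is the kind of before/after conversion I mean. The names and types here are illustrative stand-ins (assuming the traditional <rpc/xdr.h>/libtirpc headers), not copied from xdr_nfs23.c:

#include <rpc/types.h>
#include <rpc/xdr.h>

/* Illustrative stand-ins; the real file deals with READ3args & friends. */
typedef struct { u_int count; } example_args;

static bool_t xdr_example_body(XDR *xdrs, example_args *objp)
{
	return xdr_u_int(xdrs, &objp->count);
}

/* Old, pre-1989 K&R definition style -- what checkpatch rejects: */
bool_t
xdr_example_args_old(xdrs, objp)
	XDR *xdrs;
	example_args *objp;
{
	return xdr_example_body(xdrs, objp);
}

/* The same function with an ANSI prototype-style header, which is what
 * every definition in xdr_nfs23.c needs to become: */
bool_t
xdr_example_args(XDR *xdrs, example_args *objp)
{
	return xdr_example_body(xdrs, objp);
}

If someone does have cproto handy, its man page says cproto -a converts old-style definitions to ANSI style, which would save a lot of hand editing; otherwise I'll grind through the file by hand this weekend.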