Sorry for the delay in response.

On 01/15/2016 02:34 PM, li.ping...@zte.com.cn wrote:
Setting GLUSTERFS_WRITE_IS_APPEND in the afr_writev function on the glusterfs client side causes posix_writev on the server side to handle IO write fops serially instead of in parallel.

That is, multiple io-worker threads carrying out IO write fops are blocked in posix_writev, so the final pwrite/pwritev calls made in __posix_writev are executed ONE AFTER ANOTHER.

For example:

thread1: iot_worker -> ...  -> posix_writev()   |
thread2: iot_worker -> ...  -> posix_writev()   |
thread3: iot_worker -> ...  -> posix_writev()   -> __posix_writev()
thread4: iot_worker -> ...  -> posix_writev()   |

There are 4 iot_worker threads doing 128KB IO write fops as above, but only one of them can execute __posix_writev at a time; the others have to wait.
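
To make the behaviour concrete, here is a minimal sketch of the pattern I mean. It is my own simplification in plain C, not the real posix_writev/__posix_writev code, and the names write_is_append, file_lock, do_writev and worker_writev are only illustrative:

/*
 * Minimal sketch only -- NOT the real posix_writev()/__posix_writev()
 * code.  write_is_append, file_lock, do_writev and worker_writev are
 * illustrative names.  When the append flag is set, every worker takes
 * the same per-file lock around the final pwritev(), so the writes run
 * one after another even though many iot_workers arrived in parallel.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sys/uio.h>
#include <unistd.h>

static pthread_mutex_t file_lock = PTHREAD_MUTEX_INITIALIZER;

/* stands in for __posix_writev(): the actual pwritev() happens here */
static ssize_t
do_writev (int fd, const struct iovec *vector, int count, off_t offset)
{
        return pwritev (fd, vector, count, offset);
}

/* stands in for posix_writev() as entered by each iot_worker thread */
ssize_t
worker_writev (int fd, const struct iovec *vector, int count, off_t offset,
               int write_is_append)
{
        ssize_t ret;

        if (write_is_append)
                pthread_mutex_lock (&file_lock);   /* serializes the workers */

        ret = do_writev (fd, vector, count, offset);

        if (write_is_append)
                pthread_mutex_unlock (&file_lock);

        return ret;
}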

However, if the AFR volume is configured with storage.linux-aio (which is off by default), the iot_worker threads use posix_aio_writev instead of posix_writev to write data. posix_aio_writev is not affected by GLUSTERFS_WRITE_IS_APPEND, and the AFR volume's write performance goes up.
I think this is a bug :-(.
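
For comparison, this is the generic Linux AIO (libaio) submission pattern that storage.linux-aio builds on. Again this is my own simplification, not the real posix_aio_writev code, and init_aio, submit_async_write and aio_ctx are only illustrative names; the point is that io_submit queues the write and returns immediately, so nothing is held across the actual write:

/*
 * Illustration only -- NOT the real posix_aio_writev() code.
 * The worker hands the write to the kernel with io_submit() and
 * returns at once; completion is reaped later with io_getevents(),
 * so no per-file lock is held for the duration of the write the way
 * the sketch above holds one around pwritev().
 */
#include <libaio.h>
#include <stdlib.h>

static io_context_t aio_ctx;

int
init_aio (void)
{
        /* allow up to 64 in-flight requests (arbitrary for this sketch) */
        return io_setup (64, &aio_ctx);
}

int
submit_async_write (int fd, void *buf, size_t size, long long offset)
{
        /* the iocb (and buf) must stay valid until the completion is
         * reaped with io_getevents() by whoever owns aio_ctx */
        struct iocb *cb = calloc (1, sizeof (*cb));
        struct iocb *cbs[1];

        if (!cb)
                return -1;

        io_prep_pwrite (cb, fd, buf, size, offset);
        cbs[0] = cb;

        /* queue the write and return at once; does not wait for the I/O */
        return io_submit (aio_ctx, 1, cbs);
}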

So, my questions are: can an AFR volume work correctly with the storage.linux-aio configuration, which bypasses the GLUSTERFS_WRITE_IS_APPEND setting made in afr_writev? And why does glusterfs keep posix_aio_writev different from posix_writev?

Any replies that clear up my confusion would be greatly appreciated; thanks in advance.
What is your workload? Multiple writers on the same file?

Pranith


_______________________________________________
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel
