Hi Avati,

> Write performance in replicate is not only a throughput factor of disk and
> network, but also involves xattr performance. xattr performance is a
> function of the inode size in most of the disk filesystems. Can you give
> some more details about the backend filesystem, specifically the inode size
> with which it was formatted? If it was ext3 with the default 128byte inode,
> it is very likely you might be running out of in-inode xattr space (due to
> enabling marker-related features like geo-sync or quota?) and hitting data
> blocks. If so, please reformat with 512byte or 1KB inode size.
>
> Also, what about read performance in replicate?
>

Thanks for your insight on this issue. We are using ext3 for the gluster
partition, formatted with the CentOS 5 default inode size:

[root@vm-container-0-0 ~]# tune2fs -l /dev/sdb1 | grep Inode
Inode count:              244219904
Inodes per group:         32768
Inode blocks per group:   1024
Inode size:               128

I'll reformat sdb1 with a 512-byte inode size, recreate my gluster volumes
with distribute/replicate, and run my benchmark tests again.
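For anyone following along, the reformat would look something like the
below (the mount point is just an example from my setup; note this wipes
everything on the partition):

```shell
# Recreate the filesystem with 512-byte inodes so glusterfs xattrs
# can stay in-inode instead of spilling to data blocks.
# WARNING: this destroys all data on /dev/sdb1.
umount /dev/sdb1
mkfs.ext3 -I 512 /dev/sdb1          # -I sets the inode size in bytes
mount /dev/sdb1 /export/sdb1        # mount point is an example
tune2fs -l /dev/sdb1 | grep 'Inode size'
```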


   --joey
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
