On 07/27/2011 12:53 AM, Pavan T C wrote:


> 2. What is the disk bandwidth you are getting on the local filesystem
> on a given storage node? I mean, pick any of the 10 storage servers
> dedicated for Gluster storage and perform a dd as below:
Seeing an average of 740 MB/s write, 971 MB/s read.

> I presume you did this in one of the /data-brick*/export directories?
> Command output with the command line would have been clearer, but
> that's fine.
That is correct -- we used /data-brick1/export.
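
The dd command itself was trimmed from the quote above; for the record,
the sort of direct-I/O test we ran looks roughly like this (file name,
block size, and count are illustrative):

    # Write test: 10 GB of zeroes, bypassing the page cache
    dd if=/dev/zero of=/data-brick1/export/ddtest bs=1M count=10240 oflag=direct

    # Read test: the same file back out, again with direct I/O
    dd if=/data-brick1/export/ddtest of=/dev/null bs=1M iflag=direct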


> 3. What is the IB bandwidth that you are getting between the compute
> node and the glusterfs storage node? You can run the tool "rdma_bw"
> to get the details:
30407: Bandwidth peak (#0 to #976): 2594.58 MB/sec
30407: Bandwidth average: 2593.62 MB/sec
30407: Service Demand peak (#0 to #976): 978 cycles/KB
30407: Service Demand Avg : 978 cycles/KB
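
In case it helps anyone reproduce this: rdma_bw runs as a server/client
pair, so assuming the stock perftest tool, the invocation was along
these lines (hostnames illustrative):

    storage-node$ rdma_bw
    compute-node$ rdma_bw storage-node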


> This looks like a DDR connection. "ibv_devinfo -v" will tell a better
> story about the line width and speed of your InfiniBand connection.
> QDR should have a much higher bandwidth. But that still does not
> explain why you should get as low as 50 MB/s for a single-stream,
> single-client write when the backend can support direct-I/O
> throughput of more than 700 MB/s.
ibv_devinfo shows an active width of 4X and an active speed of 10 Gbps.
Not sure why we're not seeing better bandwidth with rdma_bw -- we'll
have to troubleshoot that some more -- but I agree, it shouldn't be the
limiting factor for the Gluster client speed problems we're seeing.
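
For what it's worth, if I have the arithmetic right: 4X at 10 Gbps per
lane is 40 Gbps of QDR signaling, or about 32 Gbps (~4 GB/s) of payload
after 8b/10b encoding, so the ~2.6 GB/s rdma_bw reports is well under
what the link should deliver. The fields to check (output trimmed to
the relevant lines):

    $ ibv_devinfo -v | grep -E 'active_(width|speed)'
            active_width:   4X
            active_speed:   10.0 Gbps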

I'll send you the log files you requested off-list.

John

--

________________________________________________________

John Lalande
University of Wisconsin-Madison
Space Science & Engineering Center
1225 W. Dayton Street, Room 439, Madison, WI 53706
608-263-2268 / john.lala...@ssec.wisc.edu



