There is no way you’ll see 6GB/s out of a single disk. I think you’re referring 
to the rated SATA link speed, which is 6Gb/s (gigabits, not gigabytes) and in 
any case has nothing to do with the actual data rates you’ll see from the 
spinning rust. You might see ~130-150MB/s from a single drive in really nice, 
artificial workloads, and more in RAID configurations that can read from 
multiple disks in parallel.
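
If you want to know what a drive actually delivers, measure it rather than 
quoting the interface speed. A minimal sketch with GNU dd on Linux, reading 
straight off the raw device so the page cache doesn’t inflate the number 
(/dev/sdX is a placeholder for the drive under test):

# sequential read from the raw device, bypassing the page cache
# (read-only, but double-check the device name first)
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct

For a single spinning disk, expect that to land right around the 130-150MB/s 
mark.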

I have six WD Red 6TB drives in a RAIDZ2 array (ZFS software RAID, nothing even 
vaguely approaching high-end hardware otherwise), and for typical file-serving 
workloads I see about 120-130MB/s from it. In contrast, I have a Samsung 950 
Pro NVMe SSD, and I do see over 1GB/s throughput in some real-world workloads 
with it. But it costs more than 8x as much per unit of storage.
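
For what it’s worth, watching what an array like this sustains under load is 
easy with ZFS itself ("mypool" is a placeholder for the pool name reported by 
zpool list):

# per-vdev read/write bandwidth, sampled every 5 seconds
zpool iostat -v mypool 5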

-j


> On Aug 4, 2016, at 2:23 AM, Kaamesh Kamalaaharan <kaam...@novocraft.com> 
> wrote:
> 
> Hi, 
> Thanks for the reply. I have hardware RAID 5 storage servers with 4TB WD Red 
> drives. I think they are capable of 6GB/s transfers, so it shouldn't be a 
> drive speed issue. Just for testing, I did a dd test directly into the brick 
> from the storage server itself and got around 800MB/s, which is double what I 
> get when the brick is mounted on the client. Are there any other options or 
> tests I can perform to figure out the root cause of my problem? I have 
> exhausted most Google searches and tests. 
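> 
> For reference, the two tests were roughly the following (brick and mount 
> paths as in my volume info below):
> 
> # directly into the brick on the storage server
> dd if=/dev/zero of=/export/sda/brick/testfile bs=1G count=1
> # through the gluster mount on a client
> dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1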
> 
> Kaamesh
> 
> On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo <lenovolastn...@yahoo.com> wrote:
> Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed 
> first: use SSD, SSHD, or 15k SAS drives, in RAID 0 or RAID 5/6 with at least 
> four disks.
> 
> 
> On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan 
> <kaam...@novocraft.com> wrote:
> 
> 
> Hi, 
> I have gluster 3.6.2 installed on my server network. Due to internal issues 
> we are not allowed to upgrade the gluster version. All the clients are on the 
> same version of gluster. When transferring files to/from the clients or 
> between my nodes over the 10Gb network, the transfer rate is capped at 
> 450MB/s. Is there any way to increase the transfer speeds for gluster mounts? 
> 
> Our server setup is as follows:
> 
> 2 gluster servers: gfs1 and gfs2
> volume name: gfsvolume
> 3 clients: hpc1, hpc2, hpc3
> gluster volume mounted on /export/gfsmount/
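> 
> The volume is mounted on each client with the native FUSE client, e.g.:
> 
> mount -t glusterfs gfs1:/gfsvolume /export/gfsmount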
> 
> 
> 
> The following are the average results of the tests I have run so far:
> 
> 1) test bandwidth with iperf between all machines - 9.4Gbit/s
> 2) test write speed with dd:
> dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
> 
> result=399MB/s
> 
> 3) test read speed with dd:
> dd if=/export/gfsmount/testfile of=/dev/null bs=1G count=1
> 
> result=284MB/s
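> 
> Note: these dd runs used no sync or direct-I/O flags, so the page cache may 
> flatter the numbers. A stricter variant (GNU dd) flushes before reporting and 
> clears the cache before the read:
> 
> dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1 conv=fdatasync
> echo 3 > /proc/sys/vm/drop_caches    # needs root; clears the client page cache
> dd if=/export/gfsmount/testfile of=/dev/null bs=1G count=1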
> 
> My gluster volume configuration:
>  
> Volume Name: gfsvolume
> Type: Replicate
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
> Status: Started
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: gfs1:/export/sda/brick
> Brick2: gfs2:/export/sda/brick
> Options Reconfigured:
> performance.quick-read: off
> network.ping-timeout: 30
> network.frame-timeout: 90
> performance.cache-max-file-size: 2MB
> cluster.server-quorum-type: none
> nfs.addr-namelookup: off
> nfs.trusted-write: off
> performance.write-behind-window-size: 4MB
> cluster.data-self-heal-algorithm: diff
> performance.cache-refresh-timeout: 60
> performance.cache-size: 1GB
> cluster.quorum-type: fixed
> auth.allow: 172.*
> cluster.quorum-count: 1
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> cluster.server-quorum-ratio: 50%
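> 
> Since diagnostics.latency-measurement and diagnostics.count-fop-hits are 
> already on, I can also pull per-brick profiling output if that helps:
> 
> gluster volume profile gfsvolume start
> gluster volume profile gfsvolume info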
> 
> Any help would be appreciated. 
> Thanks,
> Kaamesh
> 
> 

_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
