Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-09 Thread Alex Crow
Your replica 2 result is pretty damn good IMHO; you would always expect
at most half the write speed of a local write to brick storage, since
the client writes every block to both bricks. Not sure why a 1-brick
volume doesn't approach your native speed, though - it could be that
FUSE overhead caps you at <1 GB/s in your setup.

AFAIK there is work being done (AFR2?) to offload the replication from
the client to the server. I could just be dreaming, though, so I'll
leave it to others to chip in.

Alex


On 09/08/16 18:24, Дмитрий Глушенок wrote:
> Hi,
>
> Same problem on 3.8.1. Even on the loopback interface (traffic does not
> leave the gluster node):
>
> Writing locally to a replica 2 volume (each brick is a separate local
> RAID6): 613 MB/sec
> Writing locally to a 1-brick volume: 877 MB/sec
> Writing locally to the brick itself (directly to XFS): 1400 MB/sec
>
> [snip]

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-09 Thread Дмитрий Глушенок
Hi,

Same problem on 3.8.1. Even on the loopback interface (traffic does not
leave the gluster node):

Writing locally to a replica 2 volume (each brick is a separate local
RAID6): 613 MB/sec
Writing locally to a 1-brick volume: 877 MB/sec
Writing locally to the brick itself (directly to XFS): 1400 MB/sec

Tests were performed using fio with the following settings:

bs=4096k
ioengine=libaio
iodepth=32
direct=0
runtime=600
directory=/R1
numjobs=1
rw=write
size=40g

Even with direct=1 the brick itself gives 1400 MB/sec.
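(Side note for anyone reproducing this: the profile output below is only
available once profiling has been started on the volume, i.e.:

# gluster volume profile test-data-03 start
# gluster volume profile test-data-03 info
)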

1-brick volume profiling below:

# gluster volume profile test-data-03 info
Brick: gluster-01:/R1/test-data-03
---
Cumulative Stats:
   Block Size: 131072b+  262144b+ 
 No. of Reads:0 0 
No. of Writes:   88907220 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls Fop
 -   ---   ---   ---   
  0.00   0.00 us   0.00 us   0.00 us  3 RELEASE
100.00 122.96 us  67.00 us   42493.00 us 208598   WRITE
 
Duration: 1605 seconds
   Data Read: 0 bytes
Data Written: 116537688064 bytes
 
Interval 0 Stats:
   Block Size: 131072b+  262144b+ 
 No. of Reads:0 0 
No. of Writes:   88907220 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls Fop
 -   ---   ---   ---   
  0.00   0.00 us   0.00 us   0.00 us  3 RELEASE
100.00 122.96 us  67.00 us   42493.00 us 208598   WRITE
 
Duration: 1605 seconds
   Data Read: 0 bytes
Data Written: 116537688064 bytes
 
#

As you can see, all writes are performed using a 128 KB block size, and it
looks like a bottleneck. This was discussed previously btw:
http://www.gluster.org/pipermail/gluster-devel/2013-March/038821.html

Using GFAPI to access the volume shows better speed, but still far from the
raw brick. fio tests with ioengine=gfapi give the following:

Writing locally to a replica 2 volume (each brick is a separate local
RAID6): 680 MB/sec
Writing locally to a 1-brick volume: 960 MB/sec
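(For reference, a minimal sketch of what such a gfapi job file could look
like - fio must be built with gfapi support, and the volume/server names
here are assumed from the profile output below:

[gfapi-write]
ioengine=gfapi
volume=tzk-data-03
brick=j-gluster-01.vcod.jet.su
bs=4096k
rw=write
size=40g
numjobs=1
)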


According to the 1-brick volume profile, 128 KB blocks are no longer used:

# gluster volume profile tzk-data-03 info
Brick: j-gluster-01.vcod.jet.su:/R1/tzk-data-03
---
Cumulative Stats:
   Block Size:4194304b+ 
 No. of Reads:0 
No. of Writes: 9211 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls Fop
 -   ---   ---   ---   
100.002237.67 us1880.00 us5785.00 us   8701   WRITE
 
Duration: 49 seconds
   Data Read: 0 bytes
Data Written: 38633734144 bytes
 
Interval 0 Stats:
   Block Size:4194304b+ 
 No. of Reads:0 
No. of Writes: 9211 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls Fop
 -   ---   ---   ---   
100.002237.67 us1880.00 us5785.00 us   8701   WRITE
 
Duration: 49 seconds
   Data Read: 0 bytes
Data Written: 38633734144 bytes
 
[root@j-gluster-01 ~]# 

So it may be worth trying NFS-Ganesha with the GFAPI plugin.
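(A minimal sketch of what the corresponding ganesha.conf export might look
like, assuming NFS-Ganesha with FSAL_GLUSTER; the export ID and paths are
illustrative:

EXPORT {
    Export_Id = 1;              # any unique export ID
    Path = "/tzk-data-03";      # illustrative export path
    Pseudo = "/tzk-data-03";    # NFSv4 pseudo-fs path
    Access_Type = RW;
    FSAL {
        Name = GLUSTER;         # serve via libgfapi instead of FUSE
        Hostname = "localhost"; # any gluster server in the pool
        Volume = "tzk-data-03";
    }
}
)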


> On Aug 3, 2016, at 9:40, Kaamesh Kamalaaharan wrote:
>
> [snip]

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-05 Thread Gandalf Corvotempesta
2016-08-05 9:38 GMT+02:00 Kaamesh Kamalaaharan :
> We have never done a rebuild before this, so I'm not really sure. Will this
> affect gluster in any way?

Not directly, but you'll have a very slow node for the whole duration of the
rebuild. A rebuild of that size could take many, many days, and in that
period you could also hit another disk failure.


Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-05 Thread Kaamesh Kamalaaharan
We have never done a rebuild before this, so I'm not really sure. Will this
affect gluster in any way?


On Fri, Aug 5, 2016 at 2:42 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> [snip]

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-05 Thread Gandalf Corvotempesta
On Aug 5, 2016 at 5:57 AM, Kaamesh Kamalaaharan wrote:
> 12 x 4.0TB 3.5" LFF NL-SAS 6G, 128MB, 7.2K rpm HDDs (as Data Store, set as
> RAID 6 to achieve 36.0TB usable storage)
>

12x4TB disks in a single RAID6?
What about rebuild time?
You are almost guaranteed to hit a URE during a rebuild at that size.
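(A back-of-the-envelope sketch of why, assuming a rebuild reads the 11
surviving disks in full: 11 x 4 TB = 4.4 x 10^13 bytes, about 3.5 x 10^14
bits. At the 1-error-per-10^15-bits URE rate typical of NL-SAS drives that
is an expected ~0.35 unrecoverable read errors per rebuild - roughly a 30%
chance of hitting at least one. At the 1-per-10^14 rate of consumer drives
the expectation is ~3.5, making at least one URE nearly certain.)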

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-04 Thread Kaamesh Kamalaaharan
Hi,
I was mistaken about my server specs.
My actual specs for each server are:

12 x 4.0TB 3.5" LFF NL-SAS 6G, 128MB, 7.2K rpm HDDs (as Data Store, set as
RAID 6 to achieve 36.0TB usable storage)

not the WD Red drives I mentioned earlier. I guess I should see a higher
transfer rate with these drives in. 400 MB/s is a bit too slow in my
opinion.

Any help I can get will be greatly appreciated, as I'm not sure where I
should start debugging this issue.


On Fri, Aug 5, 2016 at 2:44 AM, Leno Vo  wrote:

> [snip]

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-04 Thread Leno Vo
I got 1.2 GB/s on Seagate SSHD ST1000LX001 RAID 5 x3 (but with the dreaded
array cache on) and 1.1 GB/s on Samsung Pro SSD 1TB x3 RAID 5 (no array
caching, as it's not compatible on ProLiant - not an enterprise SSD).

On Thursday, August 4, 2016 5:23 AM, Kaamesh Kamalaaharan wrote:

> [snip]

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-04 Thread Jamie Lawrence
There is no way you'll see 6GB/s out of a single disk. I think you're
referring to the rated SATA/SAS interface speed, which has nothing to do
with the actual data rates you'll see from the spinning rust. You might see
~130-150 MB/s from a single platter in really nice, artificial workloads,
more in RAID configurations that can read from multiple disks.
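(For scale: the "6G" is 6 Gbit/s on the wire, and with 8b/10b encoding that
is at most ~600 MB/s per link - an interface ceiling, not what a 7.2K rpm
platter can sustain.)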

I have 6 WD Red 6TBs in a RAIDZ2 array (ZFS software RAID, nothing even
vaguely approaching high-end hardware otherwise) and for typical
file-serving workloads I see about 120-130 MB/s from it. In contrast, I
have a Samsung 950 Pro NVMe SSD, and do see over 1 GB/s throughput in some
real-world workloads with it. But it costs >8x the price per storage unit.

-j


> On Aug 4, 2016, at 2:23 AM, Kaamesh Kamalaaharan wrote:
>
> [snip]


Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-04 Thread Kaamesh Kamalaaharan
hi,
thanks for the reply. I have hardware RAID 5 storage servers with 4TB WD
Red drives. I think they are capable of 6GB/s transfers, so it shouldn't be
a drive speed issue. Just for testing, I tried to do a dd test directly
into the brick mounted from the storage server itself, and got around
800MB/s transfer rate, which is double what I get when the brick is mounted
on the client. Are there any other options or tests that I can perform to
figure out the root cause of my problem? I have exhausted most Google
searches and tests.

Kaamesh

On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo  wrote:

> your 10G NIC is capable; the problem is the disk speed. Fix your disk
> speed first: use SSD, SSHD, or 15K SAS, in a RAID 0 or RAID 5/6 of at
> least 4 drives.
>
>
> On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan <
> kaam...@novocraft.com> wrote:
>
>
> Hi,
> I have gluster 3.6.2 installed on my server network. Due to internal
> issues we are not allowed to upgrade the gluster version. All the clients
> are on the same version of gluster. When transferring files to/from the
> clients or between my nodes over the 10gb network, the transfer rate is
> capped at 450MB/s. Is there any way to increase the transfer speeds for
> gluster mounts?
>
> Our server setup is as follows:
>
> 2 gluster servers - gfs1 and gfs2
>  volume name : gfsvolume
> 3 clients - hpc1, hpc2, hpc3
> gluster volume mounted on /export/gfsmount/
>
>
> The following are the average results of what I did so far:
>
> 1) test bandwidth with iperf between all machines - 9.4 Gbit/s
> 2) test write speed with dd
>
> dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
>
> result=399MB/s
>
>
> 3) test read speed with dd
>
> dd if=/export/gfsmount/testfile of=/dev/zero bs=1G count=1
>
>
> result=284MB/s
>
>
> My gluster volume configuration:
>
>
> Volume Name: gfsvolume
>
> Type: Replicate
>
> Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
>
> Status: Started
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: gfs1:/export/sda/brick
>
> Brick2: gfs2:/export/sda/brick
>
> Options Reconfigured:
>
> performance.quick-read: off
>
> network.ping-timeout: 30
>
> network.frame-timeout: 90
>
> performance.cache-max-file-size: 2MB
>
> cluster.server-quorum-type: none
>
> nfs.addr-namelookup: off
>
> nfs.trusted-write: off
>
> performance.write-behind-window-size: 4MB
>
> cluster.data-self-heal-algorithm: diff
>
> performance.cache-refresh-timeout: 60
>
> performance.cache-size: 1GB
>
> cluster.quorum-type: fixed
>
> auth.allow: 172.*
>
> cluster.quorum-count: 1
>
> diagnostics.latency-measurement: on
>
> diagnostics.count-fop-hits: on
>
> cluster.server-quorum-ratio: 50%
>
>
> Any help would be appreciated.
>
> Thanks,
>
> Kaamesh

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-03 Thread Leno Vo
Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed
first: use SSD, SSHD, or 15K SAS, in a RAID 0 or RAID 5/6 of at least 4
drives.
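(A quick way to sanity-check raw disk speed without the page cache skewing
the number, using standard GNU dd flags - the brick path here is
illustrative:

dd if=/dev/zero of=/export/sda/brick/testfile bs=1M count=4096 oflag=direct
)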

On Wednesday, August 3, 2016 2:40 AM, Kaamesh Kamalaaharan wrote:

> [snip]