Hi All,

I have shared the Gluster volume profile output below for your reference. I am
facing a performance issue with my Gluster setup while copying multiple
files/folders from the client to the mounted Gluster volume.

Any suggestions to improve the copy speed to the Gluster volume would be much
appreciated.

Thanks
Srikanth

On Tue, Dec 15, 2015 at 10:47 PM, Srikanth Mampilakal <
shrikanth1...@gmail.com> wrote:

> Hi Anuradha,
>
> Please find the Gluster volume profile details below.
>
> time cp -RPp drupal\ code/ /mnt/testmount/copytogluster
>
> *Profile info of the volume when you copy dirs/files into glusterfs.*
>
>
> *Time taken to copy (70 MB of files/folders)*
>
> [root@GFSCLIENT01 temp]# time cp -RPp /mnt/testmount/ /mnt/testmount/copytogluster
>
> real    29m40.985s
> user    0m0.172s
> sys     0m1.688s
>
> [root@GFSNODE01 ~]# gluster volume profile gv1 info
> Brick: GFSNODE01:/mnt/perfDisk/gv1
>
> --------------------------------------
> Cumulative Stats:
>    Block Size:                 16b+                  32b+                  64b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   19                    11                    75
>
>    Block Size:                128b+                 256b+                 512b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   77                   221                   297
>
>    Block Size:               1024b+                2048b+                4096b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                  344                   305                   336
>
>    Block Size:               8192b+               16384b+               32768b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                  160                   200                    87
>
>    Block Size:              65536b+              131072b+
>  No. of Reads:                    0                     0
> No. of Writes:                   59                    38
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us           2198     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us             18  RELEASEDIR
>       0.00      39.75 us      22.00 us      59.00 us              4     READDIR
>       0.01      63.12 us       3.00 us     143.00 us              8     OPENDIR
>       0.01     108.83 us      27.00 us     194.00 us              6    GETXATTR
>       0.11      58.07 us      28.00 us     124.00 us            170        STAT
>       0.54     113.57 us      46.00 us     258.00 us            440    SETXATTR
>       0.79      97.28 us      23.00 us     224.00 us            745      STATFS
>       1.37      57.40 us      12.00 us     428.00 us           2198       FLUSH
>       3.70      77.12 us      15.00 us     322.00 us           4420    FINODELK
>       3.94      68.70 us      14.00 us     259.00 us           5278     ENTRYLK
>       4.98     205.68 us      70.00 us    2874.00 us           2229       WRITE
>       5.15    1077.38 us     202.00 us  112584.00 us            440       MKDIR
>       5.27     110.26 us      33.00 us    5589.00 us           4397 REMOVEXATTR
>       7.88     118.30 us      28.00 us   11471.00 us           6130     SETATTR
>       9.23     190.97 us      33.00 us  107884.00 us           4450    FXATTROP
>      16.06     672.52 us     112.00 us  177035.00 us           2199      CREATE
>      20.24      80.67 us      11.00 us     454.00 us          23102     INODELK
>      20.74     160.46 us      24.00 us   33476.00 us          11901      LOOKUP
>
>     Duration: 3007 seconds
>    Data Read: 0 bytes
> Data Written: 24173066 bytes
>
>
> -----------------------------------------------------------------------------------------------
>
> *Profile info of the volume when you copy dirs/files within glusterfs.*
>
> *Time taken to copy (70 MB of files/folders)*
>
> [root@GFSCLIENT01 testmount]# time cp -RPp copytogluster/data/ /mnt/testmount/copywithinglustervol/
>
> real    37m50.407s
> user    0m0.248s
> sys     0m1.979s
>
> [root@GFSNODE01 ~]# gluster volume profile gv1 info
> Brick: GFSNODE01:/mnt/perfDisk/gv1
>
>
> Interval 8 Stats:
>    Block Size:                 64b+                 128b+                 256b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   11                     5                    11
>
>    Block Size:                512b+                1024b+                2048b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   14                    18                    13
>
>    Block Size:               4096b+                8192b+               16384b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   11                     6                     3
>
>    Block Size:              32768b+               65536b+
>  No. of Reads:                    0                     1
> No. of Writes:                    1                     1
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us             94     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us              4  RELEASEDIR
>       0.03      37.00 us      37.00 us      37.00 us              1       FSTAT
>       0.04      22.50 us      12.00 us      33.00 us              2     READDIR
>       0.05      20.67 us      17.00 us      28.00 us              3        STAT
>       0.06      77.00 us      77.00 us      77.00 us              1        READ
>       0.07      41.50 us      14.00 us      69.00 us              2    GETXATTR
>       0.10      30.25 us       2.00 us      53.00 us              4     OPENDIR
>       0.13      51.67 us      42.00 us      60.00 us              3    SETXATTR
>       0.35     139.67 us     127.00 us     152.00 us              3       MKDIR
>       0.59      25.46 us      16.00 us      40.00 us             28      STATFS
>       1.48      18.90 us      11.00 us      38.00 us             94       FLUSH
>       3.87      24.00 us      13.00 us      43.00 us            194     ENTRYLK
>       4.00      25.57 us      16.00 us      55.00 us            188    FINODELK
>       5.65      72.37 us      57.00 us     177.00 us             94       WRITE
>       7.38      47.22 us      38.00 us      60.00 us            188 REMOVEXATTR
>       8.28      50.55 us      27.00 us     108.00 us            197     SETATTR
>       8.50      54.41 us      37.00 us     112.00 us            188    FXATTROP
>      13.04     166.86 us      78.00 us    1050.00 us             94      CREATE
>      17.43      26.27 us      11.00 us      85.00 us            798     INODELK
>      28.95      68.04 us      14.00 us     233.00 us            512      LOOKUP
>
>     Duration: 29 seconds
>    Data Read: 78602 bytes
> Data Written: 365315 bytes
>
> Interval 28 Stats:
>    Block Size:                  4b+                  32b+                  64b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                    1                     5                    28
>
>    Block Size:                128b+                 256b+                 512b+
>  No. of Reads:                    0                     0                     0
> No. of Writes:                   59                   164                   305
>
>    Block Size:               1024b+                2048b+                4096b+
>  No. of Reads:                    0                     0                     1
> No. of Writes:                  232                   171                   165
>
>    Block Size:               8192b+               16384b+               32768b+
>  No. of Reads:                    0                     0                     2
> No. of Writes:                  117                    96                    39
>
>    Block Size:              65536b+              131072b+
>  No. of Reads:                   11                     8
> No. of Writes:                   24                    22
>
>  %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls         Fop
>  ---------   -----------   -----------   -----------   ------------        ----
>       0.00       0.00 us       0.00 us       0.00 us           1406     RELEASE
>       0.00       0.00 us       0.00 us       0.00 us            265  RELEASEDIR
>       0.00      22.50 us      13.00 us      32.00 us              2     READDIR
>       0.00      30.50 us      19.00 us      42.00 us              2    GETXATTR
>       0.02      35.12 us      28.00 us      45.00 us             16       FSTAT
>       0.06      75.86 us      27.00 us     232.00 us             22        READ
>       0.08      21.99 us      15.00 us      37.00 us            102        STAT
>       0.31      32.43 us       2.00 us      55.00 us            265     OPENDIR
>       0.43      44.88 us      33.00 us      69.00 us            264    SETXATTR
>       0.54      24.82 us      15.00 us      91.00 us            598      STATFS
>       0.84      16.54 us      10.00 us      74.00 us           1406       FLUSH
>       1.89     199.11 us      99.00 us    6678.00 us            263       MKDIR
>       2.39      23.27 us      14.00 us      91.00 us           2840    FINODELK
>       2.58      21.39 us      11.00 us      62.00 us           3339     ENTRYLK
>       2.83     288.06 us      25.00 us    2315.00 us            272    READDIRP
>       3.93      76.22 us      54.00 us     355.00 us           1428       WRITE
>       6.05      59.59 us      36.00 us   42015.00 us           2812 REMOVEXATTR
>       6.34      61.60 us      24.00 us   28264.00 us           2850    FXATTROP
>       6.40      45.78 us      25.00 us     146.00 us           3867     SETATTR
>      10.96     215.75 us      79.00 us   14633.00 us           1406      CREATE
>      12.49      23.78 us      10.00 us     100.00 us          14545     INODELK
>      41.85      71.82 us      14.00 us   55712.00 us          16131      LOOKUP
>
>     Duration: 598 seconds
>    Data Read: 2150643 bytes
> Data Written: 12210039 bytes
>
> Do let me know if you need any other details.
>
> Thanks
> Srikanth
>
>
> On Fri, Dec 11, 2015 at 4:15 PM, Anuradha Talur <ata...@redhat.com> wrote:
>
>> Response inline.
>>
>> ----- Original Message -----
>> > From: "Srikanth Mampilakal" <shrikanth1...@gmail.com>
>> > To: gluster-users@gluster.org
>> > Sent: Thursday, December 10, 2015 7:59:04 PM
>> > Subject: Re: [Gluster-users] Gluster - Performance issue while copying
>> bulk   files/folders
>> >
>> >
>> >
>> > Hi members,
>> >
>> > I'd really appreciate it if you could share your thoughts or any feedback
>> > on resolving the slow copy issue.
>> >
>> > Regards
>> > Srikanth
>> > On 10-Dec-2015 2:12 AM, "Srikanth Mampilakal" <
>> srikanth.mampila...@gmail.com
>> > > wrote:
>> >
>> >
>> >
>> > Hi,
>> >
>> >
>> > I have a production Gluster file service used as shared storage; the
>> > content management system uses it as its document root. I have run into a
>> > performance issue with the Gluster/FUSE client.
>> >
>> > Looking for your thoughts and experience in resolving Gluster
>> performance
>> > issues:
>> >
>> > Gluster Infrastructure
>> >
>> > Gluster version: GlusterFS 3.7.6
>> >
>> > 2 Gluster nodes, each with the configuration below:
>> >
>> > Red Hat EL 7.0 (64-bit)
>> > Memory: 4 GB
>> > Processor: 2 x 2.0 GHz
>> > Network: 100 Mbps
>> > File storage volume: NetApp storage LUN with 2.0 IOPS/GB
>> >
>> > Gluster Volume information:
>> >
>> > [root@GlusterFileServe1 ~]# gluster volume info
>> >
>> > Volume Name: prodcmsroot
>> > Type: Replicate
>> > Volume ID: f1284bf0-1939-46f9-a672-a7716e362947
>> > Status: Started
>> > Number of Bricks: 1 x 2 = 2
>> > Transport-type: tcp
>> > Bricks:
>> > Brick1: Server1:/glusterfs/brick1/prodcmsroot
>> > Brick2: Server2:/glusterfs/brick1/prodcmsroot
>> > Options Reconfigured:
>> > performance.io-thread-count: 64
>> > performance.cache-size: 1073741824
>> > performance.readdir-ahead: on
>> > performance.write-behind-window-size: 524288
>> >
>> > [root@GlusterFileServe1 ~]#
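
The "Options Reconfigured" values shown above are normally applied with `gluster volume set`. A minimal sketch of how they would be set, assuming the volume name `prodcmsroot` from the output above and simply re-using the values it reports:

```shell
# Sketch only: applying the reconfigured options listed in "Options Reconfigured".
# Run on any node in the trusted pool; values mirror the volume info output above.
gluster volume set prodcmsroot performance.io-thread-count 64
gluster volume set prodcmsroot performance.cache-size 1073741824
gluster volume set prodcmsroot performance.readdir-ahead on
gluster volume set prodcmsroot performance.write-behind-window-size 524288
```

Running `gluster volume info` afterwards, as above, confirms the options took effect.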
>> >
>> > Replication between the Gluster nodes is quick and consistent.
>> >
>> > The Apache web servers access the Gluster volume using the native Gluster
>> > FUSE client and are located in the same VLAN as the Gluster servers.
>> >
>> > GlusterFileServe1:/prodcmsroot /mnt/glusterfs glusterfs
>> > direct-io-mode=disable,defaults,_netdev 0 0
>> >
>> > The server utilization (memory, CPU, network and disk I/O) is relatively
>> > low.
>> >
>> > I am experiencing very slow performance while copying multiple files/folders
>> > (approx. 75 MB): it takes at least 35 minutes. Even copying a folder (with
>> > multiple files/subfolders) within the Gluster volume takes about the same
>> > time.
>> >
>> > However, if I use dd to check the write speed, I get the result below.
>> >
>> > [root@ClientServer ~]# time sh -c "dd if=/dev/zero
>> of=/mnt/testmount/test.tmp
>> > bs=4k count=20000 && sync"
>> > 20000+0 records in
>> > 20000+0 records out
>> > 81920000 bytes (82 MB) copied, 17.1357 s, 4.8 MB/s
>> >
>> > real 0m17.337s
>> > user 0m0.031s
>> > sys 0m0.317s
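
The dd run above measures streaming writes to a single large file, whereas the slow copies involve many small files, where per-file metadata operations (LOOKUP, CREATE, INODELK in the profiles above) dominate. A small-file variant of the same test can be sketched as follows; the target path is an example (point it at the Gluster mount, e.g. `/mnt/testmount/smalltest`, to exercise the volume):

```shell
#!/bin/sh
# Small-file write test: creates many 4 KB files instead of one 80 MB stream.
# TARGET is an example path; pass the Gluster mount as $1 to test the volume.
TARGET="${1:-/tmp/smallfile-test}"
COUNT=1000
mkdir -p "$TARGET"
i=0
while [ "$i" -lt "$COUNT" ]; do
    # on Gluster, each file costs LOOKUP + CREATE + WRITE + FLUSH round trips
    dd if=/dev/zero of="$TARGET/f$i" bs=4k count=1 2>/dev/null
    i=$((i + 1))
done
echo "created $(ls "$TARGET" | wc -l) files in $TARGET"
```

Timing this loop (e.g. `time sh smallfile-test.sh /mnt/testmount/smalltest`) should expose the per-file latency that a single large dd hides.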
>> >
>> >
>> > If anyone has experienced the same kind of performance issue, please let me
>> > know your thoughts.
>> >
>> Hi Srikanth,
>>
>> Could you please provide the following information so that the reason behind
>> the slow copy can be deduced?
>>
>> 1) Profile info of the volume when you copy dirs/files into glusterfs.
>> 2) Profile info of the volume when you copy dirs/files within glusterfs.
>>
>> The following steps should help you with profile info:
>> 1) gluster volume profile <VOLNAME> start
>> 2) Perform copy operations
>> 3) gluster volume profile <VOLNAME> info (you will get stats of the FOPs
>> at this point)
>> 4) gluster volume profile <VOLNAME> stop
>>
>> Please follow steps 1 through 4 twice: once for the copy into glusterfs and
>> once for the copy within it.
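
Put together, the four steps above can be sketched as a single session; the volume name `gv1` and the copy path are taken from the profile output quoted earlier in this thread, so substitute your own:

```shell
# Sketch of the profiling workflow described in steps 1-4 above.
gluster volume profile gv1 start                                  # 1) start collecting per-FOP stats
time cp -RPp "drupal code/" /mnt/testmount/copytogluster          # 2) the copy operation to measure
gluster volume profile gv1 info > /tmp/profile-into-gluster.txt   # 3) capture the stats at this point
gluster volume profile gv1 stop                                   # 4) stop collecting
```

Repeating the session with a copy between two directories on the mount gives the "within glusterfs" profile.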
>>
>> > Cheers
>> > Srikanth
>> >
>> > _______________________________________________
>> > Gluster-users mailing list
>> > Gluster-users@gluster.org
>> > http://www.gluster.org/mailman/listinfo/gluster-users
>> >
>>
>> --
>> Thanks,
>> Anuradha.
>>
>
>
>
> --
> Cheers
> Shrikanth
>



