Re: [Gluster-users] High load on glusterfs!!

2017-08-29 Thread ABHISHEK PALIWAL
Could anyone suggest something here?

On Aug 17, 2017 12:03 PM, "ABHISHEK PALIWAL" wrote:

> Hi Team,
>
> I have a query regarding the use of ACLs on a gluster volume. I have
> noticed that with a normal gluster volume (without ACLs) the CPU load is
> low, but when we apply ACLs to the volume, which internally uses FUSE
> ACLs, the CPU load increases by about 6x.
>
> Could you please let me know whether this is expected, or whether some
> other configuration can reduce this overhead on gluster volumes with ACLs.
>
> For clarification: we are using kernel NFS to export the gluster
> volume.
>
> Please let me know if you require more information.
>
> --
> Regards
> Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] High load on glusterfs!!

2017-08-16 Thread ABHISHEK PALIWAL
Hi Team,

I have a query regarding the use of ACLs on a gluster volume. I have
noticed that with a normal gluster volume (without ACLs) the CPU load is
low, but when we apply ACLs to the volume, which internally uses FUSE
ACLs, the CPU load increases by about 6x.

Could you please let me know whether this is expected, or whether some
other configuration can reduce this overhead on gluster volumes with ACLs.

For clarification: we are using kernel NFS to export the gluster
volume.
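
To make the comparison reproducible, here is a minimal sketch of the kind
of setup being described (the hostname, volume name, mount point and fsid
are illustrative assumptions, not our actual values):

# FUSE-mount the gluster volume with POSIX ACL support enabled
mount -t glusterfs -o acl server1:/testvol /mnt/testvol

# /etc/exports entry re-exporting the FUSE mount through kernel NFS;
# FUSE filesystems need an explicit fsid to be exportable
/mnt/testvol  *(rw,no_root_squash,fsid=101)

exportfs -ra   # reload the export table

The load difference shows up once clients do I/O through the NFS export
with the -o acl mount option present versus absent.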

Please let me know if you require more information.

-- 
Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] High load on glusterfs client

2016-03-14 Thread Krutika Dhananjay
Looks like a case of gfid split-brain.

What does the output of `gluster volume heal <VOLNAME> info split-brain` say?
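
If it is indeed gfid split-brain, the usual manual fix on this release line
is to pick the copy to discard, delete it together with its gfid hard link
directly on that brick, and then trigger a heal. A rough sketch, assuming
the copy on nfs02 is the one to discard (the gfid is the opt-client-1 value
from your log; the file path is a placeholder):

# run on nfs02, directly on the brick
BRICK=/opt/bkk
GFID=082a3550-15ad-4719-a550-c60c7f8c8791

rm "$BRICK/<path-to>/Standalone_layout_Mail_fa22189c871e1e7c1a36b59209ef0812d5e709e9.php"
# the gfid hard link lives under .glusterfs/<first two>/<next two>/<gfid>
rm "$BRICK/.glusterfs/08/2a/$GFID"

# then let self-heal recreate the file from the good brick
gluster volume heal opt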

-Krutika

On Mon, Mar 14, 2016 at 1:05 PM, Sebastian.Gumprich wrote:

> Today, after setting the self-heal options to off, the problem happened
> again.
>
>
>
> Here’s what happens when I try to heal the problems:
>
>
>
> [root@nfs02 ~]# gluster volume heal opt info
>
> Brick nfs01:/opt/bkk
>
> Number of entries: 0
>
>
>
> Brick nfs02:/opt/bkk
>
>
> /releases/1.0.1/typo3temp/Cache/Code/fluid_template/Standalone_template_source_948126c20bcb68e1839bd1cb11b2326776a5a89a.php
>
> [18 <gfid:...> entries stripped by the list archive]
>
> /releases/1.0.1/typo3temp/locks/b3e7327e62654b234caf95033dd5f5db
>
> [11 <gfid:...> entries stripped by the list archive]
>
> /releases/1.0.1/typo3temp/locks/dabccbb64a6426cf9b1005b79f5765a9
>
> Number of entries: 32
>
>
>
> [root@nfs02 ~]# gluster volume heal opt full
>
> Launching heal operation to perform full self heal on volume opt has been
> unsuccessful
>
>
>
> [root@nfs02 ~]# gluster volume heal opt
>
> Launching heal operation to perform index self heal on volume opt has been
> successful
>
> Use heal info commands to check status
>
>
>
> [root@nfs02 ~]# gluster volume heal opt info
>
> Brick nfs01:/opt/bkk
>
> Number of entries: 0
>
>
>
> Brick nfs02:/opt/bkk
>
>
> /releases/1.0.1/typo3temp/Cache/Code/fluid_template/Standalone_template_source_948126c20bcb68e1839bd1cb11b2326776a5a89a.php
>
> [18 <gfid:...> entries stripped by the list archive]
>
> /releases/1.0.1/typo3temp/locks/b3e7327e62654b234caf95033dd5f5db
>
> [11 <gfid:...> entries stripped by the list archive]
>
> Number of entries: 31
>
>
>
>
>
> Here’s the corresponding log:
>
> [2016-03-14 07:25:34.593287] W [MSGID: 108008]
> [afr-self-heal-name.c:359:afr_selfheal_name_gfid_mismatch_check]
> 0-opt-replicate-0: GFID mismatch for
> /Standalone_layout_Mail_fa22189c871e1e7c1a36b59209ef0812d5e709e9.php
> 082a3550-15ad-4719-a550-c60c7f8c8791 on opt-client-1 and
> c2f289ca-6058-46ae-af8a-18491975eb7d on opt-client-0
>
>
>
>
>
> [root@nfs02 ~]# gluster volume info
>
>
>
> Volume Name: opt
>
> Type: Replicate
>
> Volume ID: 5b77070f-5378-45ec-9eda-5f7dd007ff8a
>
> Status: Started
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: nfs01:/opt/bkk
>
> Brick2: nfs02:/opt/bkk
>
> Options Reconfigured:
>
> cluster.metadata-self-heal: off
>
> cluster.data-self-heal: off
>
> cluster.entry-self-heal: off
>
> performance.md-cache-timeout: 1
>
> performance.cache-max-file-size: 2MB
>
> performance.io-thread-count: 16
>
> network.ping-timeout: 42
>
> performance.write-behind-window-size: 4MB
>
> performance.read-ahead: off
>
> performance.cache-refresh-timeout: 10
>
> performance.cache-size: 512MB
>
> performance.quick-read: off
>
> performance.readdir-ahead: on
>
>
>
>
>
> Any other ideas?
>
>
>
> Thanks!
>
>
>
> *From:* Krutika Dhananjay [mailto:kdhan...@redhat.com]
> *Sent:* Tuesday, March 8, 2016 04:04
> *To:* Gumprich, Sebastian
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] High load on glusterfs client
>
>
>
> Could you try disabling client-side heals and see if it works for you?
>
> Here's what you'd need to do:
>
> # gluster volume set <VOLNAME> entry-self-heal off
>
> # gluster volume set <VOLNAME> data-self-heal off
>
> # gluster volume set <VOLNAME> metadata-self-heal off
>
> -Krutika
>
>
>
> On Wed, Mar 2, 2016 at 12:37 AM, Sebastian.Gumprich wrote:
>
> Hello everyone,
>
>
>
> I’m experiencing high load on our glusterfs clients.
>
>
>
> Here’s the setup:
>
>
>
> There are two glusterfs servers:
>
>
>
> nfs01 and nfs02, with the following configuration:
>
>
>
> [root nfs01 ~]# gluster volume info opt
>
>
>
> Volume Name: opt
>
> Type: Replicate
>
> Volume ID: 5b77070f-5378-45ec-9eda-5f7dd007ff8a
>
> Status: Started
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1:  nfs01:/opt/bkk
>
> Brick2: nfs02:/opt/bkk
>
> Options Reconfigured:
>
> performance.readdir-ahead: on
>

Re: [Gluster-users] High load on glusterfs client

2016-03-07 Thread Krutika Dhananjay
Could you try disabling client-side heals and see if it works for you?
Here's what you'd need to do:

# gluster volume set <VOLNAME> entry-self-heal off
# gluster volume set <VOLNAME> data-self-heal off
# gluster volume set <VOLNAME> metadata-self-heal off
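
Once applied, the options should appear under "Options Reconfigured" in the
volume info, which is a quick way to verify them:

gluster volume info <VOLNAME> | grep self-heal
#   cluster.entry-self-heal: off
#   cluster.data-self-heal: off
#   cluster.metadata-self-heal: off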

-Krutika

On Wed, Mar 2, 2016 at 12:37 AM, Sebastian.Gumprich wrote:

> Hello everyone,
>
>
>
> I’m experiencing high load on our glusterfs clients.
>
>
>
> Here’s the setup:
>
>
>
> There are two glusterfs servers:
>
>
>
> nfs01 and nfs02, with the following configuration:
>
>
>
> [root nfs01 ~]# gluster volume info opt
>
>
>
> Volume Name: opt
>
> Type: Replicate
>
> Volume ID: 5b77070f-5378-45ec-9eda-5f7dd007ff8a
>
> Status: Started
>
> Number of Bricks: 1 x 2 = 2
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1:  nfs01:/opt/bkk
>
> Brick2: nfs02:/opt/bkk
>
> Options Reconfigured:
>
> performance.readdir-ahead: on
>
> performance.quick-read: off
>
> performance.cache-size: 512MB
>
> performance.cache-refresh-timeout: 10
>
> performance.read-ahead: off
>
> performance.write-behind-window-size: 4MB
>
> network.ping-timeout: 2
>
> performance.io-thread-count: 16
>
> performance.cache-max-file-size: 2MB
>
> performance.md-cache-timeout: 1
>
>
>
> Then there are two clients (web01 and web02) that mount the volume via a
> virtual IP address (nfs-VIP):
>
> nfs-VIP:/opt on /opt/bkk type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
>
> The operating system on all servers is CentOS Linux release 7.2.1511 (Core).
>
> Glusterfs version is glusterfs 3.7.6 built on Nov  9 2015 15:20:26
>
>
>
> The brick holds dynamic PHP web content from a TYPO3 CMS.
>
>
>
> On the client (web01) the following is logged in the gluster.log:
>
>
>
> iner_08850598886fb5f39c9cf1d269d7e20677f97ede.php>,
> e09948dd-1e9b-4430-8f55-3df64cda2385 on opt-client-1 and
> ba80a475-7b83-4c83-bd0c-798a108bfb63 on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:40:50.570040] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_layout_Detail_html_bd113d9c433c8f88376e47547db3b94e698a5ecd.php>,
> 739ee14c-2d5d-458b-bffd-83595bfcbe6a on opt-client-1 and
> 5a311733-731e-4478-ad3c-a70fbf66ba30 on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:40:50.572992] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_partial_Detail_FalMediaContainer_9c1b3fd40fca9019726b3f6b8bc04618ffadab7b.php>,
> bb6907a1-ce80-4e03-92df-6fbc69d24a4d on opt-client-1 and
> 6f88aa67-cb81-4c26-94b2-e3aaa8704e8d on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:40:50.791704] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_action_create_f40464a6a7f73d86cda514065167d59a7ddece73.php>,
> 5e5b224b-ea20-4d38-8504-61b24f5d6a3b on opt-client-1 and
> fab07af5-2aa5-4873-a6e2-6265ec78e304 on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:40:54.085964] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_action_detail_8d30b654cd8343fe40616b8a2f8a5343b1ed776e.php>,
> 4d75a687-b9ab-4f97-b698-38668d1981ae on opt-client-1 and
> 110b315e-2e28-4859-a8b9-e0f1629faa3c on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:40:56.153651] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_layout_Default_aae217b167ad82f4b1258bb01fa73f305844dbd8.php>,
> 6f7e2709-8c14-486a-85a2-a3cb48af4ca5 on opt-client-1 and
> 6ab62408-0406-4834-96b9-a51e18441d4c on opt-client-0. Skipping conservative
> merge on the file.
>
> [2016-03-01 18:41:05.476126] I [MSGID: 108026]
> [afr-self-heal-entry.c:593:afr_selfheal_entry_do] 0-opt-replicate-0:
> performing entry selfheal on 7a922c37-48d0-4dfb-8abb-18a435c948af
>
> [2016-03-01 18:41:05.597093] I [MSGID: 108026]
> [afr-self-heal-common.c:651:afr_log_selfheal] 0-opt-replicate-0: Completed
> entry selfheal on 7a922c37-48d0-4dfb-8abb-18a435c948af. source=1 sinks=0
>
> [2016-03-01 18:41:05.790944] E [MSGID: 108008]
> [afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch]
> 0-opt-replicate-0: Gfid mismatch detected for
> <4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_action_form_f0755f8526150f023fd98252b510a40c49586dbd.php>,
> 118668d9-608a-477a-b655-bcc6c2298bf4 on opt-client-1 and
> a87943a4-e18a-4642-adff-1ad765496533 on opt-client-0. Skipping conservative
> merge on the file.
>
> [

[Gluster-users] High load on glusterfs client

2016-03-07 Thread Sebastian.Gumprich
Hello everyone,

I'm experiencing high load on our glusterfs clients.

Here's the setup:

There are two glusterfs servers:

nfs01 and nfs02, with the following configuration:

[root nfs01 ~]# gluster volume info opt

Volume Name: opt
Type: Replicate
Volume ID: 5b77070f-5378-45ec-9eda-5f7dd007ff8a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1:  nfs01:/opt/bkk
Brick2: nfs02:/opt/bkk
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.cache-size: 512MB
performance.cache-refresh-timeout: 10
performance.read-ahead: off
performance.write-behind-window-size: 4MB
network.ping-timeout: 2
performance.io-thread-count: 16
performance.cache-max-file-size: 2MB
performance.md-cache-timeout: 1

Then there are two clients (web01 and web02) that mount the volume via a virtual
IP address (nfs-VIP):
nfs-VIP:/opt on /opt/bkk type fuse.glusterfs 
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)

The operating system on all servers is CentOS Linux release 7.2.1511 (Core).
Glusterfs version is glusterfs 3.7.6 built on Nov  9 2015 15:20:26

The brick holds dynamic PHP web content from a TYPO3 CMS.
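
Before digging into the logs, a quick triage on the loaded client and on one
of the servers can show whether the glusterfs FUSE process and the self-heal
path are responsible (the pgrep pattern below is an assumption about how the
mount was started):

# on the client: per-thread CPU usage of the glusterfs FUSE client process
top -H -p "$(pgrep -o -f 'glusterfs.*opt')"

# on a server: per-FOP latency statistics for the volume
gluster volume profile opt start
gluster volume profile opt info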

On the client (web01) the following is logged in the gluster.log:

iner_08850598886fb5f39c9cf1d269d7e20677f97ede.php>, 
e09948dd-1e9b-4430-8f55-3df64cda2385 on opt-client-1 and 
ba80a475-7b83-4c83-bd0c-798a108bfb63 on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:40:50.570040] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_layout_Detail_html_bd113d9c433c8f88376e47547db3b94e698a5ecd.php>,
 739ee14c-2d5d-458b-bffd-83595bfcbe6a on opt-client-1 and 
5a311733-731e-4478-ad3c-a70fbf66ba30 on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:40:50.572992] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_partial_Detail_FalMediaContainer_9c1b3fd40fca9019726b3f6b8bc04618ffadab7b.php>,
 bb6907a1-ce80-4e03-92df-6fbc69d24a4d on opt-client-1 and 
6f88aa67-cb81-4c26-94b2-e3aaa8704e8d on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:40:50.791704] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_action_create_f40464a6a7f73d86cda514065167d59a7ddece73.php>,
 5e5b224b-ea20-4d38-8504-61b24f5d6a3b on opt-client-1 and 
fab07af5-2aa5-4873-a6e2-6265ec78e304 on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:40:54.085964] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/News_News_action_detail_8d30b654cd8343fe40616b8a2f8a5343b1ed776e.php>,
 4d75a687-b9ab-4f97-b698-38668d1981ae on opt-client-1 and 
110b315e-2e28-4859-a8b9-e0f1629faa3c on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:40:56.153651] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_layout_Default_aae217b167ad82f4b1258bb01fa73f305844dbd8.php>,
 6f7e2709-8c14-486a-85a2-a3cb48af4ca5 on opt-client-1 and 
6ab62408-0406-4834-96b9-a51e18441d4c on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:41:05.476126] I [MSGID: 108026] 
[afr-self-heal-entry.c:593:afr_selfheal_entry_do] 0-opt-replicate-0: performing 
entry selfheal on 7a922c37-48d0-4dfb-8abb-18a435c948af
[2016-03-01 18:41:05.597093] I [MSGID: 108026] 
[afr-self-heal-common.c:651:afr_log_selfheal] 0-opt-replicate-0: Completed 
entry selfheal on 7a922c37-48d0-4dfb-8abb-18a435c948af. source=1 sinks=0
[2016-03-01 18:41:05.790944] E [MSGID: 108008] 
[afr-self-heal-entry.c:253:afr_selfheal_detect_gfid_and_type_mismatch] 
0-opt-replicate-0: Gfid mismatch detected for 
<4c6dda77-6a2b-4996-bca4-9ace4cee45cc/Powermail_Form_action_form_f0755f8526150f023fd98252b510a40c49586dbd.php>,
 118668d9-608a-477a-b655-bcc6c2298bf4 on opt-client-1 and 
a87943a4-e18a-4642-adff-1ad765496533 on opt-client-0. Skipping conservative 
merge on the file.
[2016-03-01 18:41:06.649695] W [MSGID: 108008] 
[afr-self-heal-name.c:359:afr_selfheal_name_gfid_mismatch_check] 
0-opt-replicate-0: GFID mismatch for 
/Powermail_Form_action_form_f0755f8526150f023fd98252b510a40c49586dbd.php
 118668d9-608a-477a-b655-bcc6c2298bf4 on opt-client-1 and 
a87943a4-e18a-4642-adff-1ad765496533 on opt-client-0
[2016-03-01 18:41:06.661277] W [fuse-bridge.c:462:fuse_entry_cbk] 
0-glusterfs-fuse: 184415191: LOOKUP() 
/releases/1.0.1/typo3temp/Cache/Code/fluid_template/Powermail_Form_action_form_f07