Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-17 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
the tar issue. cynthia From: Ravishankar N Sent: March 17, 2020 17:08 To: Zhou, Cynthia (NSB - CN/Hangzhou) ; Kotresh Hiremath Ravishankar Cc: Gluster Devel Subject: Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime On 17/03/20 12:56 pm, Zhou

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-17 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
against the time sent from the client. As Amar mentioned, this doesn't fit well into the scheme of how ctime is designed. Definitely, keeping it optional and disabling it by default is one way. But is that your intention here? On Tue, Mar 17, 2020 at 10:56 AM Zhou, Cynthia (NSB - CN/Han
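For context, a minimal C sketch of the server-side guard being debated here, assuming a hypothetical mdata timestamp type and helper name (illustrative only, not the actual glusterfs ctime code): accept a client-supplied timestamp only if it does not move time backwards.

    /* Hypothetical sketch of the validation discussed above; names are
     * illustrative, not the actual glusterfs mdata code. */
    #include <stdint.h>

    typedef struct {
        int64_t sec;
        int64_t nsec;
    } mdata_ts_t;

    /* Keep the stored timestamp unless the client-supplied one moves
     * forward, so a client with a lagging clock cannot roll ctime back. */
    static mdata_ts_t
    mdata_pick_ctime(mdata_ts_t stored, mdata_ts_t client)
    {
        if (client.sec > stored.sec ||
            (client.sec == stored.sec && client.nsec > stored.nsec))
            return client;
        return stored;
    }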

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-16 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, thanks for your feedback! I will run a local test to verify this patch first. cynthia From: Amar Tumballi Sent: March 17, 2020 13:18 To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Kotresh Hiremath Ravishankar ; Gluster Devel Subject: Re: [Gluster-devel] could you help check a glusterfs

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-16 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
mdata, so that subsequent changes to the file can be propagated to other clients. cynthia From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: March 12, 2020 17:31 To: 'Kotresh Hiremath Ravishankar' Cc: 'Gluster Devel' Subject: RE: could you help check a glusterfs issue that seems to be

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-12 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
/root) Gid: ( 615/_nokfsuifileshare)
Access: 2020-04-11 12:20:22.100395536 +0300
Modify: 2020-03-12 11:25:04.094913452 +0200
Change: 2020-03-12 11:25:04.095913453 +0200
Birth:  2020-03-12 07:53:26.803783053 +0200
From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: March 12, 2020 16:09 To: 'Ko

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-12 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
14:37 To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Gluster Devel Subject: Re: could you help check a glusterfs issue that seems to be related to ctime All the perf xlators depend on time (mostly mtime, I guess). In my setup, only quick-read was enabled, and hence disabling it worked for me. All

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-11 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
correct content? The file size shown is 141, but the actual file in the brick is longer than that. cynthia From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: March 12, 2020 12:53 To: 'Kotresh Hiremath Ravishankar' Cc: 'Gluster Devel' Subject: RE: could you help check a glusterfs issue

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-11 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
From my local test, this issue is gone only when both features.ctime and ctime.noatime are disabled. Alternatively, doing echo 3 > /proc/sys/vm/drop_caches each time after some client changes the file also makes the cat command show the correct data (same as the brick). cynthia From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: 20

Re: [Gluster-devel] could you help check a glusterfs issue that seems to be related to ctime

2020-03-11 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Only after I disable utime is this issue completely gone (features.ctime off, ctime.noatime off). Do you know why this is? Cynthia Nokia storage team From: Kotresh Hiremath Ravishankar Sent: March 11, 2020 22:05 To: Zhou, Cynthia (NSB - CN/Hangzhou

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-06-10 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi, how about this patch? I see there is a failed test; is that related to my change? cynthia From: Raghavendra Gowdappa Sent: Thursday, May 09, 2019 12:13 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Amar Tumballi Suryanarayan ; gluster-devel@gluster.org Subject: Re: [Gluster-devel

[Gluster-devel] glusterfs coredump--mempool

2019-05-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi glusterfs expert, I hit a glusterfs process coredump again in my env, shortly after glusterfs process startup. The frame's local becomes NULL, but it seems this frame has not been destroyed yet, since the magic number (GF_MEM_HEADER_MAGIC) is still untouched. Using host libthread_db library "/lib64/libthread_db.so.1".
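For context, glusterfs allocations carry a small header whose magic value (GF_MEM_HEADER_MAGIC) marks the block as live; a minimal C sketch of that pattern (layout and names are illustrative, not the exact glusterfs mem-pool structs):

    /* Illustrative magic-header pattern; not the actual glusterfs code. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define MEM_HEADER_MAGIC 0xCAFEBABEu

    typedef struct {
        uint32_t magic; /* set at alloc time, checked before use/free */
        size_t size;
    } mem_header_t;

    static void *
    guarded_alloc(size_t size)
    {
        mem_header_t *hdr = malloc(sizeof(*hdr) + size);
        if (!hdr)
            return NULL;
        hdr->magic = MEM_HEADER_MAGIC;
        hdr->size = size;
        return hdr + 1; /* caller sees the memory just past the header */
    }

    static void
    guarded_check(void *ptr)
    {
        mem_header_t *hdr = (mem_header_t *)ptr - 1;
        /* An intact magic, as in the coredump above, suggests the frame
         * itself was not freed or overwritten. */
        assert(hdr->magic == MEM_HEADER_MAGIC);
    }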

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-05-08 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi, Ok, It is posted to https://review.gluster.org/#/c/glusterfs/+/22687/ From: Raghavendra Gowdappa Sent: Wednesday, May 08, 2019 7:35 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Amar Tumballi Suryanarayan ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glusterfsd memory leak issue

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-05-08 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
>ssl_ctx);
+ priv->ssl_ctx = NULL;
+ }
+ SSL_shutdown(priv->ssl_ssl);
+ SSL_clear(priv->ssl_ssl);
+ SSL_free(priv->ssl_ssl);
From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: Monday, May 06, 2019 2:12 PM To: 'Amar T
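Pieced together from the fragments quoted in this thread, the shape of the fix is to free the per-connection SSL objects during socket teardown. A hedged C reconstruction (SSL_CTX_free is inferred from the truncated ">ssl_ctx);" line above, and the surrounding priv context is assumed rather than copied from the full patch):

    /* Reconstruction from the quoted fragments; not the verbatim patch. */
    if (priv->ssl_ctx) {
        SSL_CTX_free(priv->ssl_ctx);
        priv->ssl_ctx = NULL;
    }
    if (priv->ssl_ssl) {
        SSL_shutdown(priv->ssl_ssl); /* send close_notify to the peer */
        SSL_clear(priv->ssl_ssl);
        SSL_free(priv->ssl_ssl); /* also frees a BIO installed via SSL_set_bio */
        priv->ssl_ssl = NULL;
    }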

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-05-05 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
before. Is glusterfs using SSL_accept correctly? cynthia From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: Monday, May 06, 2019 10:34 AM To: 'Amar Tumballi Suryanarayan' Cc: Milind Changire ; gluster-devel@gluster.org Subject: RE: [Gluster-devel] glusterfsd memory leak issue found after enab

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-05-05 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
+ SSL_shutdown(priv->ssl_ssl);
+ SSL_clear(priv->ssl_ssl);
+ SSL_free(priv->ssl_ssl);
+ priv->ssl_ssl = NULL;
+ }
if (priv->ssl_private_key) {
GF_FREE(priv->ssl_private_key);
}
From: Amar Tumballi Suryanarayan Sent: Wednesday, May 01, 2019 8:43 PM To: Zhou, Cynthia (NSB - CN/H

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-27 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
ill cause it to be some unsanitized value and cause this stuck? cynthia From: Raghavendra Gowdappa Sent: Thursday, April 25, 2019 2:07 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-devel@gluster.org Subject: Re: glusterd stuck for glusterfs with version 3.12.15 On Mon, Apr 15, 2019 at 12

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-22 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
patch. Step 1: run while true; do gluster v heal info; done. Step 2: check the vol-name glusterfsd memory usage; it is obviously increasing. cynthia From: Milind Changire Sent: Monday, April 22, 2019 2:36 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Atin Mukherjee ; gluster-devel@gluster.org Subje

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, another question: why does priv->ssl_sbio = BIO_new_socket(priv->sock, BIO_NOCLOSE); use BIO_NOCLOSE mode instead of BIO_CLOSE? cynthia From: Milind Changire Sent: Monday, April 22, 2019 2:21 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Atin Mukherjee ; gluster-devel@gluster.org Subje
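For reference, the two flags differ only in who owns the file descriptor; a minimal standalone C illustration of the semantics in question:

    /* BIO_new_socket() close-flag semantics referenced above. */
    #include <openssl/bio.h>

    void
    bio_mode_example(int sock)
    {
        /* BIO_NOCLOSE: BIO_free() leaves sock open; the caller (here,
         * the transport layer) must close() it itself. */
        BIO *borrowed = BIO_new_socket(sock, BIO_NOCLOSE);
        BIO_free(borrowed);

        /* BIO_CLOSE: BIO_free() also close()s sock. */
        BIO *owned = BIO_new_socket(sock, BIO_CLOSE);
        BIO_free(owned); /* sock is closed here */
    }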

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
I did some googling. It seems it is not needed to call BIO_free; SSL_free will free the BIO. https://groups.google.com/forum/#!topic/mailing.openssl.users/8i9cRQGlfDM From: Milind Changire Sent: Monday, April 22, 2019 1:35 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Atin Mukherjee ; gluster-devel
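The ownership rule at play: once a BIO has been handed to an SSL object with SSL_set_bio, SSL_free releases it, so a separate BIO_free would be a double free. A minimal C sketch of that lifecycle (the ctx and sock parameters are assumed context):

    #include <openssl/ssl.h>

    void
    ssl_bio_ownership(SSL_CTX *ctx, int sock)
    {
        SSL *ssl = SSL_new(ctx);
        BIO *sbio = BIO_new_socket(sock, BIO_NOCLOSE);
        SSL_set_bio(ssl, sbio, sbio); /* ssl now owns sbio (rbio and wbio) */
        /* ... handshake and I/O ... */
        SSL_free(ssl); /* frees sbio too; do not BIO_free(sbio) afterwards */
    }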

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
"priv->ssl_sbio of socket(%d)is %p ",priv->sock,priv->ssl_sbio); +if(priv->ssl_sbio != NULL) +BIO_free(priv->ssl_sbio); +priv->ssl_ssl = NULL; + priv->ssl_sbio = NULL; + } if (priv->ssl_private_key) { GF_FREE(priv->ssl_private_key)

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
019-04-21 05:02:17.829337] T [socket.c:493:__socket_ssl_readv] 0-tcp.ccs-server: * reading over SSL cynthia From: Milind Changire Sent: Monday, April 22, 2019 10:21 AM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Atin Mukherjee ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glusterfsd

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, I will post it later. cynthia From: Raghavendra Gowdappa Sent: Monday, April 22, 2019 10:09 AM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Atin Mukherjee ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl On Mon, Apr 22, 2019 at 7

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-21 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
&& priv->ssl_ssl)
+ {
+ gf_log(this->name, GF_LOG_TRACE,
+ "clear and reset for socket(%d), free ssl ",
+ priv->sock);
+ SSL_shutdown(priv->ssl_ssl);
+ SSL_clear(priv->ssl_ssl);
+ SSL_free(priv->ssl_ssl);
+ priv->ssl_ssl = NULL;
+

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-18 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
31 bytes in 518 allocations from stack
    CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]
        [unknown]
11704 bytes in 371 allocations from stack
    CRYPTO_malloc+0x58 [libcrypto.so.1.0.2p]
        [unknown]
cynthia From: Zhou, Cynthia (NSB - CN/

Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-18 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Sent: Thursday, April 18, 2019 1:19 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Raghavendra Gowdappa ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glusterfsd memory leak issue found after enabling ssl On Wed, 17 Apr 2019 at 10:53, Zhou, Cynthia (NSB - CN/Hangzhou) mailto:cynthia.z

[Gluster-devel] glusterfsd memory leak issue found after enabling ssl

2019-04-16 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi, In my recent test, I found that there is a very severe glusterfsd memory leak when the socket ssl option is enabled. If I monitor the glusterfsd process RSS with the command pidstat -r -p 1 and, at the same time, execute the command gluster v heal info, I find that the RSS keeps increasing until the system Out of m

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-15 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, I got your point, thanks for responding! cynthia From: Raghavendra Gowdappa Sent: Monday, April 15, 2019 4:36 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-devel@gluster.org Subject: Re: glusterd stuck for glusterfs with version 3.12.15 On Mon, Apr 15, 2019 at 12:52 PM Zhou

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-15 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
arly to allow concurrent handling of the same socket to happen, and after moving it to the end of socket_event_poll this glusterd stuck issue disappeared. cynthia From: Raghavendra Gowdappa Sent: Monday, April 15, 2019 2:36 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-devel@gluster.org Subject: Re: g

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-14 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Ok, thanks for your comment! cynthia From: Raghavendra Gowdappa Sent: Monday, April 15, 2019 11:52 AM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-devel@gluster.org Subject: Re: glusterd stuck for glusterfs with version 3.12.15 Cynthia, On Mon, Apr 15, 2019 at 8:10 AM Zhou, Cynthia (NSB

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-14 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
priv->gen); return ret; } cynthia From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: Tuesday, April 09, 2019 3:57 PM To: 'Raghavendra Gowdappa' Cc: gluster-devel@gluster.org Subject: RE: glusterd stuck for glusterfs with version 3.12.15 Can you figure out some poss

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-09 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
rpc_transport_pollin_destroy again, and so it gets stuck on this lock. Also, there should not be two threads handling the same socket at the same time, although there has been a patch that claimed to tackle this issue. cynthia From: Raghavendra Gowdappa Sent: Tuesday, April 09, 2019 3:52 PM To: Zhou, Cynthia (NSB - CN/Hangzhou

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-09 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
iobref_unref does not release iobref->lock, so thread 9 may block. cynthia From: Sanju Rakonde Sent: Tuesday, April 09, 2019 3:08 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Raghavendra Gowdappa ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glusterd stuck for glusterfs with vers
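The deadlock shape being described, sketched minimally in C (illustrative structure, not the actual glusterfs iobref code): an unref path that can return while still holding the lock will block any later taker of that lock.

    /* Illustrative lock-leak pattern; not the actual iobref code. */
    #include <pthread.h>

    typedef struct {
        pthread_mutex_t lock;
        int ref;
    } iobref_like_t;

    static void
    unref_buggy(iobref_like_t *b)
    {
        pthread_mutex_lock(&b->lock);
        if (--b->ref > 0)
            return; /* BUG: returns with b->lock still held */
        pthread_mutex_unlock(&b->lock);
        /* ... destroy path would free b here ... */
    }

    static void
    other_thread(iobref_like_t *b)
    {
        pthread_mutex_lock(&b->lock); /* blocks forever if unref leaked it */
        pthread_mutex_unlock(&b->lock);
    }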

Re: [Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-08 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
t.c:124
#3  0x0040ab95 in main ()
(gdb) q!
A syntax error in expression, near `'.
(gdb) quit
From: Sanju Rakonde Sent: Monday, April 08, 2019 4:58 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: Raghavendra Gowdappa ; gluster-devel@gluster.org Subject: Re: [Gluster-devel] glu

[Gluster-devel] glusterd stuck for glusterfs with version 3.12.15

2019-04-07 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi glusterfs experts, Good day! In my test env, a glusterd stuck issue sometimes happens, and glusterd does not respond to any gluster commands. When I checked this issue, I found that glusterd thread 9 and thread 8 are dealing with the same socket. I thought the following patch should be able to solve this

[Gluster-devel] when there is a dangling entry (without gfid) in only one brick dir, glusterfs heal info will keep showing the entry, and glustershd cannot really remove this entry from the brick.

2018-10-11 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi glusterfs expert, I hit one problem in my test bed (3 bricks on 3 sn nodes): "/" is always in the gluster v heal info output. In my ftest + reboot-sn-nodes-randomly test, the gluster v heal info output keeps showing the entry "/" even for hours, and even if you do some touch or ls of /mnt/mstate , i

Re: [Gluster-devel] query about one glustershd coredump issue

2018-09-28 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
From: Ravishankar N Sent: Thursday, September 27, 2018 6:04 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Subject: Re: query about one glustershd coredump issue Hi, I think it is better to send it to the gluster-users mailing list to get more attention. Regards, Ravi On 09/27/2018 01:10 PM, Zhou

Re: [Gluster-devel] remaining entry in gluster volume heal info command even after reboot

2018-09-06 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
this situation. Do you have any comments? From: Zhou, Cynthia (NSB - CN/Hangzhou) Sent: Thursday, September 06, 2018 9:08 AM To: 'Pranith Kumar Karampuri' Cc: Gluster Devel ; Ravishankar N Subject: RE: remaining entry in gluster volume heal info command even after reboot This test ste

Re: [Gluster-devel] query about a split-brain problem found in glusterfs 3.12.3

2018-02-10 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
non-zero. Gluster could self-heal such a scenario, but in this case it could never self-heal. From: Ravishankar N [mailto:ravishan...@redhat.com] Sent: Thursday, February 08, 2018 11:56 AM To: Zhou, Cynthia (NSB - CN/Hangzhou) ; Gluster-devel@gluster.org Subject: Re: query about a split-b

Re: [Gluster-devel] query about a split-brain problem found in glusterfs 3.12.3

2018-02-10 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi, Thanks for responding! If a split-brain happening in this kind of test is reasonable, how do we fix this split-brain situation? From: Ravishankar N [mailto:ravishan...@redhat.com] Sent: Thursday, February 08, 2018 12:12 AM To: Zhou, Cynthia (NSB - CN/Hangzhou) ; Gluster-devel@gluster.org Subject

[Gluster-devel] query about a split-brain problem found in glusterfs 3.12.3

2018-02-07 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi glusterfs expert: Good day. Lately, we met a glusterfs split-brain problem in our env in /mnt/export/testdir. We start 3 ior processes (IOR tool) from non-sn nodes, which create/remove files repeatedly in testdir. Then we reboot the sn nodes (sn0 and sn1) in sequence. Then we

Re: [Gluster-devel] [Gluster-users] after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!

2017-09-28 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
)18657188311 From: Karthik Subrahmanya [mailto:ksubr...@redhat.com] Sent: Thursday, September 28, 2017 2:02 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-us...@gluster.org; gluster-devel@gluster.org Subject: Re: [Gluster-users] after hard reboot, split-brain happened, but nothing showed in

Re: [Gluster-devel] [Gluster-users] after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!

2017-09-28 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
The version I am using is glusterfs 3.6.9 Best regards, Cynthia (周琳) MBB SM HETRAN SW3 MATRIX Storage Mobile: +86 (0)18657188311 From: Karthik Subrahmanya [mailto:ksubr...@redhat.com] Sent: Thursday, September 28, 2017 2:37 PM To: Zhou, Cynthia (NSB - CN/Hangzhou) Cc: gluster-us...@gluster.org

[Gluster-devel] after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!

2017-09-27 Thread Zhou, Cynthia (NSB - CN/Hangzhou)
Hi gluster experts, I have met a tough “split-brain” problem. Sometimes, after a hard reboot, we find some files in split-brain; however, neither the files nor their parent directory is shown by the command “gluster volume heal info”, and there is no entry in the .glusterfs/indices/xattrop directory, ca