Re: [Gluster-devel] gluster 10.3: task glfs_fusenoti blocked for more than 120 seconds

2023-05-02 Thread Mohit Agrawal
I don't think the issue is on the Gluster side; it seems the issue is on the kernel side (a possible deadlock in fuse_reverse_inval_entry): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bda9a71980e083699a0360963c0135657b73f47a On Tue, May 2, 2023 at 5:48 PM Hu Bert wrote: >

Re: [Gluster-devel] Problem during reproducing smallfile experiment on Gluster 10

2022-01-20 Thread Mohit Agrawal
run the test case (no_of_file and no_of_threads) Thanks, Mohit Agrawal On Thu, Jan 20, 2022 at 12:24 PM 박현승 wrote: > Dear Gluster developers, > > > > This is Hyunseung Park at Gluesys, South Korea. > > > > We are trying to replicate the test in > https://github.com

Re: [Gluster-devel] [Smallfile-Replica-3] Performance report for Gluster Upstream - 19/10/2021 Test Status: FAIL (14.45%)

2021-10-18 Thread Mohit Agrawal
It seems there is some issue on the perf machines; there is no code-level change between this and the previous job, but the current job is showing a regression. Thanks, Mohit Agrawal On Tue, Oct 19, 2021 at 2:11 AM Gluster-jenkins wrote: > *Test details:* > RPM Location: Upstream > OS Version

Re: [Gluster-devel] [erik.jacob...@hpe.com: [Gluster-users] gluster forcing IPV6 on our IPV4 servers, glusterd fails (was gluster update question regarding new DNS resolution requirement)]

2021-09-21 Thread Mohit Agrawal
fs/pull/2666 Thanks, Mohit Agrawal On Tue, Sep 21, 2021 at 2:42 PM Erik Jacobson wrote: > Dear devel team - > > I botched the email address here. I type "hpcm-devel" like 30 times a > day so I mistyped that. Sorry about that. > > Any advice appreciated and see

[Gluster-devel] Mohit Agrawal is on PTO today and tomorrow

2021-08-29 Thread Mohit Agrawal
--- Community Meeting Calendar: Schedule - Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC Bridge: https://meet.google.com/cpu-eiue-hvk Gluster-devel mailing list Gluster-devel@gluster.org https://lists.gluster.org/mailman/listinfo/gluster-devel

[Gluster-devel] OOO: Mohit Agrawal is on PTO today and tomorrow

2021-07-28 Thread Mohit Agrawal

[Gluster-devel] Today I am suffering from a fever and throat infection, so I am taking sick leave

2021-05-11 Thread Mohit Agrawal

Re: [Gluster-devel] Fwd: New Defects reported by Coverity Scan for gluster/glusterfs

2021-04-27 Thread Mohit Agrawal
+Nikhil Ladha Can you resolve the same? On Wed, Apr 28, 2021 at 12:10 PM Yaniv Kaul wrote: > 2 new coverity issues after yesterday's merge. > Y. > > > -- Forwarded message - > From: > Date: Wed, 28 Apr 2021, 8:57 > Subject: New Defects reported by Coverity Scan for gluster/glus

[Gluster-devel] Mohit Agrawal is on PTO today

2021-03-28 Thread Mohit Agrawal

[Gluster-devel] OOO: Mohit is on PTO today

2021-03-14 Thread Mohit Agrawal

Re: [Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-02-25 Thread Mohit Agrawal
is around 600G (per-tar size is around 1G). It would be good if we could run the same for a long duration, around 6000 iterations. On Tue, Feb 25, 2020 at 1:31 PM Mohit Agrawal wrote: > With these 2 changes, we are getting a good improvement in file creation > and > slight improvement in

Re: [Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-02-25 Thread Mohit Agrawal
ar was taking almost 36-37 hours; with the patch, tar takes almost 26 hours. We were getting a similar kind of improvement with the smallfile tool also. On Tue, Feb 25, 2020 at 1:29 PM Mohit Agrawal wrote: > Hi, > We observed performance is mainly hurt while .glusterfs is having huge > data.A

[Gluster-devel] following up on the work underway for improvements in ls-l - looking for the data on the test runs

2020-02-24 Thread Mohit Agrawal
all's were also helpful to improve performance. Regards, Mohit Agrawal

[Gluster-devel] Regards to taking lock in dictionary

2019-10-23 Thread Mohit Agrawal
not consumed by multiple threads at the same time, still we do need to take a lock on the dictionary. Please share if I need to test anything more to validate this. Regards, Mohit Agrawal

Re: [Gluster-devel] Query regards to expose client-pid to fuse process

2019-10-11 Thread Mohit Agrawal
Hi, Yes, you are right, it is not a default value. We can assign the client_pid only when the volume is mounted directly through the glusterfs binary, like /usr/local/sbin/glusterfs --process-name fuse --volfile-server=192.168.1.3 --client-pid=-3 --volfile-id=/test /mnt1 Regards, Mohit Agrawal

[Gluster-devel] Query regards to expose client-pid to fuse process

2019-10-11 Thread Mohit Agrawal
to expose client-pid as an argument to the fuse process? I think we need to resolve it. Please share your views on this. Thanks, Mohit Agrawal

Re: [Gluster-devel] ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t generating core very often

2019-05-20 Thread Mohit Agrawal
I am working on it. On Mon, May 20, 2019 at 6:39 PM Deepshikha Khandelwal wrote: > Any updates on this? > > It's failing few of the regression runs. > > On Sat, May 18, 2019 at 6:27 PM Mohit Agrawal wrote: > >> Hi Rafi, >> >> I have not che

Re: [Gluster-devel] ./tests/bugs/snapshot/bug-1399598-uss-with-ssl.t generating core very often

2019-05-18 Thread Mohit Agrawal
Hi Rafi, I have not checked yet, on Monday I will check the same. Thanks, Mohit Agrawal On Sat, May 18, 2019 at 3:56 PM RAFI KC wrote: > All of this links have a common backtrace, and suggest a crash from socket > layer with ssl code path, > > Backtrace is > > Thread 1 (Thr

Re: [Gluster-devel] Query regarding dictionary logic

2019-05-01 Thread Mohit Agrawal
provide some other test to validate the same. Thanks, Mohit Agrawal On Tue, Apr 30, 2019 at 2:29 PM Mohit Agrawal wrote: > Thanks, Amar for sharing the patch, I will test and share the result. > > On Tue, Apr 30, 2019 at 2:23 PM Amar Tumballi Suryanarayan < > atumb...@re

Re: [Gluster-devel] Query regarding dictionary logic

2019-04-30 Thread Mohit Agrawal
ry for optimization was better at that time. > > -Amar > > On Tue, Apr 30, 2019 at 12:02 PM Mohit Agrawal > wrote: > >> sure Vijay, I will try and update. >> >> Regards, >> Mohit Agrawal >> >> On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur wrot

Re: [Gluster-devel] Query regarding dictionary logic

2019-04-29 Thread Mohit Agrawal
sure Vijay, I will try and update. Regards, Mohit Agrawal On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur wrote: > Hi Mohit, > > On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal wrote: > >> Hi All, >> >> I was just looking at the code of dict, I have one query curre

[Gluster-devel] Query regarding dictionary logic

2019-04-29 Thread Mohit Agrawal
). Before optimizing the code, I just want to know: what was the exact reason to define hash_size as 1? Please share your view on this. Thanks, Mohit Agrawal

[Gluster-devel] Regards glusterd.service is not started automatically after reboot the node

2019-04-16 Thread Mohit Agrawal
glusterfs.spec.in. I have posted a patch (https://review.gluster.org/#/c/glusterfs/+/22584/) to resolve it. Thanks, Mohit Agrawal

[Gluster-devel] Be careful before closing fd in a default case

2019-04-11 Thread Mohit Agrawal
uld not be zero. I have fixed the same in (https://review.gluster.org/#/c/glusterfs/+/22549/) and uploaded a .t as well. Regards, Mohit Agrawal

[Gluster-devel] Failing test case ./tests/bugs/distribute/bug-1161311.t

2019-02-12 Thread Mohit Agrawal
-1161311.t; done 30 times on a softserve VM that is similar to the build infra; the test case does not take more than 3 minutes, but on the build server the test case is getting timed out. Kindly share your input if you are facing the same. Thanks, Mohit Agrawal

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-12 Thread Mohit Agrawal
-patchy-server: 1486: FXATTROP 2 (d91f6331-d394-479d-ab51-6bcf674ac3e0), client: CTX_ID:b785c2b0-3453-4a03-b129-19e6ceeb5346-GRAPH_ID:0-PID:24147-HOST:softserve-moagrawa-test.1-PC_NAME:patchy-client-1-RECON_NO:-1, error-xlator: patchy-posix [Bad file descriptor] Thanks, Mohit Agrawal On Sat, Jan

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-12 Thread Mohit Agrawal
) at ec-heald.c:311 #6 0x7f83add0367b in ec_shd_full_healer (data=0x7f83a8030b20) at ec-heald.c:372 #7 0x7f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0 #8 0x7f83bafd634d in clone () from /usr/lib64/libc.so.6 Thread 1 (Thread 0x7f83bcdd1780 (LWP 25383)): #0 0x00007f83bb70af57 in

Re: [Gluster-devel] Regression health for release-5.next and release-6

2019-01-10 Thread Mohit Agrawal
I think we should consider regression builds after merging the patch (https://review.gluster.org/#/c/glusterfs/+/21990/), as we know this patch introduced some delay. Thanks, Mohit Agrawal On Thu, Jan 10, 2019 at 3:55 PM Atin Mukherjee wrote: > Mohit, Sanju - request you to investigate

Re: [Gluster-devel] Regression failure: https://build.gluster.org/job/centos7-regression/3678/

2018-11-14 Thread Mohit Agrawal
call this function (posix_fs_health_check) between cancellation points, so ideally hostname and base_path would be freed after this function returns, but here it seems hostname/base_path are freed at the time gf_event is called. Thanks, Mohit Agrawal On Wed, Nov 14, 2018 at 9:42

Re: [Gluster-devel] tests/bugs/core/multiplex-limit-issue-151.t timed out

2018-08-10 Thread Mohit Agrawal
Filed a bug: https://bugzilla.redhat.com/show_bug.cgi?id=1615003. I am not able to extract logs specific to this test case from the log dump. Thanks, Mohit Agrawal On Sat, Aug 11, 2018 at 9:27 AM, Atin Mukherjee wrote: > https://build.gluster.org/job/line-coverage/455/consoleFull >

Re: [Gluster-devel] Test: ./tests/bugs/ec/bug-1236065.t

2018-08-07 Thread Mohit Agrawal
I have posted a patch, https://review.gluster.org/#/c/20657/, and started a brick-mux regression run to validate it. Thanks, Mohit Agrawal On Wed, Aug 8, 2018 at 7:22 AM, Atin Mukherjee wrote: > +Mohit > > Requesting Mohit for help. > > On Wed, 8 Aug 2018 at 06:53, Shyam Ranga

[Gluster-devel] Regression failing on tests/bugs/core/bug-1432542-mpx-restart-crash.t

2018-06-29 Thread Mohit Agrawal
It seems it is consistently reproducible on your patch. To test your code in this build, you can mark it as a bad test and re-run the regression. I will check how we can resolve it. Regards, Mohit Agrawal

Re: [Gluster-devel] tests/bugs/rpc/bug-921072.t - fails almost all the times in mainline

2018-02-20 Thread Mohit Agrawal
TEST: 46 Y force_umount /mnt/nfs/0 ++ -- [2018-02-21 05:06:15.297241]:++ G_LOG:./tests/bugs/rpc/bug-921072.t: TEST: 57 mount_nfs localhost:/patchy /mnt/nfs/0 nolock ++ [2018-02-21 05:08:20.341869]:++ G_LOG:./tests/bugs/rpc/bug-921072.t: TEST: 58 Y force_umount /mnt/

Re: [Gluster-devel] Query specific to getting crash

2017-10-09 Thread Mohit Agrawal
Hi Niels, Thanks for your response. I will file a bug and update the same backtrace in the bug as well. I don't know about a reproducer; I hit the crash only once. Please let us know if anyone objects to merging this patch. Thanks, Mohit Agrawal On Mon, Oct 9, 2017 at 4:16 PM,

Re: [Gluster-devel] Query specific to getting crash

2017-10-09 Thread Mohit Agrawal
+ On Mon, Oct 9, 2017 at 11:33 AM, Mohit Agrawal wrote: > > On Mon, Oct 9, 2017 at 11:16 AM, Mohit Agrawal > wrote: > >> Hi All, >> >> >> For specific to this patch(https://review.gluster.org/#/c/18436/) i am >> getting crash in nfs(only once) for

Re: [Gluster-devel] brick multiplexing regression is broken

2017-10-06 Thread Mohit Agrawal
Without the patch the test case will fail; that is the expected behavior. Regards, Mohit Agrawal On Fri, Oct 6, 2017 at 11:04 AM, Ravishankar N wrote: > The test is failing on master without any patches: > [root@tuxpad glusterfs]# prove tests/bugs/bug-1371806_1.t > tests/bugs/bug-1371806_1

Re: [Gluster-devel] brick multiplexing regression is broken

2017-10-05 Thread Mohit Agrawal
Hi, Thanks for clarifying. I am already looking into it and will upload a new patch soon to resolve it. Regards, Mohit Agrawal On Fri, Oct 6, 2017 at 11:14 AM, Ravishankar N wrote: > > > On 10/06/2017 11:08 AM, Mohit Agrawal wrote: > > Without a patch test case will fail, it

Re: [Gluster-devel] Suggestion to Improve performance

2017-09-27 Thread Mohit Agrawal
Thanks, Kaleb, for your reply; I will try it and share the result. Regards, Mohit Agrawal On Wed, Sep 27, 2017 at 5:33 PM, Kaleb S. KEITHLEY wrote: > On 09/27/2017 04:17 AM, Mohit Agrawal wrote: > > Niels, > > > >Thanks for your reply, I think these built-in functio

Re: [Gluster-devel] Suggestion to Improve performance

2017-09-27 Thread Mohit Agrawal
using them should depend on BR2_TOOLCHAIN_GCC_AT_LEAST_4_7 and link with libatomic. For more details, please refer to http://lists.busybox.net/pipermail/buildroot/2016-January/150499.html Thanks, Mohit Agrawal On Wed, Sep 27, 2017 at 2:29 PM, Niels de Vos wrote: > On Wed, Sep 27, 2017 at 01:47:00PM

Re: [Gluster-devel] Suggestion to Improve performance

2017-09-27 Thread Mohit Agrawal
Niels, Thanks for your reply. I think these built-in functions are provided by gcc and should be supported on most architectures. In your view, which architectures might not support these built-in functions? Regards, Mohit Agrawal On Wed, Sep 27, 2017 at 1:20 PM, Niels de Vos

[Gluster-devel] Suggestion to Improve performance

2017-09-27 Thread Mohit Agrawal
Please share your input on this; it is appreciated. Regards, Mohit Agrawal

[Gluster-devel] Regards to highlight the change in behavior of metadata operation for directory

2017-07-20 Thread Mohit Agrawal
s the MDS of that directory, we will not allow the operation to run on the directory; otherwise dht winds the call to the next xlator. Regards, Mohit Agrawal

Re: [Gluster-devel] create restrictions xlator

2017-07-13 Thread Mohit Agrawal
Yes, we will fix it before 4.0 (3.12). On Thu, Jul 13, 2017 at 1:09 PM, Taehwa Lee wrote: > the issue is same as me almost. > > It looks better than my suggestion. > > > but, It is planned on Gluster 4.0, isn’t it? > > Can I follow and develop this issue for 3.10 and master? > > > I want to know

[Gluster-devel] Suggestion to optimize posix_getxattr call

2017-05-15 Thread Mohit Agrawal
iate your input. Regards, Mohit Agrawal

[Gluster-devel] Query regards to take decision about hashed_subvol to heal user xattr

2017-04-04 Thread Mohit Agrawal
fop_wind = i; } } Please share your input if there is any issue with this approach to deciding the hashed subvolume. Your inputs are appreciated. Regards, Mohit Agrawal

Re: [Gluster-devel] Gluster-devel Digest, Vol 31, Issue 61

2016-10-27 Thread Mohit Agrawal
working after disconnect. Regards Mohit Agrawal On Thu, Oct 27, 2016 at 8:30 AM, wrote: > Send Gluster-devel mailing list submissions to > gluster-devel@gluster.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://www.gluster.org/mailman/l

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-16 Thread Mohit Agrawal
xattr on the volume in case the same already exists; otherwise it will create a new xattr. Specific to ACL/SELinux: because I am updating only user xattrs, those will remain the same after the heal function is done. Regards, Mohit Agrawal On Fri, Sep 16, 2016 at 9:42 AM, Nithya Balachandran wrote: > > &

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-10 Thread Mohit Agrawal
Hi All, I have uploaded a new patch (http://review.gluster.org/#/c/15456/); please review it. Regards, Mohit Agrawal On Thu, Sep 8, 2016 at 12:02 PM, Mohit Agrawal wrote: > Hi All, > >I have one another solution to heal user xattr but before implement it > i would lik

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-07 Thread Mohit Agrawal
function (dht_dir_xattr_heal) will blindly copy all user xattrs to all subvolumes, or I can compare each subvol's xattrs with the valid xattrs and call syncop_setxattr only if there is a mismatch; otherwise there is no need to call syncop_setxattr. Let me know if this approach is suitable. Regards, Mohit Agrawal On

Re: [Gluster-devel] Query regards to heal xattr heal in dht

2016-09-07 Thread Mohit Agrawal
Hi Pranith, In the current approach I get the list of xattrs from the first up subvolume and update the user xattrs from that list on all other subvolumes. I have assumed the first up subvol is the source and the rest are sinks, as we do the same in dht_dir_attr_heal. Regards, Mohit Agrawal On Wed, Sep

[Gluster-devel] Query regards to heal xattr heal in dht

2016-09-07 Thread Mohit Agrawal
side 2) Compare the change time before calling the healing function in dht_revalidate_cbk. Please share your input on this; it is appreciated. Regards, Mohit Agrawal