I don't think the issue is on the Gluster side; it seems the issue is on the kernel
side (a possible deadlock in fuse_reverse_inval_entry):
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=bda9a71980e083699a0360963c0135657b73f47a
On Tue, May 2, 2023 at 5:48 PM Hu Bert wrote:
>
run the test case
(no_of_file and no_of_threads)
Thanks,
Mohit Agrawal
On Thu, Jan 20, 2022 at 12:24 PM 박현승 wrote:
> Dear Gluster developers,
>
> This is Hyunseung Park at Gluesys, South Korea.
>
> We are trying to replicate the test in
> https://github.com
It seems there is some issue on the perf machines; there is no code-level
change between this job and the previous one,
but the current job is showing a regression.
Thanks,
Mohit Agrawal
On Tue, Oct 19, 2021 at 2:11 AM Gluster-jenkins wrote:
> *Test details:*
> RPM Location: Upstream
> OS Version
fs/pull/2666
Thanks,
Mohit Agrawal
On Tue, Sep 21, 2021 at 2:42 PM Erik Jacobson wrote:
> Dear devel team -
>
> I botched the email address here. I type "hpcm-devel" like 30 times a
> day so I mistyped that. Sorry about that.
>
> Any advice appreciated and see
+Nikhil Ladha, can you resolve the same?
On Wed, Apr 28, 2021 at 12:10 PM Yaniv Kaul wrote:
> 2 new Coverity issues after yesterday's merge.
> Y.
>
> ---------- Forwarded message ---------
> From:
> Date: Wed, 28 Apr 2021, 8:57
> Subject: New Defects reported by Coverity Scan for gluster/glus
is around 600G (per-tar size is around 1G).
It would be good if we can run the same for a long duration, around 6000
times.
On Tue, Feb 25, 2020 at 1:31 PM Mohit Agrawal wrote:
> With these 2 changes, we are getting a good improvement in file creation
> and a slight improvement in
tar was taking almost 36-37 hours.
With the patch,
tar is taking almost 26 hours.
We were getting a similar kind of improvement with the smallfile tool also.
On Tue, Feb 25, 2020 at 1:29 PM Mohit Agrawal wrote:
> Hi,
> We observed performance is mainly hurt while .glusterfs has huge
> data. A
all's were also
helpful to improve performance.
Regards,
Mohit Agrawal
not consumed by
multiple threads at the same time, we still need to take a lock
in the dictionary.
Please share if I need to test something more to validate the same.
Regards,
Mohit Agrawal
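A minimal sketch of the locking discipline described above, using a toy
dictionary; the names simple_dict and dict_set are hypothetical, not the
GlusterFS dict API:

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Toy dict: every access takes the dict's own mutex, even when callers
 * believe only one thread touches the dict at a time. */
struct simple_dict {
    pthread_mutex_t lock;     /* serializes all readers and writers */
    char          **keys;
    char          **values;
    size_t          count;
};

int dict_set(struct simple_dict *d, const char *key, const char *value)
{
    int ret = -1;

    pthread_mutex_lock(&d->lock);     /* taken unconditionally */
    char **nk = realloc(d->keys, (d->count + 1) * sizeof(*nk));
    if (nk)
        d->keys = nk;
    char **nv = realloc(d->values, (d->count + 1) * sizeof(*nv));
    if (nv)
        d->values = nv;
    if (nk && nv) {
        d->keys[d->count] = strdup(key);
        d->values[d->count] = strdup(value);
        d->count++;
        ret = 0;
    }
    pthread_mutex_unlock(&d->lock);
    return ret;
}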
Hi,
Yes, you are right, it is not a default value.
We can assign the client_pid only when the volume has been mounted through the
glusterfs binary directly, like:
/usr/local/sbin/glusterfs --process-name fuse --volfile-server=192.168.1.3
--client-pid=-3 --volfile-id=/test /mnt1
Regards,
Mohit Agrawal
to expose client-pid as an argument to the fuse
process?
I think we need to resolve it. Please share your view on the same.
Thanks,
Mohit Agrawal
I am working on it.
On Mon, May 20, 2019 at 6:39 PM Deepshikha Khandelwal wrote:
> Any updates on this?
>
> It's failing a few of the regression runs.
>
> On Sat, May 18, 2019 at 6:27 PM Mohit Agrawal wrote:
>
>> Hi Rafi,
>>
>> I have not che
Hi Rafi,
I have not checked yet; on Monday I will check the same.
Thanks,
Mohit Agrawal
On Sat, May 18, 2019 at 3:56 PM RAFI KC wrote:
> All of these links have a common backtrace and suggest a crash from the socket
> layer in the SSL code path.
>
> Backtrace is
>
> Thread 1 (Thr
provide some other test to validate the same.
Thanks,
Mohit Agrawal
On Tue, Apr 30, 2019 at 2:29 PM Mohit Agrawal wrote:
> Thanks, Amar for sharing the patch, I will test and share the result.
>
> On Tue, Apr 30, 2019 at 2:23 PM Amar Tumballi Suryanarayan <
> atumb...@re
ry for optimization was better at that time.
>
> -Amar
>
> On Tue, Apr 30, 2019 at 12:02 PM Mohit Agrawal wrote:
>
>> sure Vijay, I will try and update.
>>
>> Regards,
>> Mohit Agrawal
>>
>> On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur wrot
Sure Vijay, I will try and update.
Regards,
Mohit Agrawal
On Tue, Apr 30, 2019 at 11:44 AM Vijay Bellur wrote:
> Hi Mohit,
>
> On Mon, Apr 29, 2019 at 7:15 AM Mohit Agrawal wrote:
>
>> Hi All,
>>
>> I was just looking at the code of dict, I have one query curre
).
Before optimizing the code, I just want to know the exact reason for
defining
hash_size as 1.
Please share your view on the same.
Thanks,
Mohit Agrawal
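For background, a toy illustration of what hash_size == 1 means for a
chained hash table: every key maps to bucket 0, so each lookup degenerates
into a linear walk of a single list. This is an illustrative sketch only,
not the actual dict code:

#include <stdint.h>
#include <string.h>

#define HASH_SIZE 1    /* the value being questioned above */

struct entry {
    char         *key;
    void         *value;
    struct entry *next;   /* chain within a bucket */
};

static struct entry *buckets[HASH_SIZE];

static uint32_t hash_str(const char *s)
{
    uint32_t h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

void *dict_lookup(const char *key)
{
    /* With HASH_SIZE == 1 this index is always 0, so every lookup
     * scans one long chain: O(n) per call. */
    struct entry *e = buckets[hash_str(key) % HASH_SIZE];
    for (; e; e = e->next)
        if (strcmp(e->key, key) == 0)
            return e->value;
    return NULL;
}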
glusterfs.spec.in.
I have posted a patch (https://review.gluster.org/#/c/glusterfs/+/22584/)
to resolve the same.
Thanks,
Mohit Agrawal
uld not be zero.
I have fixed the same in (
https://review.gluster.org/#/c/glusterfs/+/22549/) and uploaded a .t also.
Regards,
Mohit Agrawal
-1161311.t; done 30 times on a softserve VM that is
similar to the build infra; the test case is not taking more than 3
minutes, but on the build server the test case is getting timed out.
Kindly share your input if you are facing the same.
Thanks,
Mohit Agrawal
-patchy-server: 1486:
FXATTROP 2 (d91f6331-d394-479d-ab51-6bcf674ac3e0), client:
CTX_ID:b785c2b0-3453-4a03-b129-19e6ceeb5346-GRAPH_ID:0-PID:24147-HOST:softserve-moagrawa-test.1-PC_NAME:patchy-client-1-RECON_NO:-1,
error-xlator: patchy-posix [Bad file descriptor]
Thanks,
Mohit Agrawal
On Sat, Jan
) at ec-heald.c:311
#6 0x7f83add0367b in ec_shd_full_healer (data=0x7f83a8030b20) at
ec-heald.c:372
#7 0x7f83bb709e25 in start_thread () from /usr/lib64/libpthread.so.0
#8 0x7f83bafd634d in clone () from /usr/lib64/libc.so.6
Thread 1 (Thread 0x7f83bcdd1780 (LWP 25383)):
#0 0x00007f83bb70af57 in
I think we should consider regression builds after merging the patch (
https://review.gluster.org/#/c/21990/),
as we know this patch introduced some delay.
Thanks,
Mohit Agrawal
On Thu, Jan 10, 2019 at 3:55 PM Atin Mukherjee wrote:
> Mohit, Sanju - request you to investigate
call
this function (posix_fs_health_check)
between cancellation points, so ideally hostname and base_path should be freed
after calling this function (posix_fs_health_check), but here it seems
hostname/base_path are freed at
the time of calling gf_event.
Thanks,
Mohit Agrawal
On Wed, Nov 14, 2018 at 9:42
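One hedged way to avoid the race described above: keep thread cancellation
disabled while the heap strings are alive, so a pthread_cancel() cannot
unwind the stack and free hostname/base_path mid-use. The helpers
run_health_check() and emit_event() are hypothetical stand-ins, not the
posix xlator API:

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

extern int run_health_check(char **hostname, char **base_path);      /* hypothetical */
extern void emit_event(const char *hostname, const char *base_path); /* hypothetical */

void *health_check_thread(void *arg)
{
    (void)arg;
    for (;;) {
        int old;
        char *hostname = NULL, *base_path = NULL;

        /* No cancellation while we hold heap-allocated strings. */
        pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &old);
        if (run_health_check(&hostname, &base_path) == 0)
            emit_event(hostname, base_path);
        free(hostname);
        free(base_path);
        pthread_setcancelstate(old, NULL);

        sleep(30);    /* sleep() is a cancellation point */
    }
    return NULL;
}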
Filed a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1615003); I am not
able to extract logs
specific to this test case from the log dump.
Thanks
Mohit Agrawal
On Sat, Aug 11, 2018 at 9:27 AM, Atin Mukherjee wrote:
> https://build.gluster.org/job/line-coverage/455/consoleFull
>
I have posted a patch (https://review.gluster.org/#/c/20657/) and started a
brick-mux regression to validate the patch.
Thanks
Mohit Agrawal
On Wed, Aug 8, 2018 at 7:22 AM, Atin Mukherjee wrote:
> +Mohit
>
> Requesting Mohit for help.
>
> On Wed, 8 Aug 2018 at 06:53, Shyam Ranga
it seems on your
patch it is consistently reproducible.
For this build, to test your code you can mark it as a bad test and try
to run a regression. I will check how we can resolve the same.
Regards
Mohit Agrawal
TEST: 46 Y force_umount /mnt/nfs/0
[2018-02-21 05:06:15.297241] G_LOG:./tests/bugs/rpc/bug-921072.t: TEST: 57 mount_nfs localhost:/patchy /mnt/nfs/0 nolock
[2018-02-21 05:08:20.341869] G_LOG:./tests/bugs/rpc/bug-921072.t: TEST: 58 Y force_umount /mnt/
Hi Niels,
Thanks for your response. I will file a bug and will update the same
backtrace in the bug also.
I don't know of a reproducer; I was getting the crash only one time.
Please let us know if anyone has an objection to merging this patch.
Thanks
Mohit Agrawal
On Mon, Oct 9, 2017 at 4:16 PM,
On Mon, Oct 9, 2017 at 11:33 AM, Mohit Agrawal wrote:
>
> On Mon, Oct 9, 2017 at 11:16 AM, Mohit Agrawal
> wrote:
>
>> Hi All,
>>
>>
>> Specific to this patch (https://review.gluster.org/#/c/18436/) I am
>> getting a crash in nfs (only once) for
Without the patch the test case will fail; that is expected behavior.
Regards
Mohit Agrawal
On Fri, Oct 6, 2017 at 11:04 AM, Ravishankar N wrote:
> The test is failing on master without any patches:
> [root@tuxpad glusterfs]# prove tests/bugs/bug-1371806_1.t
> tests/bugs/bug-1371806_1
Hi,
Thanks for clarifying it. I am already looking into it; I will upload a new
patch soon to resolve the same.
Regards
Mohit Agrawal
On Fri, Oct 6, 2017 at 11:14 AM, Ravishankar N wrote:
>
>
> On 10/06/2017 11:08 AM, Mohit Agrawal wrote:
>
> Without a patch test case will fail, it
Thanks Kaleb for your reply; I will try it and share the result.
Regards
Mohit Agrawal
On Wed, Sep 27, 2017 at 5:33 PM, Kaleb S. KEITHLEY wrote:
> On 09/27/2017 04:17 AM, Mohit Agrawal wrote:
> > Niels,
> >
> >Thanks for your reply, I think these built-in functio
using them should
depend on BR2_TOOLCHAIN_GCC_AT_LEAST_4_7 and link with libatomic.
For more, please refer to this:
http://lists.busybox.net/pipermail/buildroot/2016-January/150499.html
Thanks
Mohit Agrawal
On Wed, Sep 27, 2017 at 2:29 PM, Niels de Vos wrote:
> On Wed, Sep 27, 2017 at 01:47:00PM
Niels,
Thanks for your reply. I think these built-in functions are provided by gcc,
and they should be supported on most architectures.
In your view, which architectures might not support these
built-in functions?
Regards
Mohit Agrawal
On Wed, Sep 27, 2017 at 1:20 PM, Niels de Vos
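A small example of the GCC __atomic builtins the thread appears to discuss
(available since GCC 4.7). On targets without native atomic instructions
for a given width, GCC emits calls into libatomic, hence the advice above
to link with -latomic:

#include <stdint.h>
#include <stdio.h>

static uint64_t counter;

int main(void)
{
    /* Atomic read-modify-write; returns the new value. */
    uint64_t v = __atomic_add_fetch(&counter, 1, __ATOMIC_SEQ_CST);

    /* Atomic load of the current value. */
    uint64_t cur = __atomic_load_n(&counter, __ATOMIC_SEQ_CST);

    printf("new=%llu cur=%llu\n",
           (unsigned long long)v, (unsigned long long)cur);
    return 0;
}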
Please share your input on this; I appreciate your input.
Regards
Mohit Agrawal
s the MDS of that
directory, we will not allow the operation to run on the directory; otherwise
DHT winds a call to the next xlator.
Regards
Mohit Agrawal
Yes, we will fix it before 4.0 (3.12).
On Thu, Jul 13, 2017 at 1:09 PM, Taehwa Lee wrote:
> the issue is almost the same as mine.
>
> It looks better than my suggestion.
>
>
> but it is planned for Gluster 4.0, isn't it?
>
> Can I follow and develop this issue for 3.10 and master?
>
>
> I want to know
iate your input.
Regards
Mohit Agrawal
fop_wind = i;
}
}
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Please share your input if there is any issue in this approach to deciding
the hashed_subvolume.
I appreciate your inputs.
Regards
Mohit Agrawal
working after disconnect.
Regards
Mohit Agrawal
On Thu, Oct 27, 2016 at 8:30 AM, wrote:
> Send Gluster-devel mailing list submissions to
> gluster-devel@gluster.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://www.gluster.org/mailman/l
xattr on the
volume in case the same already exists; otherwise it will create a new xattr.
Specific to ACL/SELinux: because I am updating only user xattrs, they
will remain the same after the heal function is done.
Regards
Mohit Agrawal
On Fri, Sep 16, 2016 at 9:42 AM, Nithya Balachandran wrote:
>
Hi All,
I have uploaded a new patch (http://review.gluster.org/#/c/15456/); please
do the code review.
Regards
Mohit Agrawal
On Thu, Sep 8, 2016 at 12:02 PM, Mohit Agrawal wrote:
> Hi All,
>
> I have another solution to heal user xattrs, but before implementing it
> I would lik
function (dht_dir_xattr_heal), it will blindly copy all user xattrs onto
all subvolumes, or I can compare each subvolume's xattrs with the valid
xattrs; if there is any mismatch, I will call syncop_setxattr, otherwise
there is no need to call syncop_setxattr.
Let me know if this approach is suitable.
Regards
Mohit Agrawal
On
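A rough sketch of the compare-then-set variant described above, assuming
one subvolume's xattrs are treated as authoritative; the helpers
get_user_xattrs, xattrs_equal, and set_user_xattrs are hypothetical
placeholders, not the dht/syncop API:

#include <stdbool.h>
#include <stddef.h>

struct xattr_set;   /* opaque bag of user xattrs */
struct subvol;      /* opaque subvolume handle */

extern struct xattr_set *get_user_xattrs(struct subvol *sv);   /* hypothetical */
extern bool xattrs_equal(const struct xattr_set *a,
                         const struct xattr_set *b);           /* hypothetical */
extern int set_user_xattrs(struct subvol *sv,
                           const struct xattr_set *src);       /* hypothetical */

/* Heal directory xattrs: treat `valid` (taken from the source subvol) as
 * authoritative and rewrite a sink only when it actually differs, instead
 * of blindly calling setxattr on every subvolume. */
int heal_dir_xattrs(struct subvol **subvols, size_t n,
                    const struct xattr_set *valid)
{
    int ret = 0;
    for (size_t i = 0; i < n; i++) {
        struct xattr_set *cur = get_user_xattrs(subvols[i]);
        if (!xattrs_equal(cur, valid)) {
            /* mismatch: this is where syncop_setxattr would be wound */
            if (set_user_xattrs(subvols[i], valid) != 0)
                ret = -1;
        }
    }
    return ret;
}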
Hi Pranith,
In the current approach I am getting the list of xattrs from the first up
subvolume and updating the user attributes from that xattr onto
all other subvolumes.
I have assumed the first up subvol is the source and the rest are sinks, as
we are doing the same in dht_dir_attr_heal.
Regards
Mohit Agrawal
On Wed, Sep
side
2) Compare the change time before calling the healing function in
dht_revalidate_cbk.
Please share your input on this; I appreciate your input.
Regards
Mohit Agrawal
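A minimal sketch of option 2 above, comparing the directory's recorded
change time before deciding whether to run the heal; the struct and helper
names are illustrative only:

#include <stdbool.h>
#include <time.h>

struct dir_state {
    struct timespec last_healed_ctime;  /* ctime recorded at the last heal */
};

/* Returns true when the on-disk ctime differs from the recorded one,
 * i.e. the directory changed and the xattr heal may be worth running. */
bool needs_xattr_heal(const struct dir_state *st,
                      const struct timespec *ctime_now)
{
    return ctime_now->tv_sec  != st->last_healed_ctime.tv_sec ||
           ctime_now->tv_nsec != st->last_healed_ctime.tv_nsec;
}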
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org
51 matches
Mail list logo