Re: [Gluster-users] Getting Permission Denied in version 6.4

2021-02-12 Thread ABHISHEK PALIWAL
Any pointers, please?

On Thu, Feb 11, 2021 at 6:20 PM ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> Could you please let me know the reason for the below error in the mount
> logs:
>
> [2021-02-02 20:20:11.918444] I [MSGID: 139001]
> [posix-acl.c:263:posix_acl_log_permit_denied] 0-posix-acl-autoload: client:
> -, gfid: 110cea12-48d9-48c3-bfe0-a6bb8f1bc623,
> req(uid:102,gid:0,perm:2,ngrps:1),
> ctx(uid:1003,gid:0,in-groups:1,perm:664,updated-fop:READDIRP,
> acl:(tag:1,perm:6,id:4294967295)(tag:4,perm:4,id:4294967295)(tag:8,perm:4,id:,in-groups:0)(tag:8,perm:4,id:1113,in-groups:0)(tag:8,perm:4,id:1116,in-groups:0)(tag:8,perm:6,id:1120,in-groups:0)(tag:16,perm:6,id:4294967295)(tag:32,perm:4,id:4294967295)*
> [Permission denied]*
> [2021-02-02 20:20:11.918519] W [fuse-bridge.c:1583:fuse_setattr_cbk]
> 0-glusterfs-fuse: 59475: SETATTR() /java/CXC1721558_R107B01.jar => -1
> *(Permission denied)*
>
> --
> Regards
> Abhishek Paliwal
>


-- 




Regards
Abhishek Paliwal




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[Gluster-users] Getting Permission Denied in version 6.4

2021-02-11 Thread ABHISHEK PALIWAL
Hi Team,

Could you please let me know the reason for the below error in the mount
logs:

[2021-02-02 20:20:11.918444] I [MSGID: 139001]
[posix-acl.c:263:posix_acl_log_permit_denied] 0-posix-acl-autoload: client:
-, gfid: 110cea12-48d9-48c3-bfe0-a6bb8f1bc623,
req(uid:102,gid:0,perm:2,ngrps:1),
ctx(uid:1003,gid:0,in-groups:1,perm:664,updated-fop:READDIRP,
acl:(tag:1,perm:6,id:4294967295)(tag:4,perm:4,id:4294967295)(tag:8,perm:4,id:,in-groups:0)(tag:8,perm:4,id:1113,in-groups:0)(tag:8,perm:4,id:1116,in-groups:0)(tag:8,perm:6,id:1120,in-groups:0)(tag:16,perm:6,id:4294967295)(tag:32,perm:4,id:4294967295)*
[Permission denied]*
[2021-02-02 20:20:11.918519] W [fuse-bridge.c:1583:fuse_setattr_cbk]
0-glusterfs-fuse: 59475: SETATTR() /java/CXC1721558_R107B01.jar => -1
*(Permission denied)*
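
(Reading the log entry above: req perm:2 is a write attempt by uid 102, while the file
context shows owner uid 1003, mode 664 and ACL entries whose matching permissions are
read-only, so the denial itself looks consistent with what is stored on disk. A quick
way to confirm what the brick really holds is sketched below; the brick path is an
assumption, and the gfid and file name are taken from the log lines above:)

# on a brick node: resolve the gfid to the backend file, then inspect owner/mode/ACL
find /path/to/brick -samefile /path/to/brick/.glusterfs/11/0c/110cea12-48d9-48c3-bfe0-a6bb8f1bc623
getfacl /path/to/brick/java/CXC1721558_R107B01.jar
stat -c '%u:%g %a %n' /path/to/brick/java/CXC1721558_R107B01.jar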

-- 
Regards
Abhishek Paliwal




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Memory leak in glusterfs

2019-06-06 Thread ABHISHEK PALIWAL
Hi Nithya,

We have a setup where a file is copied to, and then deleted from, the
gluster mount point in order to keep the latest file in place. We noticed
that this causes some memory increase in the glusterfsd process.

To find the memory leak we tried valgrind, but it didn't help.

That's why we contacted the glusterfs community.
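
For reference, gluster's own statedump is another way to watch glusterfsd allocations
over time, and is often easier to compare between runs than valgrind. A rough sketch
(the volume name gv0 is from the setup described below; the dump directory is the
default one and may differ):

gluster volume statedump gv0
# dumps are written on each brick node, by default under /var/run/gluster/
ls -l /var/run/gluster/*.dump.*
grep -A 6 mallinfo /var/run/gluster/*.dump.*   # compare successive dumps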

Regards,
Abhishek

On Thu, Jun 6, 2019, 16:08 Nithya Balachandran  wrote:

> Hi Abhishek,
>
> I am still not clear as to the purpose of the tests. Can you clarify why
> you are using valgrind and why you think there is a memory leak?
>
> Regards,
> Nithya
>
> On Thu, 6 Jun 2019 at 12:09, ABHISHEK PALIWAL 
> wrote:
>
>> Hi Nithya,
>>
>> Here is the Setup details and test which we are doing as below:
>>
>>
>> One client, two gluster Server.
>> The client writes and deletes one file every 15 minutes using the script
>> test_v4.15.sh.
>>
>> IP
>> Server side:
>> 128.224.98.157 /gluster/gv0/
>> 128.224.98.159 /gluster/gv0/
>>
>> Client side:
>> 128.224.98.160 /gluster_mount/
>>
>> Server side:
>> gluster volume create gv0 replica 2 128.224.98.157:/gluster/gv0/
>> 128.224.98.159:/gluster/gv0/ force
>> gluster volume start gv0
>>
>> root@128:/tmp/brick/gv0# gluster volume info
>>
>> Volume Name: gv0
>> Type: Replicate
>> Volume ID: 7105a475-5929-4d60-ba23-be57445d97b5
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: 128.224.98.157:/gluster/gv0
>> Brick2: 128.224.98.159:/gluster/gv0
>> Options Reconfigured:
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>>
>> exec script: ./ps_mem.py -p 605 -w 61 > log
>> root@128:/# ./ps_mem.py -p 605
>> Private + Shared = RAM used Program
>> 23668.0 KiB + 1188.0 KiB = 24856.0 KiB glusterfsd
>> -
>> 24856.0 KiB
>> =
>>
>>
>> Client side:
>> mount -t glusterfs -o acl -o resolve-gids 128.224.98.157:gv0
>> /gluster_mount
>>
>>
>> We are using the below script to write and delete the file.
>>
>> *test_v4.15.sh <http://test_v4.15.sh>*
>>
>> We also use the below script to watch the memory increase while the above
>> script is running in the background.
>>
>> *ps_mem.py*
>>
>> I am attaching the script files as well as the results obtained after
>> testing the scenario.
>>
>> On Wed, Jun 5, 2019 at 7:23 PM Nithya Balachandran 
>> wrote:
>>
>>> Hi,
>>>
>>> Writing to a volume should not affect glusterd. The stack you have shown
>>> in the valgrind output looks like memory used to initialise the structures
>>> glusterd uses, and it will be freed only when glusterd is stopped.
>>>
>>> Can you provide more details to what it is you are trying to test?
>>>
>>> Regards,
>>> Nithya
>>>
>>>
>>> On Tue, 4 Jun 2019 at 15:41, ABHISHEK PALIWAL 
>>> wrote:
>>>
>>>> Hi Team,
>>>>
>>>> Please respond on the issue which I raised.
>>>>
>>>> Regards,
>>>> Abhishek
>>>>
>>>> On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Anyone please reply
>>>>>
>>>>> On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
>>>>> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> I have uploaded some valgrind logs from my gluster 5.4 setup. This setup is
>>>>>> writing to the volume every 15 minutes. I stopped glusterd and then copied
>>>>>> the logs away.  The test was running for some simulated days. They are
>>>>>> zipped in valgrind-54.zip.
>>>>>>
>>>>>> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
>>>>>> glusterfs and even some definitely lost bytes.
>>>>>>
>>>>>> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record
>>>>>> 391 of 391
>>>>>> ==2737== at 0x4C29C25: calloc (in
>>>>>> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
>>>>>> ==2737== by 0xA22485E: ??? (in
>>>>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>>>>>> ==2737== by 0xA217C94: ??? (in
>>>>>> /usr/lib64/glusterfs/5.4/xlator/mgmt/

Re: [Gluster-users] Memory leak in glusterfs

2019-06-04 Thread ABHISHEK PALIWAL
Hi Team,

Please respond on the issue which I raised.

Regards,
Abhishek

On Fri, May 17, 2019 at 2:46 PM ABHISHEK PALIWAL 
wrote:

> Anyone please reply
>
> On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
> wrote:
>
>> Hi Team,
>>
>> I have uploaded some valgrind logs from my gluster 5.4 setup. This setup is
>> writing to the volume every 15 minutes. I stopped glusterd and then copied the
>> logs away.  The test was running for some simulated days. They are zipped in
>> valgrind-54.zip.
>>
>> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
>> glusterfs and even some definitely lost bytes.
>>
>> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record 391
>> of 391
>> ==2737== at 0x4C29C25: calloc (in
>> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
>> ==2737== by 0xA22485E: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA217C94: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21D9F8: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21DED9: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA21E685: ??? (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0xA1B9D8C: init (in
>> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
>> ==2737== by 0x4E511CE: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
>> /usr/lib64/libglusterfs.so.0.0.1)
>> ==2737== by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
>> ==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
>> ==2737==
>> ==2737== LEAK SUMMARY:
>> ==2737== definitely lost: 1,053 bytes in 10 blocks
>> ==2737== indirectly lost: 317 bytes in 3 blocks
>> ==2737== possibly lost: 2,374,971 bytes in 524 blocks
>> ==2737== still reachable: 53,277 bytes in 201 blocks
>> ==2737== suppressed: 0 bytes in 0 blocks
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>

-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Memory leak in glusterfs

2019-05-17 Thread ABHISHEK PALIWAL
Anyone please reply

On Thu, May 16, 2019, 10:49 ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> I have uploaded some valgrind logs from my gluster 5.4 setup. This setup is
> writing to the volume every 15 minutes. I stopped glusterd and then copied the
> logs away.  The test was running for some simulated days. They are zipped in
> valgrind-54.zip.
>
> Lots of info in valgrind-2730.log. Lots of possibly lost bytes in
> glusterfs and even some definitely lost bytes.
>
> ==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record 391
> of 391
> ==2737== at 0x4C29C25: calloc (in
> /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==2737== by 0xA22485E: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA217C94: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21D9F8: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21DED9: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA21E685: ??? (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0xA1B9D8C: init (in
> /usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
> ==2737== by 0x4E511CE: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
> /usr/lib64/libglusterfs.so.0.0.1)
> ==2737== by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
> ==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
> ==2737==
> ==2737== LEAK SUMMARY:
> ==2737== definitely lost: 1,053 bytes in 10 blocks
> ==2737== indirectly lost: 317 bytes in 3 blocks
> ==2737== possibly lost: 2,374,971 bytes in 524 blocks
> ==2737== still reachable: 53,277 bytes in 201 blocks
> ==2737== suppressed: 0 bytes in 0 blocks
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Memory leak in glusterfs process

2019-05-15 Thread ABHISHEK PALIWAL
Hi Team,

I have uploaded some valgrind logs from my gluster 5.4 setup. This setup is
writing to the volume every 15 minutes. I stopped glusterd and then copied the
logs away.  The test was running for some simulated days. They are zipped in
valgrind-54.zip.

Lots of info in valgrind-2730.log. Lots of possibly lost bytes in glusterfs
and even some definitely lost bytes.

==2737== 1,572,880 bytes in 1 blocks are possibly lost in loss record 391
of 391
==2737== at 0x4C29C25: calloc (in
/usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==2737== by 0xA22485E: ??? (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA217C94: ??? (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21D9F8: ??? (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21DED9: ??? (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA21E685: ??? (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0xA1B9D8C: init (in
/usr/lib64/glusterfs/5.4/xlator/mgmt/glusterd.so)
==2737== by 0x4E511CE: xlator_init (in /usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x4E8A2B8: ??? (in /usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x4E8AAB3: glusterfs_graph_activate (in
/usr/lib64/libglusterfs.so.0.0.1)
==2737== by 0x409C35: glusterfs_process_volfp (in /usr/sbin/glusterfsd)
==2737== by 0x409D99: glusterfs_volumes_init (in /usr/sbin/glusterfsd)
==2737==
==2737== LEAK SUMMARY:
==2737== definitely lost: 1,053 bytes in 10 blocks
==2737== indirectly lost: 317 bytes in 3 blocks
==2737== possibly lost: 2,374,971 bytes in 524 blocks
==2737== still reachable: 53,277 bytes in 201 blocks
==2737== suppressed: 0 bytes in 0 blocks
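
For reference, a summary like the above is typically produced with an invocation along
these lines (a sketch; the exact options and paths used for these logs are assumptions):

valgrind --leak-check=full --show-leak-kinds=all \
         --log-file=/tmp/valgrind-%p.log /usr/sbin/glusterd -N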
-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-13 Thread ABHISHEK PALIWAL
Are you sure it's '--with-tirpc'? As with it I am getting:

WARNING: QA Issue: glusterfs: configure was passed unrecognised options:
--with-tirpc [unknown-configure-option]

I also tried '--with-libtirpc', but the result was the same.
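
One way to check which spelling this source tree actually accepts is to ask configure
itself (a quick sketch):

./configure --help | grep -i tirpc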

Regards,
Abhishek

On Wed, Mar 13, 2019 at 11:56 AM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> We recommend using 'tirpc' in the later releases. Use '--with-tirpc'
> while running ./configure.
>
> On Wed, Mar 13, 2019 at 10:55 AM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Amar,
>>
>> this problem seems to be a configuration issue related to librpc.
>>
>> Could you please let me know what configuration I should use?
>>
>> Regards,
>> Abhishek
>>
>> On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> logs for libgfrpc.so
>>>
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
>>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
>>> not a dynamic executable
>>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
>>> not a dynamic executable
>>>
>>>
>>> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Here are the logs:
>>>>
>>>>
>>>> pabhishe@arn-build3$ldd
>>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>>>> not a dynamic executable
>>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>>>> not a dynamic executable
>>>> pabhishe@arn-build3$ldd
>>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>>>> not a dynamic executable
>>>>
>>>>
>>>> For backtraces I have attached the core_logs.txt file.
>>>>
>>>> Regards,
>>>> Abhishek
>>>>
>>>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>>>> atumb...@redhat.com> wrote:
>>>>
>>>>> Hi Abhishek,
>>>>>
>>>>> Few more questions,
>>>>>
>>>>>
>>>>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Amar,
>>>>>>>
>>>>>>> Below are the requested logs
>>>>>>>
>>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>>>>>>> not a dynamic executable
>>>>>>>
>>>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>>>>>>> not a dynamic executable
>>>>>>>
>>>>>>>
>>>>> Can you please add a * at the end, so it gets the linked library list
>>>>> from the actual files (ideally this is a symlink, but I expected it to
>>>>> resolve like in Fedora).
>>>>>
>>>>>
>>>>>
>>>>>> root@128:/# gdb /usr/sbin/glusterd core.1099
>>>>>>> GNU gdb (GDB) 7.10.1
>>>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>>>> License GPLv3+: GNU GPL version 3 or later <
>>>>>>> http://gnu.org/licenses/gpl.html>
>>>>>>> This is free software: you are free to change and redistribute it.
>>>>>>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>>>>>>> copying"
>>>>>>> and "show warranty" for details.
>>>>>>> This GDB was configured as "powerpc64-wrs-linux".
>>>>>>> Type "show configuration" for configuration details.
>>>>>>> For bug reporting instructions, please see:
>>>>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>>>>> Find the GDB manual and other documentation resources online at:
>>>>>>> <http://www.gnu.org/software/gdb/documentation/>.
>>>>>>> For help, type "help".
>>>>>>> Type "apropos word" to search for commands related to "word"...
>>>>>>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>>>>>>> found)...done.
>>>>>>> [New LWP 1109]
>&

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar,

this problem seems to be a configuration issue related to librpc.

Could you please let me know what configuration I should use?

Regards,
Abhishek

On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL 
wrote:

> logs for libgfrpc.so
>
> pabhishe@arn-build3$ldd
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
> not a dynamic executable
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
> not a dynamic executable
>
>
> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL 
> wrote:
>
>> Here are the logs:
>>
>>
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>> not a dynamic executable
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>> not a dynamic executable
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>> not a dynamic executable
>>
>>
>> For backtraces I have attached the core_logs.txt file.
>>
>> Regards,
>> Abhishek
>>
>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>>> Few more questions,
>>>
>>>
>>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Hi Amar,
>>>>>
>>>>> Below are the requested logs
>>>>>
>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>>>>> not a dynamic executable
>>>>>
>>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>>>>> not a dynamic executable
>>>>>
>>>>>
>>> Can you please add a * at the end, so it gets the linked library list
>>> from the actual files (ideally this is a symlink, but I expected it to
>>> resolve like in Fedora).
>>>
>>>
>>>
>>>> root@128:/# gdb /usr/sbin/glusterd core.1099
>>>>> GNU gdb (GDB) 7.10.1
>>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>>> License GPLv3+: GNU GPL version 3 or later <
>>>>> http://gnu.org/licenses/gpl.html>
>>>>> This is free software: you are free to change and redistribute it.
>>>>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>>>>> copying"
>>>>> and "show warranty" for details.
>>>>> This GDB was configured as "powerpc64-wrs-linux".
>>>>> Type "show configuration" for configuration details.
>>>>> For bug reporting instructions, please see:
>>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>>> Find the GDB manual and other documentation resources online at:
>>>>> <http://www.gnu.org/software/gdb/documentation/>.
>>>>> For help, type "help".
>>>>> Type "apropos word" to search for commands related to "word"...
>>>>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>>>>> found)...done.
>>>>> [New LWP 1109]
>>>>> [New LWP 1101]
>>>>> [New LWP 1105]
>>>>> [New LWP 1110]
>>>>> [New LWP 1099]
>>>>> [New LWP 1107]
>>>>> [New LWP 1119]
>>>>> [New LWP 1103]
>>>>> [New LWP 1112]
>>>>> [New LWP 1116]
>>>>> [New LWP 1104]
>>>>> [New LWP 1239]
>>>>> [New LWP 1106]
>>>>> [New LWP ]
>>>>> [New LWP 1108]
>>>>> [New LWP 1117]
>>>>> [New LWP 1102]
>>>>> [New LWP 1118]
>>>>> [New LWP 1100]
>>>>> [New LWP 1114]
>>>>> [New LWP 1113]
>>>>> [New LWP 1115]
>>>>>
>>>>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>>>>> Do you need "set solib-search-path" or "set sysroot"?
>>>>> [Thread debugging using libthread_db enabled]
>>>>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>>>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>>>>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>>>>> Program terminated with signal SIGSEGV, Segmentation fault.
&g

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
logs for libgfrpc.so

pabhishe@arn-build3$ldd
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
not a dynamic executable
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
not a dynamic executable
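
Note: ldd on the build host cannot run cross-compiled powerpc64 libraries, which is why
it reports "not a dynamic executable"; the linked-library list can still be read
statically with readelf (a sketch, using the same paths as above):

readelf -d ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1 | grep NEEDED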


On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL 
wrote:

> Here are the logs:
>
>
> pabhishe@arn-build3$ldd
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
> not a dynamic executable
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
> not a dynamic executable
> pabhishe@arn-build3$ldd
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
> not a dynamic executable
>
>
> For backtraces I have attached the core_logs.txt file.
>
> Regards,
> Abhishek
>
> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> Few more questions,
>>
>>
>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi Amar,
>>>>
>>>> Below are the requested logs
>>>>
>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>>>> not a dynamic executable
>>>>
>>>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>>>> not a dynamic executable
>>>>
>>>>
>> Can you please add a * at the end, so it gets the linked library list
>> from the actual files (ideally this is a symlink, but I expected it to
>> resolve like in Fedora).
>>
>>
>>
>>> root@128:/# gdb /usr/sbin/glusterd core.1099
>>>> GNU gdb (GDB) 7.10.1
>>>> Copyright (C) 2015 Free Software Foundation, Inc.
>>>> License GPLv3+: GNU GPL version 3 or later <
>>>> http://gnu.org/licenses/gpl.html>
>>>> This is free software: you are free to change and redistribute it.
>>>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>>>> copying"
>>>> and "show warranty" for details.
>>>> This GDB was configured as "powerpc64-wrs-linux".
>>>> Type "show configuration" for configuration details.
>>>> For bug reporting instructions, please see:
>>>> <http://www.gnu.org/software/gdb/bugs/>.
>>>> Find the GDB manual and other documentation resources online at:
>>>> <http://www.gnu.org/software/gdb/documentation/>.
>>>> For help, type "help".
>>>> Type "apropos word" to search for commands related to "word"...
>>>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>>>> found)...done.
>>>> [New LWP 1109]
>>>> [New LWP 1101]
>>>> [New LWP 1105]
>>>> [New LWP 1110]
>>>> [New LWP 1099]
>>>> [New LWP 1107]
>>>> [New LWP 1119]
>>>> [New LWP 1103]
>>>> [New LWP 1112]
>>>> [New LWP 1116]
>>>> [New LWP 1104]
>>>> [New LWP 1239]
>>>> [New LWP 1106]
>>>> [New LWP ]
>>>> [New LWP 1108]
>>>> [New LWP 1117]
>>>> [New LWP 1102]
>>>> [New LWP 1118]
>>>> [New LWP 1100]
>>>> [New LWP 1114]
>>>> [New LWP 1113]
>>>> [New LWP 1115]
>>>>
>>>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>>>> Do you need "set solib-search-path" or "set sysroot"?
>>>> [Thread debugging using libthread_db enabled]
>>>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>>>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>>> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
>>>> bytes=bytes@entry=36) at malloc.c:3327
>>>> 3327 {
>>>> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
>>>> (gdb) bt full
>>>>
>>>
>> This is the backtrace of one particular thread. I need the output of the command
>>
>> (gdb) thread apply all bt full
>>
>>
>> Also, considering this is a crash in the malloc library call itself, I
>> would like to know the details of the OS, kernel version and gcc versions.
>>
>> Regards,
>> Amar
>>
>> #0  0x

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar,

did you get time to check the logs?

Regards,
Abhishek

On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL 
wrote:

> Hi Amar,
>
> Below are the requested logs
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
> not a dynamic executable
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
> not a dynamic executable
>
> root@128:/# gdb /usr/sbin/glusterd core.1099
> GNU gdb (GDB) 7.10.1
> Copyright (C) 2015 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "powerpc64-wrs-linux".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
> found)...done.
> [New LWP 1109]
> [New LWP 1101]
> [New LWP 1105]
> [New LWP 1110]
> [New LWP 1099]
> [New LWP 1107]
> [New LWP 1119]
> [New LWP 1103]
> [New LWP 1112]
> [New LWP 1116]
> [New LWP 1104]
> [New LWP 1239]
> [New LWP 1106]
> [New LWP ]
> [New LWP 1108]
> [New LWP 1117]
> [New LWP 1102]
> [New LWP 1118]
> [New LWP 1100]
> [New LWP 1114]
> [New LWP 1113]
> [New LWP 1115]
>
> warning: Could not load shared library symbols for linux-vdso64.so.1.
> Do you need "set solib-search-path" or "set sysroot"?
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140 --volfile-id
> gv0.128.224.95.140.tmp-bric'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> 3327 {
> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
> (gdb) bt full
> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> nb = 
> idx = 
> bin = 
> victim = 
> size = 
> victim_index = 
> remainder = 
> remainder_size = 
> block = 
> bit = 
> map = 
> fwd = 
> bck = 
> errstr = 0x0
> __func__ = "_int_malloc"
> #1  0x3fffb76a93dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
> ar_ptr = 0x3fffa820
> victim = 
> hook = 
> __func__ = "__libc_malloc"
> #2  0x3fffb7764fd0 in x_inline (xdrs=0x3fffb1686d20, len= out>) at xdr_sizeof.c:89
> len = 36
> xdrs = 0x3fffb1686d20
> #3  0x3fffb7842488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #4  0x3fffb7842e84 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #5  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa81099f0, size=, proc=) at
> xdr_ref.c:84
> loc = 0x3fffa8109aa0 "\265\256\373\200\f\206\361j"
> stat = 
> #6  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
> objpp=0x3fffa81099f0, obj_size=,
> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> more_data = 1
> #7  0x3fffb7842ec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #8  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa8109870, size=, proc=) at
> xdr_ref.c:84
> loc = 0x3fffa8109920 "\232\373\377\315\352\325\005\271"
> stat = 
> #9  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
> objpp=0x3fffa8109870, obj_size=,
> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> more_data = 1
> #10 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #11 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa81096f0, size=, proc=) at
> xdr_ref.c:84
> loc = 0x3fffa81097a0 "\241X\372!\216\256=\342"
>

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-11 Thread ABHISHEK PALIWAL
5
more_data = 1
#16 0x3fffb7842ec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
No symbol table info available.
#17 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
pp=0x3fffa81093f0, size=, proc=) at
xdr_ref.c:84
loc = 0x3fffa81094a0 "\200L\027F'\177\366D"
stat = 
#18 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
objpp=0x3fffa81093f0, obj_size=,
xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
more_data = 1
#19 0x3fffb7842ec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
No symbol table info available.
#20 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
pp=0x3fffa8109270, size=, proc=) at
xdr_ref.c:84
loc = 0x3fffa8109320 "\217{dK(\001E\220"
stat = 
#21 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
objpp=0x3fffa8109270, obj_size=,
xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
more_data = 1
#22 0x3fffb7842ec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
No symbol table info available.
#23 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
pp=0x3fffa81090f0, size=, proc=) at
xdr_ref.c:84
loc = 0x3fffa81091a0 "\217\275\067\336\232\300(\005"
stat = 
#24 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
objpp=0x3fffa81090f0, obj_size=,
xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
more_data = 1
#25 0x3fffb7842ec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
No symbol table info available.
#26 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
pp=0x3fffa8108f70, size=, proc=) at
xdr_ref.c:84
loc = 0x3fffa8109020 "\260.\025\b\244\352IT"
stat = 
#27 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
objpp=0x3fffa8108f70, obj_size=,
xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
more_data = 1
#28 0x3fffb7842ec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
No symbol table info available.
#29 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
pp=0x3fffa8108df0, size=, proc=) at
xdr_ref.c:84
loc = 0x3fffa8108ea0 "\212GS\203l\035\n\\"
---Type  to continue, or q  to quit---


Regards,
Abhishek

On Mon, Mar 11, 2019 at 7:10 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> Hi Abhishek,
>
> Can you check and get back to us?
>
> ```
> bash# ldd /usr/lib64/libglusterfs.so
> bash# ldd /usr/lib64/libgfrpc.so
>
> ```
>
> Also considering you have the core, can you do `(gdb) thr apply all bt
> full`  and pass it on?
>
> Thanks & Regards,
> Amar
>
> On Mon, Mar 11, 2019 at 3:41 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Team,
>>
>> Could you please provide some pointers to debug it further.
>>
>> Regards,
>> Abhishek
>>
>> On Fri, Mar 8, 2019 at 2:19 PM ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi Team,
>>>
>>> I am using Glusterfs 5.4. After setting up the gluster mount point, when
>>> trying to access it, glusterfsd crashes and the mount point throws the
>>> "Transport endpoint is not connected" error.
>>>
>>> Here are the gdb logs for the core file:
>>>
>>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>>> Do you need "set solib-search-path" or "set sysroot"?
>>> [Thread debugging using libthread_db enabled]
>>> Using host libthread_db library "/lib64/libthread_db.so.1".
>>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>>> Program terminated with signal SIGSEGV, Segmentation fault.
>>> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
>>> bytes=bytes@entry=36) at malloc.c:3327
>>> 3327 {
>>> [Current thread is 1 (Thread 0x3fff90394160 (LWP 811))]
>>> (gdb)
>>> (gdb)
>>> (gdb) bt
>>> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
>>> bytes=bytes@entry=36) at malloc.c:3327
>>> #1  0x3fff95ab43dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
>>> #2  0x3fff95b6ffd0 in x_inline (xdrs=0x3fff90391d20, len=>> out>) at xdr_sizeof.c:89
>>> #3  0x3fff95c4d488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
>>> #4  0x3fff95c4de84 in .xdr_gfx_dirplist () from
>>> /usr/lib64/libgfxdr.so.0
>>> #5  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
>>> pp=0x3fff7c132020, size=, proc=) at
>>> xdr_ref.c:84
>>> #6  0x3fff95b6f

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-11 Thread ABHISHEK PALIWAL
Hi Team,

Could you please provide some pointers to debug it further.

Regards,
Abhishek

On Fri, Mar 8, 2019 at 2:19 PM ABHISHEK PALIWAL 
wrote:

> Hi Team,
>
> I am using Glusterfs 5.4. After setting up the gluster mount point, when
> trying to access it, glusterfsd crashes and the mount point throws the
> "Transport endpoint is not connected" error.
>
> Here are the gdb logs for the core file:
>
> warning: Could not load shared library symbols for linux-vdso64.so.1.
> Do you need "set solib-search-path" or "set sysroot"?
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140 --volfile-id
> gv0.128.224.95.140.tmp-bric'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
> bytes=bytes@entry=36) at malloc.c:3327
> 3327 {
> [Current thread is 1 (Thread 0x3fff90394160 (LWP 811))]
> (gdb)
> (gdb)
> (gdb) bt
> #0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
> bytes=bytes@entry=36) at malloc.c:3327
> #1  0x3fff95ab43dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
> #2  0x3fff95b6ffd0 in x_inline (xdrs=0x3fff90391d20, len= out>) at xdr_sizeof.c:89
> #3  0x3fff95c4d488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
> #4  0x3fff95c4de84 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #5  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c132020, size=, proc=) at
> xdr_ref.c:84
> #6  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c132020, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #7  0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #8  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c131ea0, size=, proc=) at
> xdr_ref.c:84
> #9  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c131ea0, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #10 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #11 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c131d20, size=, proc=) at
> xdr_ref.c:84
> #12 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c131d20, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #13 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #14 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c131ba0, size=, proc=) at
> xdr_ref.c:84
> #15 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c131ba0, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #16 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #17 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c131a20, size=, proc=) at
> xdr_ref.c:84
> #18 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c131a20, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #19 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #20 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c1318a0, size=, proc=) at
> xdr_ref.c:84
> #21 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c1318a0, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #22 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #23 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c131720, size=, proc=) at
> xdr_ref.c:84
> #24 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c131720, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #25 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #26 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
> pp=0x3fff7c1315a0, size=, proc=) at
> xdr_ref.c:84
> #27 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
> objpp=0x3fff7c1315a0, obj_size=,
> xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> #28 0x3fff95c4dec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> #29 0x3fff95b6fc2

[Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-08 Thread ABHISHEK PALIWAL
Hi Team,

I am using Glusterfs 5.4. After setting up the gluster mount point, when
trying to access it, glusterfsd crashes and the mount point throws the
"Transport endpoint is not connected" error.

Here are the gdb logs for the core file:

warning: Could not load shared library symbols for linux-vdso64.so.1.
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140 --volfile-id
gv0.128.224.95.140.tmp-bric'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
bytes=bytes@entry=36) at malloc.c:3327
3327 {
[Current thread is 1 (Thread 0x3fff90394160 (LWP 811))]
(gdb)
(gdb)
(gdb) bt
#0  0x3fff95ab1d48 in _int_malloc (av=av@entry=0x3fff7c20,
bytes=bytes@entry=36) at malloc.c:3327
#1  0x3fff95ab43dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
#2  0x3fff95b6ffd0 in x_inline (xdrs=0x3fff90391d20, len=) at xdr_sizeof.c:89
#3  0x3fff95c4d488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
#4  0x3fff95c4de84 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#5  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c132020, size=, proc=) at
xdr_ref.c:84
#6  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c132020, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#7  0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#8  0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131ea0, size=, proc=) at
xdr_ref.c:84
#9  0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131ea0, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#10 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#11 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131d20, size=, proc=) at
xdr_ref.c:84
#12 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131d20, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#13 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#14 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131ba0, size=, proc=) at
xdr_ref.c:84
#15 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131ba0, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#16 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#17 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131a20, size=, proc=) at
xdr_ref.c:84
#18 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131a20, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#19 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#20 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c1318a0, size=, proc=) at
xdr_ref.c:84
#21 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c1318a0, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#22 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#23 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131720, size=, proc=) at
xdr_ref.c:84
#24 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131720, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#25 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#26 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c1315a0, size=, proc=) at
xdr_ref.c:84
#27 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c1315a0, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#28 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#29 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c131420, size=, proc=) at
xdr_ref.c:84
#30 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c131420, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xdr_ref.c:135
#31 0x3fff95c4dec0 in .xdr_gfx_dirplist () from /usr/lib64/libgfxdr.so.0
#32 0x3fff95b6fc28 in __GI_xdr_reference (xdrs=0x3fff90391d20,
pp=0x3fff7c1312a0, size=, proc=) at
xdr_ref.c:84
#33 0x3fff95b6fe04 in __GI_xdr_pointer (xdrs=0x3fff90391d20,
objpp=0x3fff7c1312a0, obj_size=,
xdr_obj=@0x3fff95c6a4b0: 0x3fff95c4ddc0 <.xdr_gfx_dirplist>) at
xd

Re: [Gluster-users] Not able to start glusterd

2019-03-06 Thread ABHISHEK PALIWAL
Hi Sanju,

Thanks for the response.

I have resolved the issue. I had updated from 3.7.6 to 5.0, and in the
new version RPC comes from libtirpc, but I forgot to enable
"--with-libtirpc" in the configuration.

After enabling it, I was able to start glusterd.
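
For anyone hitting the same thing, the fix amounted to something like this (a sketch;
any other options from our original configuration are omitted here):

./configure --with-libtirpc
make && make install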

Regards,
Abhishek

On Wed, Mar 6, 2019 at 12:58 PM Sanju Rakonde  wrote:

> Abhishek,
>
> We need the below information to investigate this issue.
> 1. gluster --version
> 2. Please run glusterd in gdb, so that we can capture the backtrace. I see
> some rpc errors in the log, but a backtrace will be more helpful.
> To run glusterd in gdb, you need to start glusterd in gdb (i.e. gdb
> glusterd, and then give the command "run -N"). When you see a segmentation
> fault, please capture the backtrace and paste it here.
>
> On Wed, Mar 6, 2019 at 10:07 AM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Team,
>>
>> I am facing an issue where a segmentation fault is reported at the time of
>> starting glusterd.
>>
>> Below are the logs
>>
>> root@128:/usr/sbin# ./glusterd  --debug
>> [1970-01-01 15:19:43.940386] I [MSGID: 100030] [glusterfsd.c:2691:main]
>> 0-./glusterd: Started running ./glusterd version 5.0 (args: ./glusterd
>> --debug)
>> [1970-01-01 15:19:43.940855] D
>> [logging.c:1833:__gf_log_inject_timer_event] 0-logging-infra: Starting
>> timer now. Timeout = 120, current buf size = 5
>> [1970-01-01 15:19:43.941736] D [MSGID: 0] [glusterfsd.c:747:get_volfp]
>> 0-glusterfsd: loading volume file /etc/glusterfs/glusterd.vol
>> [1970-01-01 15:19:43.945796] D [MSGID: 101097]
>> [xlator.c:341:xlator_dynload_newway] 0-xlator: dlsym(xlator_api) on
>> /usr/lib64/glusterfs/5.0/xlator/mgmt/glusterd.so: undefined symbol:
>> xlator_api. Fall back to old symbols
>> [1970-01-01 15:19:43.946279] I [MSGID: 106478] [glusterd.c:1435:init]
>> 0-management: Maximum allowed open file descriptors set to 65536
>> [1970-01-01 15:19:43.946419] I [MSGID: 106479] [glusterd.c:1491:init]
>> 0-management: Using /var/lib/glusterd as working directory
>> [1970-01-01 15:19:43.946515] I [MSGID: 106479] [glusterd.c:1497:init]
>> 0-management: Using /var/run/gluster as pid file working directory
>> [1970-01-01 15:19:43.946968] D [MSGID: 0]
>> [glusterd.c:458:glusterd_rpcsvc_options_build] 0-glusterd: listen-backlog
>> value: 10
>> [1970-01-01 15:19:43.947139] D [rpcsvc.c:2607:rpcsvc_init] 0-rpc-service:
>> RPC service inited.
>> [1970-01-01 15:19:43.947241] D [rpcsvc.c:2146:rpcsvc_program_register]
>> 0-rpc-service: New program registered: GF-DUMP, Num: 123451501, Ver: 1,
>> Port: 0
>> [1970-01-01 15:19:43.947379] D [rpc-transport.c:269:rpc_transport_load]
>> 0-rpc-transport: attempt to load file
>> /usr/lib64/glusterfs/5.0/rpc-transport/socket.so
>> [1970-01-01 15:19:43.955198] D [socket.c:4464:socket_init]
>> 0-socket.management: Configued transport.tcp-user-timeout=0
>> [1970-01-01 15:19:43.955316] D [socket.c:4482:socket_init]
>> 0-socket.management: Reconfigued transport.keepalivecnt=9
>> [1970-01-01 15:19:43.955415] D
>> [socket.c:4167:ssl_setup_connection_params] 0-socket.management: SSL
>> support on the I/O path is NOT enabled
>> [1970-01-01 15:19:43.955504] D
>> [socket.c:4170:ssl_setup_connection_params] 0-socket.management: SSL
>> support for glusterd is NOT enabled
>> [1970-01-01 15:19:43.955612] D [name.c:572:server_fill_address_family]
>> 0-socket.management: option address-family not specified, defaulting to
>> inet6
>> [1970-01-01 15:19:43.955928] D [rpc-transport.c:269:rpc_transport_load]
>> 0-rpc-transport: attempt to load file
>> /usr/lib64/glusterfs/5.0/rpc-transport/rdma.so
>> [1970-01-01 15:19:43.956079] E [rpc-transport.c:273:rpc_transport_load]
>> 0-rpc-transport: /usr/lib64/glusterfs/5.0/rpc-transport/rdma.so: cannot
>> open shared object file: No such file or directory
>> [1970-01-01 15:19:43.956177] W [rpc-transport.c:277:rpc_transport_load]
>> 0-rpc-transport: volume 'rdma.management': transport-type 'rdma' is not
>> valid or not found on this machine
>> [1970-01-01 15:19:43.956270] W [rpcsvc.c:1789:rpcsvc_create_listener]
>> 0-rpc-service: cannot create listener, initing the transport failed
>> [1970-01-01 15:19:43.956362] E [MSGID: 106244] [glusterd.c:1798:init]
>> 0-management: creation of 1 listeners failed, continuing with succeeded
>> transport
>> [1970-01-01 15:19:43.956459] D [rpcsvc.c:2146:rpcsvc_program_register]
>> 0-rpc-service: New program registered: GlusterD svc peer, Num: 1238437,
>> Ver: 2, Port: 0
>> [1970-01-01 15:19:43.956561] D [rpcsvc.c:2146:rpcsvc_program_register]
>> 0-rpc-service:

[Gluster-users] Not able to start glusterd

2019-03-05 Thread ABHISHEK PALIWAL
 reduced. About to flush 5 extra log
messages
[1970-01-01 15:19:44.841156] D [logging.c:1808:gf_log_flush_extra_msgs]
0-logging-infra: Just flushed 5 extra log messages
pending frames:
patchset: git://git.gluster.org/glusterfs.git
signal received: 11
time of crash:
1970-01-01 15:19:44
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 5.0
/usr/lib64/libglusterfs.so.0(+0x422a4)[0x3fffa12ab2a4]
/usr/lib64/libglusterfs.so.0(gf_print_trace-0xf5080)[0x3fffa12b82e0]
./glusterd(glusterfsd_print_trace-0x22fa4)[0x100067ec]
linux-vdso64.so.1(__kernel_sigtramp_rt64+0x0)[0x3fffa13f0478]
/lib64/libc.so.6(xdr_accepted_reply-0x72d3c)[0x3fffa11375cc]
/lib64/libc.so.6(xdr_accepted_reply-0x72d9c)[0x3fffa113756c]
/lib64/libc.so.6(xdr_union-0x63a94)[0x3fffa1147dd4]
/lib64/libc.so.6(xdr_replymsg-0x72c58)[0x3fffa11376e0]
/lib64/libc.so.6(xdr_sizeof-0x62a78)[0x3fffa1149120]
/usr/lib64/libgfrpc.so.0(+0x9b0c)[0x3fffa124fb0c]
/usr/lib64/libgfrpc.so.0(rpcsvc_submit_generic-0x149f4)[0x3fffa125228c]
/usr/lib64/libgfrpc.so.0(+0xc614)[0x3fffa1252614]
/usr/lib64/libgfrpc.so.0(+0xcf00)[0x3fffa1252f00]
/usr/lib64/libgfrpc.so.0(+0xd224)[0x3fffa1253224]
/usr/lib64/libgfrpc.so.0(+0xd84c)[0x3fffa125384c]
/usr/lib64/libgfrpc.so.0(rpc_transport_notify-0x10eec)[0x3fffa125610c]
/usr/lib64/glusterfs/5.0/rpc-transport/socket.so(+0xc09c)[0x3fff9d51709c]
/usr/lib64/libglusterfs.so.0(+0xb84bc)[0x3fffa13214bc]
/lib64/libpthread.so.0(+0xbb30)[0x3fffa11bdb30]
/lib64/libc.so.6(clone-0x9e964)[0x3fffa110817c]
-
Segmentation fault (core dumped)

Could you please help me understand what the actual problem is?


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread ABHISHEK PALIWAL
I am trying to build Gluster 5.4, but I am getting the below error at
configure time:

conftest.c:11:28: fatal error: ac_nonexistent.h: No such file or directory

Could you please help me understand the reason for the above error.
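
(For context: the ac_nonexistent.h include is generated by autoconf's own probes, so
the real cause is usually visible around that point in config.log; in cross builds it
often points at the preprocessor/sysroot setup rather than at glusterfs itself. A quick
sketch of where to look:)

grep -n -B 20 'ac_nonexistent' config.log | less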

Regards,
Abhishek

On Wed, Feb 27, 2019 at 8:42 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> GlusterD2 is not yet called out for standalone deployments.
>
> You can happily update to glusterfs-5.x (recommend you to wait for
> glusterfs-5.4 which is already tagged, and waiting for packages to be
> built).
>
> Regards,
> Amar
>
> On Wed, Feb 27, 2019 at 4:46 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi,
>>
>> Could you please give an update on this, and also let us know what GlusterD2
>> is (as it is under development in the 5.0 release). Is it OK to uplift to 5.0?
>>
>> Regards,
>> Abhishek
>>
>> On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
>> wrote:
>>
>>> Hi,
>>>
>>> Currently we are using Glusterfs 3.7.6 and are thinking of switching to
>>> Glusterfs 4.1 or 5.0. Since there are a lot of code changes between these
>>> versions, could you please let us know whether there is any compatibility
>>> issue when we uplift to either of the mentioned versions?
>>>
>>> Regards
>>> Abhishek
>>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>
>
>
> --
> Amar Tumballi (amarts)
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Version uplift query

2019-02-27 Thread ABHISHEK PALIWAL
Hi,

Could you please give an update on this, and also let us know what GlusterD2 is
(as it is under development in the 5.0 release). Is it OK to uplift to 5.0?

Regards,
Abhishek

On Tue, Feb 26, 2019 at 5:47 PM ABHISHEK PALIWAL 
wrote:

> Hi,
>
> Currently we are using Glusterfs 3.7.6 and are thinking of switching to
> Glusterfs 4.1 or 5.0. Since there are a lot of code changes between these
> versions, could you please let us know whether there is any compatibility
> issue when we uplift to either of the mentioned versions?
>
> Regards
> Abhishek
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Version uplift query

2019-02-26 Thread ABHISHEK PALIWAL
Hi,

Currently we are using Glusterfs 3.7.6 and are thinking of switching to
Glusterfs 4.1 or 5.0. Since there are a lot of code changes between these
versions, could you please let us know whether there is any compatibility
issue when we uplift to either of the mentioned versions?

Regards
Abhishek
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-25 Thread ABHISHEK PALIWAL
Hi Pranith,

I have some questions if you can answer them:


- What in the LIBC exit() routine has resulted in SIGSEGV in this case?

- Why does the call trace always point to LIBC exit() in all these crash
instances on gluster?

- Can there be any connection between the LIBC exit() crash and SIGTERM
handling at the early start of gluster?



 Regards,

Abhishek

On Tue, Sep 25, 2018 at 2:27 PM Pranith Kumar Karampuri 
wrote:

>
>
> On Tue, Sep 25, 2018 at 2:17 PM ABHISHEK PALIWAL 
> wrote:
>
>> I don't have the steps to reproduce, but it's a race condition where it
>> seems cleanup_and_exit() is accessing data structures which are not yet
>> initialised (as gluster is in the starting phase), because SIGTERM/SIGINT
>> is sent in between.
>>
>
> But the crash happened inside exit() code for which will be in libc which
> doesn't access any data structures in glusterfs.
>
>
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Mon, Sep 24, 2018 at 5:16 PM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi Pranith,
>>>>
>>>> As we know, this problem is triggered at startup of the glusterd
>>>> process when it receives SIGTERM.
>>>>
>>>> I think there is a problem in the glusterfs code: if someone sends
>>>> SIGTERM at startup, the exit handler should not crash; instead it should
>>>> exit with some information.
>>>>
>>>> Could you please let me know the possibility of fixing it from the glusterfs side?
>>>>
>>>
>>> I am not as confident as you about the RC you provided. If you could
>>> give the steps to re-create, I will be happy to confirm that the RC is
>>> correct and then I will send out the fix.
>>>
>>>
>>>>
>>>> Regards,
>>>> Abhishek
>>>>
>>>> On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri <
>>>> pkara...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> Could you please let me know about the bug in libc which you are
>>>>>> talking.
>>>>>>
>>>>>
>>>>> No, I mean, if you give the steps to reproduce, we will be able to pin
>>>>> point if the issue is with libc or glusterfs.
>>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Sep 24, 2018 at 2:01 PM Pranith Kumar Karampuri <
>>>>>> pkara...@redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 24, 2018 at 1:57 PM ABHISHEK PALIWAL <
>>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>>
>>>>>>>> If you see the source code in cleanup_and_exit() we are getting the
>>>>>>>> SIGSEGV crash when 'exit(0)' is triggered.
>>>>>>>>
>>>>>>>
>>>>>>> yes, that is what I was mentioning earlier. It is crashing in libc.
>>>>>>> So either there is a bug in libc (glusterfs actually found 1 bug so far 
>>>>>>> in
>>>>>>> libc, so I wouldn't rule out that possibility) or there is something 
>>>>>>> that
>>>>>>> is happening in glusterfs which is leading to the problem.
>>>>>>> Valgrind/address-sanitizer would help find where the problem could be in
>>>>>>> some cases, so before reaching out libc developers, it is better to 
>>>>>>> figure
>>>>>>> out where the problem is. Do you have steps to recreate it?
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Sep 24, 2018 at 1:41 PM Pranith Kumar Karampuri <
>>>>>>>> pkara...@redhat.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Mon, Sep 24, 2018 at 1:36 PM ABHISHEK PALIWAL <
>>>>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hi Sanju,
>>>>>>>>>>
>>>>>>>>>> Do you have an

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-25 Thread ABHISHEK PALIWAL
I don't have the steps to reproduce, but it's a race condition where it seems
cleanup_and_exit() is accessing data structures which are not yet
initialised (as gluster is in the starting phase), because SIGTERM/SIGINT is
sent in between.
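
A rough sketch of the kind of timing window meant here, in case it helps with
reproduction (the sleep value is an arbitrary assumption, and the race may need many
iterations to hit):

for i in $(seq 1 100); do
    glusterd -N &
    pid=$!
    sleep 0.05
    kill -TERM "$pid"
    wait "$pid"
done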

Regards,
Abhishek

On Mon, Sep 24, 2018 at 9:11 PM Pranith Kumar Karampuri 
wrote:

>
>
> On Mon, Sep 24, 2018 at 5:16 PM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Pranith,
>>
>> As we know, this problem is triggered at startup of the glusterd
>> process when it receives SIGTERM.
>>
>> I think there is a problem in the glusterfs code: if someone sends SIGTERM
>> at startup, the exit handler should not crash; instead it should exit with
>> some information.
>>
>> Could you please let me know the possibility of fixing it from the glusterfs side?
>>
>
> I am not as confident as you about the RC you provided. If you could give
> the steps to re-create, I will be happy to confirm that the RC is correct
> and then I will send out the fix.
>
>
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Sep 24, 2018 at 3:12 PM Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Mon, Sep 24, 2018 at 2:09 PM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Could you please let me know about the bug in libc which you are
>>>> talking.
>>>>
>>>
>>> No, I mean, if you give the steps to reproduce, we will be able to pin
>>> point if the issue is with libc or glusterfs.
>>>
>>>
>>>>
>>>> On Mon, Sep 24, 2018 at 2:01 PM Pranith Kumar Karampuri <
>>>> pkara...@redhat.com> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Sep 24, 2018 at 1:57 PM ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> If you see the source code in cleanup_and_exit() we are getting the
>>>>>> SIGSEGV crash when 'exit(0)' is triggered.
>>>>>>
>>>>>
>>>>> yes, that is what I was mentioning earlier. It is crashing in libc. So
>>>>> either there is a bug in libc (glusterfs actually found 1 bug so far in
>>>>> libc, so I wouldn't rule out that possibility) or there is something that
>>>>> is happening in glusterfs which is leading to the problem.
>>>>> Valgrind/address-sanitizer would help find where the problem could be in
>>>>> some cases, so before reaching out libc developers, it is better to figure
>>>>> out where the problem is. Do you have steps to recreate it?
>>>>>
>>>>>
>>>>>>
>>>>>> On Mon, Sep 24, 2018 at 1:41 PM Pranith Kumar Karampuri <
>>>>>> pkara...@redhat.com> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Sep 24, 2018 at 1:36 PM ABHISHEK PALIWAL <
>>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Sanju,
>>>>>>>>
>>>>>>>> Do you have any update on this?
>>>>>>>>
>>>>>>>
>>>>>>> This seems to happen while the process is dying, in libc. I am not
>>>>>>> completely sure if there is anything glusterfs is contributing to it 
>>>>>>> from
>>>>>>> the bt at the moment. Do you have any steps to re-create this problem? 
>>>>>>> It
>>>>>>> is probably better to run the steps with valgrind/address-sanitizer and 
>>>>>>> see
>>>>>>> if it points to the problem in glusterfs.
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Abhishek
>>>>>>>>
>>>>>>>> On Fri, Sep 21, 2018 at 4:07 PM ABHISHEK PALIWAL <
>>>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Sanju,
>>>>>>>>>
>>>>>>>>> Output of 't a a bt full'
>>>>>>>>>
>>>>>>>>> (gdb) t a a bt full
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Thread 7 (LWP 1743):
>>>>>>>>>
>>>>>

Re: [Gluster-users] [Gluster-devel] Crash in glusterfs!!!

2018-09-21 Thread ABHISHEK PALIWAL
all-template.S:84
#1  0x3fff7a5c4f28 in gf_timer_proc (ctx=0x10027010) at timer.c:205
#2  0x3fff7a4ccb30 in start_thread (arg=0x3fff78a51160) at
pthread_create.c:462
#3  0x3fff7a4170fc in .__clone () at
../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96
---Type  to continue, or q  to quit---

Thread 2 (LWP 445):
#0  0x3fff7a4d4ccc in __pthread_cond_timedwait (cond=0x10059a98,
mutex=0x10059a70, abstime=0x3fff77250670) at pthread_cond_timedwait.c:198
#1  0x3fff7a5f1e74 in syncenv_task (proc=0x10054468) at syncop.c:607
#2  0x3fff7a5f2cdc in syncenv_processor (thdata=0x10054468) at
syncop.c:699
#3  0x3fff7a4ccb30 in start_thread (arg=0x3fff77251160) at
pthread_create.c:462
#4  0x3fff7a4170fc in .__clone () at
../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96

Thread 1 (LWP 443):
#0  0x3fff7a3953b0 in _IO_unbuffer_all () at genops.c:960
#1  _IO_cleanup () at genops.c:1020
#2  0x3fff7a34fd00 in __run_exit_handlers (status=,
listp=, run_list_atexit=run_list_atexit@entry=true) at
exit.c:95
#3  0x3fff7a34fe1c in __GI_exit (status=) at exit.c:104
#4  0x1000984c in cleanup_and_exit (signum=) at
glusterfsd.c:1295
#5  0x10009a64 in glusterfs_sigwaiter (arg=) at
glusterfsd.c:2016
#6  0x3fff7a4ccb30 in start_thread (arg=0x3fff78251160) at
pthread_create.c:462
#7  0x3fff7a4170fc in .__clone () at
../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96
(gdb)
(gdb)

On Fri, Sep 21, 2018 at 3:06 PM Sanju Rakonde  wrote:

> Hi Abhishek,
>
> Can you please share the output of "t a a bt" with us?
>
> Thanks,
> Sanju
>
> On Fri, Sep 21, 2018 at 2:55 PM, ABHISHEK PALIWAL  > wrote:
>
>>
>> We have seen a SIGSEGV crash on glusterfs process on kernel restart at
>> start up.
>>
>> (gdb) bt
>> #0  0x3fffad4463b0 in _IO_unbuffer_all () at genops.c:960
>> #1  _IO_cleanup () at genops.c:1020
>> #2  0x3fffad400d00 in __run_exit_handlers (status=,
>> listp=, run_list_atexit=run_list_atexit@entry=true) at
>> exit.c:95
>> #3  0x3fffad400e1c in __GI_exit (status=) at
>> exit.c:104
>> #4  0x1000984c in cleanup_and_exit (signum=) at
>> glusterfsd.c:1295
>> #5  0x10009a64 in *glusterfs_sigwaiter *(arg=) at
>> glusterfsd.c:2016
>> #6  0x3fffad57db30 in start_thread (arg=0x3fffab302160) at
>> pthread_create.c:462
>> #7  0x3fffad4c7cdc in .__clone () at
>> ../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96
>>
>> (gdb) bt full
>> #0  0x3fffad4463b0 in _IO_unbuffer_all () at genops.c:960
>> __result = 0
>> __self = 0x3fffab302160
>> cnt = 1
>> fp = 0x3fffa4001f00
>> #1  _IO_cleanup () at genops.c:1020
>> result = 0
>> #2  0x3fffad400d00 in __run_exit_handlers (status=,
>> listp=, run_list_atexit=run_list_atexit@entry=true) at
>> exit.c:95
>> ptr = 0x3fffad557000 <__elf_set___libc_atexit_element__IO_cleanup__>
>> #3  0x3fffad400e1c in __GI_exit (status=) at
>> exit.c:104
>> No locals.
>> #4  0x1000984c in cleanup_and_exit (signum=) at
>> glusterfsd.c:1295
>> ctx = 
>> trav = 
>> __FUNCTION__ = > memory at address 0x10010e38)>
>> #5  0x10009a64 in glusterfs_sigwaiter (arg=) at
>> glusterfsd.c:2016
>> set = {__val = {18947, 0 }}
>> ret = 
>> sig = 15
>> #6  0x3fffad57db30 in start_thread (arg=0x3fffab302160) at
>> pthread_create.c:462
>> pd = 0x3fffab302160
>> now = 
>> unwind_buf = {cancel_jmp_buf = {{jmp_buf = {5451414826039278896,
>> 70367357615104, 5451414826003312788, 0, 0, 70367312883712, 70367321268768,
>> 8388608,
>> 70367357575200, 70367913735952, 268595776, 70367357600728,
>> 268588656, 3, 0, 70367357600744, 70367913735600, 70367913735656, 4001536,
>> 70367357576216, 70367321265984, -3187653564, 0 > times>}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data =
>> {prev = 0x0,
>>   cleanup = 0x0, canceltype = 0}}}
>> not_first_call = 
>> pagesize_m1 = 
>> sp = 
>> freesize = 
>> __PRETTY_FUNCTION__ = "start_thread"
>> ---Type  to continue, or q  to quit---
>> #7  0x3fffad4c7cdc in .__clone () at
>> ../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96
>> No locals
>>
>> *Can you please help us in finding the cause for SIGSEGV. ?*
>> *Also please share your understanding on this issue.*
>> --
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-devel mailing list
>> gluster-de...@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-devel
>>
>
>
>
> --
> Thanks,
> Sanju
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Crash in glusterfs!!!

2018-09-21 Thread ABHISHEK PALIWAL
We have seen a SIGSEGV crash on glusterfs process on kernel restart at
start up.

(gdb) bt
#0  0x3fffad4463b0 in _IO_unbuffer_all () at genops.c:960
#1  _IO_cleanup () at genops.c:1020
#2  0x3fffad400d00 in __run_exit_handlers (status=,
listp=, run_list_atexit=run_list_atexit@entry=true) at
exit.c:95
#3  0x3fffad400e1c in __GI_exit (status=) at exit.c:104
#4  0x1000984c in cleanup_and_exit (signum=) at
glusterfsd.c:1295
#5  0x10009a64 in *glusterfs_sigwaiter *(arg=) at
glusterfsd.c:2016
#6  0x3fffad57db30 in start_thread (arg=0x3fffab302160) at
pthread_create.c:462
#7  0x3fffad4c7cdc in .__clone () at
../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96

(gdb) bt full
#0  0x3fffad4463b0 in _IO_unbuffer_all () at genops.c:960
__result = 0
__self = 0x3fffab302160
cnt = 1
fp = 0x3fffa4001f00
#1  _IO_cleanup () at genops.c:1020
result = 0
#2  0x3fffad400d00 in __run_exit_handlers (status=,
listp=, run_list_atexit=run_list_atexit@entry=true) at
exit.c:95
ptr = 0x3fffad557000 <__elf_set___libc_atexit_element__IO_cleanup__>
#3  0x3fffad400e1c in __GI_exit (status=) at exit.c:104
No locals.
#4  0x1000984c in cleanup_and_exit (signum=) at
glusterfsd.c:1295
ctx = 
trav = 
__FUNCTION__ = 
#5  0x10009a64 in glusterfs_sigwaiter (arg=) at
glusterfsd.c:2016
set = {__val = {18947, 0 }}
ret = 
sig = 15
#6  0x3fffad57db30 in start_thread (arg=0x3fffab302160) at
pthread_create.c:462
pd = 0x3fffab302160
now = 
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {5451414826039278896,
70367357615104, 5451414826003312788, 0, 0, 70367312883712, 70367321268768,
8388608,
70367357575200, 70367913735952, 268595776, 70367357600728,
268588656, 3, 0, 70367357600744, 70367913735600, 70367913735656, 4001536,
70367357576216, 70367321265984, -3187653564, 0 }, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data =
{prev = 0x0,
  cleanup = 0x0, canceltype = 0}}}
not_first_call = 
pagesize_m1 = 
sp = 
freesize = 
__PRETTY_FUNCTION__ = "start_thread"
---Type  to continue, or q  to quit---
#7  0x3fffad4c7cdc in .__clone () at
../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:96
No locals

*Can you please help us find the cause of this SIGSEGV?*
*Also, please share your understanding of this issue.*
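
In case it is useful for the investigation, below is a rough sketch of running
the process in the foreground under valgrind and then sending it SIGTERM, so
that any invalid access during cleanup_and_exit() gets reported. The -N
foreground flag, the paths, and the choice of glusterd are assumptions; adapt
it to whichever gluster process actually crashes.

    #!/bin/bash
    # Run the daemon in the foreground under valgrind, then terminate it,
    # so memcheck can report invalid reads/writes during exit handling.
    valgrind --tool=memcheck --track-origins=yes --leak-check=full \
             --log-file=/tmp/glusterd-valgrind.%p.log \
             /usr/sbin/glusterd -N --log-level INFO &
    pid=$!

    sleep 2              # let it get partway through initialisation
    kill -TERM "$pid"
    wait "$pid"
    echo "valgrind logs: /tmp/glusterd-valgrind.*.log"
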
-- 
Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Status of the patch!!!

2018-01-31 Thread ABHISHEK PALIWAL
Hi Atin,

Yes, agreed, you have explained this repeatedly. We too hit this issue only
very rarely, and only when we are doing repeated reboots of the system.

We tried to debug it further but could not identify the situation/rare case in
which the empty info file gets generated and causes this issue.

Since you are working on the fix, I asked in case you already know the race
condition or root cause of the problem.


Regards,
Abhishek

On Wed, Jan 31, 2018 at 5:43 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

> I have repeatedly explained this multiple times the way to hit this
> problem is *extremely rare* and until and unless you prove us wrong and
> explain why do you think you can get into this situation often. I still see
> that information is not being made available to us to think through why
> this fix is critical. Also as I mentioned earlier, this piece of change
> touches upon the core store utils code path which need to be thought out
> really well with all the corner cases before pushing the commit.
>
> On Wed, Jan 31, 2018 at 5:32 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Team,
>>
>> I am facing one issue which is exactly same as mentioned on the below link
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1408431
>>
>> Also there are some patches available to fix the issue but seems those
>> are not approved and still discussion is going on
>>
>> https://review.gluster.org/#/c/16279/
>>
>> Currently the status is "Abandoned".
>>
>> Could you please let me know what is our plan to release this patch?
>> Please respond as it is important for us.
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>> _______
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Status of the patch!!!

2018-01-31 Thread ABHISHEK PALIWAL
Hi Team,

I am facing an issue that is exactly the same as the one described at the link below:

https://bugzilla.redhat.com/show_bug.cgi?id=1408431

There are also some patches available to fix the issue, but it seems they
have not been approved and discussion is still ongoing:

https://review.gluster.org/#/c/16279/

Currently the status is "Abandoned".

Could you please let me know the plan for releasing this patch? Please
respond, as it is important for us.


Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Crash in glusterd!!!

2017-12-06 Thread ABHISHEK PALIWAL
I hope these logs were sufficient... please let me know if you require more
logs.

On Wed, Dec 6, 2017 at 3:26 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Atin,
>
> Please find the backtrace and logs files attached here.
>
> Also, below are the BT from core.
>
> (gdb) bt
>
> #0  0x3fff8834b898 in __GI_raise (sig=) at
> ../sysdeps/unix/sysv/linux/raise.c:55
>
> #1  0x3fff88350fd0 in __GI_abort () at abort.c:89
>
>
>
> [**ALERT: The abort() might not be exactly invoked from the following
> function line.
>
> If the trail function contains multiple abort() calls,
> then you should cross check by other means to get correct abort() call
> location.
>
> This is due to the optimized compilation which hides the
> debug info for multiple abort() calls in a given function.
>
> Refer TR HU16995 for more information]
>
>
>
> #2  0x3fff8838be04 in __libc_message (do_abort=,
> fmt=) at ../sysdeps/posix/libc_fatal.c:175
>
> #3  0x3fff8839aba8 in malloc_printerr (action=,
> str=0x3fff8847e498 "double free or corruption (!prev)", ptr= out>, ar_ptr=) at malloc.c:5007
>
> #4  0x3fff8839ba40 in _int_free (av=0x3fff6c20, p=,
> have_lock=) at malloc.c:3868
>
> #5  0x3fff885e0814 in __gf_free (free_ptr=0x3fff6c045da0) at
> mem-pool.c:336
>
> #6  0x3fff849093c4 in glusterd_friend_sm () at glusterd-sm.c:1295
>
> #7  0x3fff84901a58 in __glusterd_handle_incoming_unfriend_req
> (req=0x3fff8481c06c) at glusterd-handler.c:2606
>
> #8  0x3fff848fb870 in glusterd_big_locked_handler (req=0x3fff8481c06c,
> actor_fn=@0x3fff84a43e70: 0x3fff84901830 
> <__glusterd_handle_incoming_unfriend_req>)
> at glusterd-handler.c:83
>
> #9  0x3fff848fbd08 in glusterd_handle_incoming_unfriend_req
> (req=) at glusterd-handler.c:2615
>
> #10 0x3fff8854e87c in rpcsvc_handle_rpc_call (svc=0x10062fd0
> <_GLOBAL__sub_I__ZN27UehChSwitchFachToDchC_ActorC2EP12RTControllerP10RTActorRef()+1148>,
> trans=, msg=0x3fff6c000920) at rpcsvc.c:705
>
> #11 0x3fff8854eb7c in rpcsvc_notify (trans=0x3fff74002210,
> mydata=, event=, data=) at
> rpcsvc.c:799
>
> #12 0x3fff885514fc in rpc_transport_notify (this=,
> event=, data=) at rpc-transport.c:546
>
> #13 0x3fff847fcd44 in socket_event_poll_in 
> (this=this@entry=0x3fff74002210)
> at socket.c:2236
>
> #14 0x3fff847ff89c in socket_event_handler (fd=,
> idx=, data=0x3fff74002210, poll_in=,
> poll_out=, poll_err=) at socket.c:2349
>
> #15 0x3fff88616874 in event_dispatch_epoll_handler
> (event=0x3fff83d9d6a0, event_pool=0x10045bc0 <_GLOBAL__sub_I__
> ZN29DrhIfRhControlPdrProxyC_ActorC2EP12RTControllerP10RTActorRef()+116>)
> at event-epoll.c:575
>
> #16 event_dispatch_epoll_worker (data=0x100bb4a0
> <main_thread_func__()+1756>) at event-epoll.c:678
>
> #17 0x3fff884cfb10 in start_thread (arg=0x3fff83d9e160) at
> pthread_create.c:339
>
> #18 0x3fff88419c0c in .__clone () at ../sysdeps/unix/sysv/linux/
> powerpc/powerpc64/clone.S:96
>
>
>
> (gdb) bt full
>
> #0  0x3fff8834b898 in __GI_raise (sig=) at
> ../sysdeps/unix/sysv/linux/raise.c:55
>
> r4 = 1560
>
> r7 = 16
>
> arg2 = 1560
>
> r5 = 6
>
> r8 = 0
>
> arg3 = 6
>
> r0 = 250
>
> r3 = 0
>
> r6 = 8
>
> arg1 = 0
>
> sc_err = 
>
> sc_ret = 
>
> pd = 0x3fff83d9e160
>
> pid = 0
>
> ---Type  to continue, or q  to quit---
>
> selftid = 1560
>
> #1  0x3fff88350fd0 in __GI_abort () at abort.c:89
>
> save_stage = 2
>
> act = {__sigaction_handler = {sa_handler = 0x0, sa_sigaction =
> 0x0}, sa_mask = {__val = {0 }}, sa_flags = 0, sa_restorer
> = 0x0}
>
> sigs = {__val = {32, 0 }}
>
>
>
> [**ALERT: The abort() might not be exactly invoked from the following
> function line.
>
> If the trail function contains multiple abort() calls,
> then you should cross check by other means to get correct abort() call
> location.
>
> This is due to the optimized compilation which hides the
> debug info for multiple abort() calls in a given function.
>
> Refer TR HU16995 for more information]
>
>
>
> #2  0x3fff8838be04 in __libc_message (do_abort=,
> fmt=) at ../sysdeps/posix/libc_fatal.c:175
>
> ap = 
>
> fd = 
>
> on_2 = 
>
> list = 
>
> nlist = 
>
> cp = 
>
> written 

Re: [Gluster-users] Crash in glusterd!!!

2017-12-06 Thread ABHISHEK PALIWAL
Any suggestions?

On Dec 6, 2017 11:51, "ABHISHEK PALIWAL" <abhishpali...@gmail.com> wrote:

> Hi Team,
>
> We are getting a crash in glusterd after it starts. When I looked into the
> brick logs to debug it, we found the errors below:
>
> [2017-12-01 14:10:14.684122] E [MSGID: 100018]
> [glusterfsd.c:1960:glusterfs_pidfile_update] 0-glusterfsd: pidfile
> /system/glusterd/vols/c_glusterfs/run/10.32.1.144-opt-lvmdir-c2-brick.pid
> lock failed [Resource temporarily unavailable]
> :
> :
> :
> [2017-12-01 14:10:16.862903] E [MSGID: 113001] 
> [posix-helpers.c:1228:posix_fhandle_pair]
> 0-c_glusterfs-posix: fd=18: key:trusted.bit-rot.version [No space left on
> device]
> [2017-12-01 14:10:16.862985] I [MSGID: 115063] 
> [server-rpc-fops.c:1317:server_ftruncate_cbk]
> 0-c_glusterfs-server: 92: FTRUNCATE 1 
> (934f08b7-e3b5-4690-84fc-742a4b1fb78b)==>
> (No space left on device) [No space left on device]
> [2017-12-01 14:10:16.907037] E [MSGID: 113001] 
> [posix-helpers.c:1228:posix_fhandle_pair]
> 0-c_glusterfs-posix: fd=17: key:trusted.bit-rot.version [No space left on
> device]
> [2017-12-01 14:10:16.907108] I [MSGID: 115063] 
> [server-rpc-fops.c:1317:server_ftruncate_cbk]
> 0-c_glusterfs-server: 35: FTRUNCATE 0 
> (109d6537-a1ec-4556-8ce1-04c365c451eb)==>
> (No space left on device) [No space left on device]
> [2017-12-01 14:10:16.947541] E [MSGID: 113001] 
> [posix-helpers.c:1228:posix_fhandle_pair]
> 0-c_glusterfs-posix: fd=17: key:trusted.bit-rot.version [No space left on
> device]
> [2017-12-01 14:10:16.947623] I [MSGID: 115063] 
> [server-rpc-fops.c:1317:server_ftruncate_cbk]
> 0-c_glusterfs-server: 70: FTRUNCATE 0 
> (8f9c8054-b0d7-4b93-a95b-cd3ab249c56d)==>
> (No space left on device) [No space left on device]
> [2017-12-01 14:10:16.968515] E [MSGID: 113001] 
> [posix.c:4616:_posix_remove_xattr]
> 0-c_glusterfs-posix: removexattr failed on /opt/lvmdir/c2/brick/.
> glusterfs/00/00/----0001/configuration (for
> trusted.glusterfs.dht) [No space left on device]
> [2017-12-01 14:10:16.968589] I [MSGID: 115058]
> [server-rpc-fops.c:740:server_removexattr_cbk] 0-c_glusterfs-server: 90:
> REMOVEXATTR 
> (a240d2fd-869c-408d-9b95-62ee1bff074e) of key  ==> (No space left on
> device) [No space left on device]
> [2017-12-01 14:10:17.039815] E [MSGID: 113001] 
> [posix-helpers.c:1228:posix_fhandle_pair]
> 0-c_glusterfs-posix: fd=17: key:trusted.bit-rot.version [No space left on
> device]
> [2017-12-01 14:10:17.039900] I [MSGID: 115063] 
> [server-rpc-fops.c:1317:server_ftruncate_cbk]
> 0-c_glusterfs-server: 152: FTRUNCATE 0 
> (d67bcfcd-ff19-4b58-9823-46d6cce9ace3)==>
> (No space left on device) [No space left on device]
> [2017-12-01 14:10:17.048767] E [MSGID: 113001] 
> [posix-helpers.c:1228:posix_fhandle_pair]
> 0-c_glusterfs-posix: fd=17: key:trusted.bit-rot.version [No space left on
> device]
> [2017-12-01 14:10:17.048874] I [MSGID: 115063] 
> [server-rpc-fops.c:1317:server_ftruncate_cbk]
> 0-c_glusterfs-server: 163: FTRUNCATE 0 
> (0e3ee6ad-408b-4fcf-a1a7-4262ec113316)==>
> (No space left on device) [No space left on device]
> [2017-12-01 14:10:17.075007] E [MSGID: 113001] 
> [posix.c:4616:_posix_remove_xattr]
> 0-c_glusterfs-posix: removexattr failed on /opt/lvmdir/c2/brick/.
> glusterfs/00/00/----0001/java (for
> trusted.glusterfs.dht) [No space left on device]
>
> Also, we are running out of disk space.
>
> Could anyone please explain what glusterd is doing in the brick that is
> causing it to crash?
>
> Please find the brick logs in attachment.
>
> Thanks in advance!!!
> --
> Regards
> Abhishek Paliwal
>
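
Since the brick log above is full of "No space left on device" errors, a quick
first check is whether the brick filesystem is out of space or out of inodes.
A rough sketch, using the brick path that appears in the log:

    # free space and free inodes on the brick filesystem
    df -h /opt/lvmdir/c2/brick
    df -i /opt/lvmdir/c2/brick

    # largest consumers under the brick, including the hidden .glusterfs tree
    du -xsh /opt/lvmdir/c2/brick/* /opt/lvmdir/c2/brick/.glusterfs 2>/dev/null \
        | sort -h | tail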
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Permission for glusterfs logs.

2017-09-20 Thread ABHISHEK PALIWAL
Hi Team,

I made some modifications in the glusterfs code and am now able to change the
permissions of most of the files.

However, two files still have permission 0600:

1. cli.log
2. the log file that records the mount information for the "mount -t glusterfs"
command

I would really appreciate it if someone could shed some light on this area.
Also, is there any side effect of changing these permissions, apart from other
users being able to access the files?
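
As a stopgap while the remaining two files are investigated, the permissions
can simply be relaxed after the fact; note (as pointed out elsewhere in this
thread) that freshly rotated logs come back as 0600, so this has to be
re-applied after each rotation. The group name here is only an assumption:

    # let members of an (assumed) 'glusterlog' group read the logs
    groupadd -f glusterlog
    chgrp -R glusterlog /var/log/glusterfs
    chmod -R g+rX /var/log/glusterfs

    # re-apply periodically (e.g. hourly via cron) to cover rotated files:
    # 0 * * * * root chgrp -R glusterlog /var/log/glusterfs && chmod -R g+rX /var/log/glusterfs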

Regards,
Abhishek

On Tue, Sep 19, 2017 at 6:52 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Any suggestion would be appreciated...
>
> On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpali...@gmail.com> wrote:
>
>> Any quick suggestion.?
>>
>> On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> As you can see permission for the glusterfs logs in /var/log/glusterfs
>>> is 600.
>>>
>>> drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
>>> *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
>>> drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
>>> drwxr-xr-x 3 root root  100 Jan  3 20:21 .
>>> *-rw--- 1 root root 2102 Jan  3 20:21 etc-glusterfs-glusterd.vol.log*
>>>
>>> Due to that non-root user is not able to access these logs files, could
>>> you please let me know how can I change these permission. So that non-root
>>> user can also access these log files.
>>>
>>> Regards,
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Permission for glusterfs logs.

2017-09-20 Thread ABHISHEK PALIWAL
I have modified the source code and it works fine, but the permissions of the
two files below still do not change even after the modification.

1. cli.log
2. the log file that records the mount information for the "mount -t glusterfs"
command

On Wed, Sep 20, 2017 at 5:20 PM, Kaleb S. KEITHLEY <kkeit...@redhat.com>
wrote:

> On 09/18/2017 09:22 PM, ABHISHEK PALIWAL wrote:
> > Any suggestion would be appreciated...
> >
> > On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpali...@gmail.com
> > <mailto:abhishpali...@gmail.com>> wrote:
> >
> > Any quick suggestion.?
> >
> > On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL
> > <abhishpali...@gmail.com <mailto:abhishpali...@gmail.com>> wrote:
> >
> > Hi Team,
> >
> > As you can see permission for the glusterfs logs in
> > /var/log/glusterfs is 600.
> >
> > drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
> > *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
> > drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
> > drwxr-xr-x 3 root root  100 Jan  3 20:21 .
> > *-rw--- 1 root root 2102 Jan  3 20:21
> > etc-glusterfs-glusterd.vol.log*
> >
> > Due to that non-root user is not able to access these logs
> > files, could you please let me know how can I change these
> > permission. So that non-root user can also access these log
> files.
> >
>
> There is no "quick fix."  Gluster creates the log files with 0600 — like
> nearly everything else in /var/log.
>
> The admin can chmod the files, but when the logs rotate the new log
> files will be 0600 again.
>
> You'd have to patch the source and rebuild to get different permission
> bits.
>
> You can probably do something with ACLs, but as above, when the logs
> rotate the new files won't have the ACLs.
>
>
>
> --
>
> Kaleb
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Permission for glusterfs logs.

2017-09-20 Thread ABHISHEK PALIWAL
Any suggestion would be appreciated...

On Sep 18, 2017 15:05, "ABHISHEK PALIWAL" <abhishpali...@gmail.com> wrote:

> Any quick suggestion.?
>
> On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Team,
>>
>> As you can see permission for the glusterfs logs in /var/log/glusterfs is
>> 600.
>>
>> drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
>> *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
>> drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
>> drwxr-xr-x 3 root root  100 Jan  3 20:21 .
>> *-rw--- 1 root root 2102 Jan  3 20:21 etc-glusterfs-glusterd.vol.log*
>>
>> Due to that non-root user is not able to access these logs files, could
>> you please let me know how can I change these permission. So that non-root
>> user can also access these log files.
>>
>> Regards,
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Permission for glusterfs logs.

2017-09-19 Thread ABHISHEK PALIWAL
Any quick suggestions?

On Mon, Sep 18, 2017 at 1:50 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> As you can see permission for the glusterfs logs in /var/log/glusterfs is
> 600.
>
> drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
> *-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
> drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
> drwxr-xr-x 3 root root  100 Jan  3 20:21 .
> *-rw--- 1 root root 2102 Jan  3 20:21 etc-glusterfs-glusterd.vol.log*
>
> Due to that non-root user is not able to access these logs files, could
> you please let me know how can I change these permission. So that non-root
> user can also access these log files.
>
> Regards,
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Permission for glusterfs logs.

2017-09-19 Thread ABHISHEK PALIWAL
Hi Team,

As you can see, the permissions of the glusterfs logs in /var/log/glusterfs
are 0600.

drwxr-xr-x 3 root root  140 Jan  1 00:00 ..
*-rw--- 1 root root0 Jan  3 20:21 cmd_history.log*
drwxr-xr-x 2 root root   40 Jan  3 20:21 bricks
drwxr-xr-x 3 root root  100 Jan  3 20:21 .
*-rw--- 1 root root 2102 Jan  3 20:21 etc-glusterfs-glusterd.vol.log*

Because of that, non-root users are not able to access these log files. Could
you please let me know how I can change these permissions so that non-root
users can also access the log files?

Regards,
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] High load on glusterfs!!

2017-08-30 Thread ABHISHEK PALIWAL
Do we have ACL support in NFS-Ganesha?

On Aug 30, 2017 3:08 PM, "Niels de Vos" <nde...@redhat.com> wrote:

> On Wed, Aug 30, 2017 at 01:52:59PM +0530, ABHISHEK PALIWAL wrote:
> > What is Gluster/NFS and how can we use this.
>
> Gluster/NFS (or gNFS) is the NFS-server that comes with GlusterFS. It is
> a NFSv3 server and can only be used to export Gluster volumes.
>
> You can enable it:
>  - install the glusterfs-gnfs RPM (glusterfs >= 3.11)
>  - the glusterfs-server RPM might contain the NFS-server (glusterfs < 3.11)
>  - build with "./configure --enable-gnfs"
>  - enable per volume with: gluster volume set $VOLUME nfs.disable false
>  - logs are in /var/log/gluster/nfs.log
>
> But really, NFS-Ganesha is the recommendation. It has many more features
> and will receive regular updates for improvements.
>
> Niels
>
>
> >
> > On Wed, Aug 30, 2017 at 1:24 PM, Niels de Vos <nde...@redhat.com> wrote:
> >
> > > On Thu, Aug 17, 2017 at 12:03:02PM +0530, ABHISHEK PALIWAL wrote:
> > > > Hi Team,
> > > >
> > > > I have an query regarding the usage of ACL on gluster volume. I have
> > > > noticed that when we use normal gluster volume (without ACL) CPU
> load is
> > > > low, but when we apply the ACL on gluster volume which internally
> uses
> > > Fuse
> > > > ACL, CPU load gets increase about 6x times.
> > > >
> > > > Could you please let me know is this expected or we can do some other
> > > > configuration to reduce this type of overhead on gluster volume with
> > > ACLs.
> > > >
> > > > For more clarification we are using kernel NFS for exporting the
> gluster
> > > > volume.
> > >
> > > Exporting Gluster volumes over FUSE and kernel NFS is not something we
> > > suggest and test. There are (or at least were) certain limitations in
> > > FUSE that prevented good support for this.
> > >
> > > Please use NFS-Ganesha instead, that is the NFS server we actively
> > > develop with. Gluster/NFS is still available too, but is only receiving
> > > the occasional fixes and is only suggested for legacy users that did
> not
> > > move to NFS-Ganesha yet.
> > >
> > > HTH,
> > > Niels
> > >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] High load on glusterfs!!

2017-08-30 Thread ABHISHEK PALIWAL
What is Gluster/NFS, and how can we use it?

On Wed, Aug 30, 2017 at 1:24 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Thu, Aug 17, 2017 at 12:03:02PM +0530, ABHISHEK PALIWAL wrote:
> > Hi Team,
> >
> > I have an query regarding the usage of ACL on gluster volume. I have
> > noticed that when we use normal gluster volume (without ACL) CPU load is
> > low, but when we apply the ACL on gluster volume which internally uses
> Fuse
> > ACL, CPU load gets increase about 6x times.
> >
> > Could you please let me know is this expected or we can do some other
> > configuration to reduce this type of overhead on gluster volume with
> ACLs.
> >
> > For more clarification we are using kernel NFS for exporting the gluster
> > volume.
>
> Exporting Gluster volumes over FUSE and kernel NFS is not something we
> suggest and test. There are (or at least were) certain limitations in
> FUSE that prevented good support for this.
>
> Please use NFS-Ganesha instead, that is the NFS server we actively
> develop with. Gluster/NFS is still available too, but is only receiving
> the occasional fixes and is only suggested for legacy users that did not
> move to NFS-Ganesha yet.
>
> HTH,
> Niels
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] High load on glusterfs!!

2017-08-30 Thread ABHISHEK PALIWAL
Could anyone suggest something here?

On Aug 17, 2017 12:03 PM, "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> I have an query regarding the usage of ACL on gluster volume. I have
> noticed that when we use normal gluster volume (without ACL) CPU load is
> low, but when we apply the ACL on gluster volume which internally uses Fuse
> ACL, CPU load gets increase about 6x times.
>
> Could you please let me know is this expected or we can do some other
> configuration to reduce this type of overhead on gluster volume with ACLs.
>
> For more clarification we are using kernel NFS for exporting the gluster
> volume.
>
> Please let me know if you require more information.
>
> --
> Regards
> Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] High load on glusterfs!!

2017-08-17 Thread ABHISHEK PALIWAL
Hi Team,

I have a query regarding the use of ACLs on a gluster volume. I have noticed
that with a normal gluster volume (without ACLs) the CPU load is low, but when
we apply ACLs on the gluster volume (which internally uses FUSE ACLs), the CPU
load increases roughly 6x.

Could you please let me know whether this is expected, or whether some other
configuration can reduce this kind of overhead on a gluster volume with ACLs?

For clarification, we are using kernel NFS to export the gluster volume.

Please let me know if you require more information.

-- 
Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] High load on CPU due to glusterfsd process

2017-08-02 Thread ABHISHEK PALIWAL
Could you please respond?

On Fri, Jul 28, 2017 at 5:55 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> Whenever I am performing the IO operation on gluster volume, the loads is
> getting increase on CPU which reaches upto 70-80 sometimes.
>
> when we started debugging, found that the io_worker thread is created to
> server the IO request and consume high CPU till that request gets completed.
>
> Could you please let me know why io_worker thread takes this much of CPU.
>
> Is there any way to resole this?
>
> --
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] High load on CPU due to glusterfsd process

2017-07-28 Thread ABHISHEK PALIWAL
Hi Team,

Whenever I perform I/O operations on the gluster volume, the CPU load
increases and sometimes reaches 70-80.

When we started debugging, we found that an io_worker thread is created to
serve the I/O request and consumes a lot of CPU until that request completes.

Could you please let me know why the io_worker thread takes this much CPU?

Is there any way to resolve this?
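
One way to narrow down which file operations the io_worker threads spend their
time on is gluster's built-in profiling; a rough sketch (the volume name is a
placeholder):

    VOL=gv0   # placeholder; use the actual volume name

    gluster volume profile "$VOL" start
    # ... run the I/O workload that drives the CPU load up ...
    gluster volume profile "$VOL" info      # per-brick FOP counts and latencies
    gluster volume top "$VOL" read-perf     # heaviest read patterns
    gluster volume top "$VOL" write-perf    # heaviest write patterns
    gluster volume profile "$VOL" stop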

-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Empty info file preventing glusterd from starting

2017-06-01 Thread ABHISHEK PALIWAL
Hi Niels,

I have backported that patch to Gluster 3.7.6 and we haven't seen any other
issue caused by it.

Everything has been fine so far in our testing, which is still running
extensively.

Regards,
Abhishek

On Thu, Jun 1, 2017 at 1:46 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Thu, Jun 01, 2017 at 01:03:25PM +0530, ABHISHEK PALIWAL wrote:
> > Hi Niels,
> >
> > No problem we wil try to backport that patch on 3.7.6.
> >
> > Could you please let me know in which release Gluster community is going
> to
> > provide this patch and date of that release?
>
> It really depends on when someone has time to work on it. Our releases
> are time based, and will happen even when a bugfix/feature is not merged
> or implemented. We can't give any guarantees about availability for
> final patche (or backports).
>
> The best you can do is help testing a potential fix, and work with the
> developer(s) of that patch to improve and get it accepted in the master
> branch. If developers do not have time to work on it, or progress is
> slow, you can ask them if you can take it over from if you are
> comfortable with writing the code.
>
> Niels
>
>
> >
> > Regards,
> > Abhishek
> >
> > On Wed, May 31, 2017 at 10:05 PM, Niels de Vos <nde...@redhat.com>
> wrote:
> >
> > > On Wed, May 31, 2017 at 04:08:06PM +0530, ABHISHEK PALIWAL wrote:
> > > > We are using 3.7.6 and on link https://review.gluster.org/#/c/16279
> > > status
> > > > is "can't merge"
> > >
> > > Note that 3.7.x will not get any updates anymore. We currently maintain
> > > version 3.8.x, 3.10.x and 3.11.x. See the release schedele for more
> > > details:
> > >   https://www.gluster.org/community/release-schedule/
> > >
> > > Niels
> > >
> > >
> > > >
> > > > On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <atumb...@redhat.com>
> > > wrote:
> > > >
> > > > > This is already part of 3.11.0 release?
> > > > >
> > > > > On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <
> > > abhishpali...@gmail.com
> > > > > > wrote:
> > > > >
> > > > >> Hi Atin,
> > > > >>
> > > > >> Could you please let us know any time plan for deliver of this
> patch.
> > > > >>
> > > > >> Regards,
> > > > >> Abhishek
> > > > >>
> > > > >> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <
> > > abhishpali...@gmail.com
> > > > >> > wrote:
> > > > >>
> > > > >>> Actually it is very risky if it will reproduce in production
> thats is
> > > > >>> why I said it is on high priority as want to resolve it before
> > > production.
> > > > >>>
> > > > >>> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <
> amukh...@redhat.com>
> > > > >>> wrote:
> > > > >>>
> > > > >>>>
> > > > >>>>
> > > > >>>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <
> > > > >>>> abhishpali...@gmail.com> wrote:
> > > > >>>>
> > > > >>>>> Hi Atin,
> > > > >>>>>
> > > > >>>>> Thanks for your reply.
> > > > >>>>>
> > > > >>>>>
> > > > >>>>> Its urgent because this error is very rarely reproducible we
> have
> > > seen
> > > > >>>>> this 2 3 times in our system till now.
> > > > >>>>>
> > > > >>>>> We have delivery in near future so that we want it asap. Please
> > > try to
> > > > >>>>> review it internally.
> > > > >>>>>
> > > > >>>>
> > > > >>>> I don't think your statements justified the reason of urgency
> as (a)
> > > > >>>> you have mentioned it to be *rarely* reproducible and (b) I am
> still
> > > > >>>> waiting for a real use case where glusterd will go through
> multiple
> > > > >>>> restarts in a loop?
> > > > >>>>
> > > > >>>>
> > > > >>>>> Regards,
> > > > >>>>> Abhishek
> > > > >>>>>
&

Re: [Gluster-users] [Gluster-devel] Empty info file preventing glusterd from starting

2017-06-01 Thread ABHISHEK PALIWAL
Hi Niels,

No problem, we will try to backport that patch to 3.7.6.

Could you please let me know in which release the Gluster community is going
to provide this patch, and the date of that release?

Regards,
Abhishek

On Wed, May 31, 2017 at 10:05 PM, Niels de Vos <nde...@redhat.com> wrote:

> On Wed, May 31, 2017 at 04:08:06PM +0530, ABHISHEK PALIWAL wrote:
> > We are using 3.7.6 and on link https://review.gluster.org/#/c/16279
> status
> > is "can't merge"
>
> Note that 3.7.x will not get any updates anymore. We currently maintain
> version 3.8.x, 3.10.x and 3.11.x. See the release schedele for more
> details:
>   https://www.gluster.org/community/release-schedule/
>
> Niels
>
>
> >
> > On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <atumb...@redhat.com>
> wrote:
> >
> > > This is already part of 3.11.0 release?
> > >
> > > On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > > > wrote:
> > >
> > >> Hi Atin,
> > >>
> > >> Could you please let us know any time plan for deliver of this patch.
> > >>
> > >> Regards,
> > >> Abhishek
> > >>
> > >> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > >> > wrote:
> > >>
> > >>> Actually it is very risky if it will reproduce in production thats is
> > >>> why I said it is on high priority as want to resolve it before
> production.
> > >>>
> > >>> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <amukh...@redhat.com>
> > >>> wrote:
> > >>>
> > >>>>
> > >>>>
> > >>>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <
> > >>>> abhishpali...@gmail.com> wrote:
> > >>>>
> > >>>>> Hi Atin,
> > >>>>>
> > >>>>> Thanks for your reply.
> > >>>>>
> > >>>>>
> > >>>>> Its urgent because this error is very rarely reproducible we have
> seen
> > >>>>> this 2 3 times in our system till now.
> > >>>>>
> > >>>>> We have delivery in near future so that we want it asap. Please
> try to
> > >>>>> review it internally.
> > >>>>>
> > >>>>
> > >>>> I don't think your statements justified the reason of urgency as (a)
> > >>>> you have mentioned it to be *rarely* reproducible and (b) I am still
> > >>>> waiting for a real use case where glusterd will go through multiple
> > >>>> restarts in a loop?
> > >>>>
> > >>>>
> > >>>>> Regards,
> > >>>>> Abhishek
> > >>>>>
> > >>>>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <
> amukh...@redhat.com>
> > >>>>> wrote:
> > >>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <
> > >>>>>> abhishpali...@gmail.com> wrote:
> > >>>>>>
> > >>>>>>> + Muthu-vingeshwaran
> > >>>>>>>
> > >>>>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> > >>>>>>> abhishpali...@gmail.com> wrote:
> > >>>>>>>
> > >>>>>>>> Hi Atin/Team,
> > >>>>>>>>
> > >>>>>>>> We are using gluster-3.7.6 with setup of two brick and while
> > >>>>>>>> restart of system I have seen that the glusterd daemon is
> getting failed
> > >>>>>>>> from start.
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> At the time of analyzing the logs from etc-glusterfs...log
> file
> > >>>>>>>> I have received the below logs
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]
> > >>>>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running
> > >>>>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p
> > >>>>>>>> 

Re: [Gluster-users] [Gluster-devel] Empty info file preventing glusterd from starting

2017-05-31 Thread ABHISHEK PALIWAL
So, is anyone working on fixing this issue, either through this patch or in
some other way? If yes, please share the time plan.

On Wed, May 31, 2017 at 4:25 PM, Amar Tumballi <atumb...@redhat.com> wrote:

>
>
> On Wed, May 31, 2017 at 4:08 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> We are using 3.7.6 and on link https://review.gluster.org/#/c/16279
>> status is "can't merge"
>>
>> On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <atumb...@redhat.com>
>> wrote:
>>
>>> This is already part of 3.11.0 release?
>>>
>>
> Sorry about confusion! I was thinking of another patch. This patch is not
> part of any releases yet.
>
> It says can't merge because it is failing regression tests and also looks
> like it needs a rebase to latest master branch too.
>
> -Amar
>
>
>>> On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi Atin,
>>>>
>>>> Could you please let us know any time plan for deliver of this patch.
>>>>
>>>> Regards,
>>>> Abhishek
>>>>
>>>> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Actually it is very risky if it will reproduce in production thats is
>>>>> why I said it is on high priority as want to resolve it before production.
>>>>>
>>>>>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Empty info file preventing glusterd from starting

2017-05-31 Thread ABHISHEK PALIWAL
We are using 3.7.6, and on https://review.gluster.org/#/c/16279 the status
is "can't merge".

On Wed, May 31, 2017 at 4:05 PM, Amar Tumballi <atumb...@redhat.com> wrote:

> This is already part of 3.11.0 release?
>
> On Wed, May 31, 2017 at 3:47 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Atin,
>>
>> Could you please let us know any time plan for deliver of this patch.
>>
>> Regards,
>> Abhishek
>>
>> On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
>> > wrote:
>>
>>> Actually it is very risky if it will reproduce in production thats is
>>> why I said it is on high priority as want to resolve it before production.
>>>
>>> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <amukh...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Hi Atin,
>>>>>
>>>>> Thanks for your reply.
>>>>>
>>>>>
>>>>> Its urgent because this error is very rarely reproducible we have seen
>>>>> this 2 3 times in our system till now.
>>>>>
>>>>> We have delivery in near future so that we want it asap. Please try to
>>>>> review it internally.
>>>>>
>>>>
>>>> I don't think your statements justified the reason of urgency as (a)
>>>> you have mentioned it to be *rarely* reproducible and (b) I am still
>>>> waiting for a real use case where glusterd will go through multiple
>>>> restarts in a loop?
>>>>
>>>>
>>>>> Regards,
>>>>> Abhishek
>>>>>
>>>>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <amukh...@redhat.com>
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <
>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>
>>>>>>> + Muthu-vingeshwaran
>>>>>>>
>>>>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>>>>>>> abhishpali...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Atin/Team,
>>>>>>>>
>>>>>>>> We are using gluster-3.7.6 with setup of two brick and while
>>>>>>>> restart of system I have seen that the glusterd daemon is getting 
>>>>>>>> failed
>>>>>>>> from start.
>>>>>>>>
>>>>>>>>
>>>>>>>> At the time of analyzing the logs from etc-glusterfs...log file
>>>>>>>> I have received the below logs
>>>>>>>>
>>>>>>>>
>>>>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]
>>>>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running
>>>>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p
>>>>>>>> /var/run/glusterd.pid --log-level INFO)
>>>>>>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478]
>>>>>>>> [glusterd.c:1350:init] 0-management: Maximum allowed open file 
>>>>>>>> descriptors
>>>>>>>> set to 65536
>>>>>>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479]
>>>>>>>> [glusterd.c:1399:init] 0-management: Using /system/glusterd as working
>>>>>>>> directory
>>>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
>>>>>>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
>>>>>>>> retrieved op-version: 30706
>>>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
>>>>>>>> [glusterd-store.c:2562:glusterd_store_update_volinfo]
>>>>>>>> 0-management: Failed to get next store iter
>>>>>>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]
>>>>>>>> [glusterd-store.c:2844:glusterd_store_retrieve_volume]
>>>>>>>> 0-management: Failed to update volinfo for c_glusterfs volume
>>>>>>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]
>>>>>>>> [glusterd-store.c:3042:glusterd_store_retr

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-31 Thread ABHISHEK PALIWAL
Hi Atin,

Could you please let us know the time plan for delivery of this patch?

Regards,
Abhishek

On Tue, May 9, 2017 at 6:37 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Actually it is very risky if it will reproduce in production thats is why
> I said it is on high priority as want to resolve it before production.
>
> On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
>
>>
>>
>> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
>> > wrote:
>>
>>> Hi Atin,
>>>
>>> Thanks for your reply.
>>>
>>>
>>> Its urgent because this error is very rarely reproducible we have seen
>>> this 2 3 times in our system till now.
>>>
>>> We have delivery in near future so that we want it asap. Please try to
>>> review it internally.
>>>
>>
>> I don't think your statements justified the reason of urgency as (a) you
>> have mentioned it to be *rarely* reproducible and (b) I am still waiting
>> for a real use case where glusterd will go through multiple restarts in a
>> loop?
>>
>>
>>> Regards,
>>> Abhishek
>>>
>>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <amukh...@redhat.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> + Muthu-vingeshwaran
>>>>>
>>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> Hi Atin/Team,
>>>>>>
>>>>>> We are using gluster-3.7.6 with setup of two brick and while restart
>>>>>> of system I have seen that the glusterd daemon is getting failed from 
>>>>>> start.
>>>>>>
>>>>>>
>>>>>> At the time of analyzing the logs from etc-glusterfs...log file I
>>>>>> have received the below logs
>>>>>>
>>>>>>
>>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]
>>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running
>>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p
>>>>>> /var/run/glusterd.pid --log-level INFO)
>>>>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478] [glusterd.c:1350:init]
>>>>>> 0-management: Maximum allowed open file descriptors set to 65536
>>>>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
>>>>>> 0-management: Using /system/glusterd as working directory
>>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
>>>>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
>>>>>> retrieved op-version: 30706
>>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
>>>>>> [glusterd-store.c:2562:glusterd_store_update_volinfo] 0-management:
>>>>>> Failed to get next store iter
>>>>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]
>>>>>> [glusterd-store.c:2844:glusterd_store_retrieve_volume] 0-management:
>>>>>> Failed to update volinfo for c_glusterfs volume
>>>>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]
>>>>>> [glusterd-store.c:3042:glusterd_store_retrieve_volumes]
>>>>>> 0-management: Unable to restore volume: c_glusterfs
>>>>>> [2017-05-06 03:33:39.827722] E [MSGID: 101019]
>>>>>> [xlator.c:428:xlator_init] 0-management: Initialization of volume
>>>>>> 'management' failed, review your volfile again
>>>>>> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_init]
>>>>>> 0-management: initializing translator failed
>>>>>> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_activate]
>>>>>> 0-graph: init failed
>>>>>> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_and_exit]
>>>>>> (-->/usr/sbin/glusterd(glusterfs_volumes_init-0x1b0b8) [0x1000a648]
>>>>>> -->/usr/sbin/glusterd(glusterfs_process_volfp-0x1b210) [0x1000a4d8]
>>>>>> -->/usr/sbin/glusterd(cleanup_and_exit-0x1beac) [0x100097ac] ) 0-:
>>>>>> received signum (0), shutting down
>>>>>>
>>>>>
>>>> Abhishek,
>>

Re: [Gluster-users] High load on glusterfsd process

2017-05-23 Thread ABHISHEK PALIWAL
Hi Kotresh,

As we know, this problem occurs when BitRot starts versioning a large file.

Is there any possibility of disabling this behaviour completely, i.e. removing
the BitRot versioning, so that it does not happen even when BitRot is disabled?

Regards,
Abhishek

On Tue, Apr 25, 2017 at 12:47 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Thanks Kotresh.
>
> Let me discuss in my team and will let you know.
>
> Regards,
> Abhishek
>
> On Tue, Apr 25, 2017 at 12:41 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> As this is an enhancement it won't be back ported to 3.7/3.8/3.10
>> It would be only available from upcoming 3.11 release.
>>
>> But I did try applying it to 3.7.6. It has lot of conflicts.
>> If it's important for you, you can upgrade to latest version.
>> available and back port it. If it's impossible to upgrade to
>> latest version, atleast 3.7.20 would do. It has minimal
>> conflicts. I can help you out with that.
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> - Original Message -
>> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
>> > Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Gluster Devel" <
>> gluster-de...@gluster.org>, "gluster-users"
>> > <gluster-users@gluster.org>
>> > Sent: Tuesday, April 25, 2017 10:58:41 AM
>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>> >
>> > Hi Kotresh,
>> >
>> > Could you please update whether it is possible to get the patch or
>> bakport
>> > this patch on Gluster 3.7.6 version.
>> >
>> > Regards,
>> > Abhishek
>> >
>> > On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com>
>> > wrote:
>> >
>> > > What is the way to take this patch on Gluster 3.7.6 or only way to
>> upgrade
>> > > the version?
>> > >
>> > > On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com
>> > > > wrote:
>> > >
>> > >> Hi Kotresh,
>> > >>
>> > >> I have seen the patch available on the link which you shared. It
>> seems we
>> > >> don't have some files in gluser 3.7.6 which you modified in the
>> patch.
>> > >>
>> > >> Is there any possibility to provide the patch for Gluster 3.7.6?
>> > >>
>> > >> Regards,
>> > >> Abhishek
>> > >>
>> > >> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
>> > >> khire...@redhat.com> wrote:
>> > >>
>> > >>> Hi Abhishek,
>> > >>>
>> > >>> Bitrot requires versioning of files to be down on writes.
>> > >>> This was being done irrespective of whether bitrot is
>> > >>> enabled or not. This takes considerable CPU. With the
>> > >>> fix https://review.gluster.org/#/c/14442/, it is made
>> > >>> optional and is enabled only with bitrot. If bitrot
>> > >>> is not enabled, then you won't see any setxattr/getxattrs
>> > >>> related to bitrot.
>> > >>>
>> > >>> The fix would be available in 3.11.
>> > >>>
>> > >>>
>> > >>> Thanks and Regards,
>> > >>> Kotresh H R
>> > >>>
>> > >>> - Original Message -
>> > >>> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> > >>> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > >>> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users"
>> <
>> > >>> gluster-users@gluster.org>, "Kotresh Hiremath
>> > >>> > Ravishankar" <khire...@redhat.com>
>> > >>> > Sent: Monday, April 24, 2017 11:30:57 AM
>> > >>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>> > >>> >
>> > >>> > Hi Kotresh,
>> > >>> >
>> > >>> > Could you please update me on this?
>> > >>> >
>> > >>> > Regards,
>> > >>> > Abhishek
>> > >>&g

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread ABHISHEK PALIWAL
Actually, it is very risky if it reproduces in production; that is why I said
it is high priority, as we want to resolve it before going to production.

On Tue, May 9, 2017 at 6:20 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Tue, May 9, 2017 at 6:10 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
> wrote:
>
>> Hi Atin,
>>
>> Thanks for your reply.
>>
>>
>> Its urgent because this error is very rarely reproducible we have seen
>> this 2 3 times in our system till now.
>>
>> We have delivery in near future so that we want it asap. Please try to
>> review it internally.
>>
>
> I don't think your statements justified the reason of urgency as (a) you
> have mentioned it to be *rarely* reproducible and (b) I am still waiting
> for a real use case where glusterd will go through multiple restarts in a
> loop?
>
>
>> Regards,
>> Abhishek
>>
>> On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> + Muthu-vingeshwaran
>>>>
>>>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Hi Atin/Team,
>>>>>
>>>>> We are using gluster-3.7.6 with setup of two brick and while restart
>>>>> of system I have seen that the glusterd daemon is getting failed from 
>>>>> start.
>>>>>
>>>>>
>>>>> At the time of analyzing the logs from etc-glusterfs...log file I
>>>>> have received the below logs
>>>>>
>>>>>
>>>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030]
>>>>> [glusterfsd.c:2348:main] 0-/usr/sbin/glusterd: Started running
>>>>> /usr/sbin/glusterd version 3.7.6 (args: /usr/sbin/glusterd -p
>>>>> /var/run/glusterd.pid --log-level INFO)
>>>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478] [glusterd.c:1350:init]
>>>>> 0-management: Maximum allowed open file descriptors set to 65536
>>>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
>>>>> 0-management: Using /system/glusterd as working directory
>>>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
>>>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
>>>>> retrieved op-version: 30706
>>>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
>>>>> [glusterd-store.c:2562:glusterd_store_update_volinfo] 0-management:
>>>>> Failed to get next store iter
>>>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]
>>>>> [glusterd-store.c:2844:glusterd_store_retrieve_volume] 0-management:
>>>>> Failed to update volinfo for c_glusterfs volume
>>>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]
>>>>> [glusterd-store.c:3042:glusterd_store_retrieve_volumes] 0-management:
>>>>> Unable to restore volume: c_glusterfs
>>>>> [2017-05-06 03:33:39.827722] E [MSGID: 101019]
>>>>> [xlator.c:428:xlator_init] 0-management: Initialization of volume
>>>>> 'management' failed, review your volfile again
>>>>> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_init]
>>>>> 0-management: initializing translator failed
>>>>> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_activate]
>>>>> 0-graph: init failed
>>>>> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_and_exit]
>>>>> (-->/usr/sbin/glusterd(glusterfs_volumes_init-0x1b0b8) [0x1000a648]
>>>>> -->/usr/sbin/glusterd(glusterfs_process_volfp-0x1b210) [0x1000a4d8]
>>>>> -->/usr/sbin/glusterd(cleanup_and_exit-0x1beac) [0x100097ac] ) 0-:
>>>>> received signum (0), shutting down
>>>>>
>>>>
>>> Abhishek,
>>>
>>> This patch needs to be thoroughly reviewed to ensure that it doesn't
>>> cause any regression given this touches on the core store management
>>> functionality of glusterd. AFAICT, we get into an empty info file only when
>>> volume set operation is executed and in parallel one of the glusterd
>>> instance in other nodes have been brought down and whole sequence of
>>> operation happens in a loop. The test case through which you can get into
>>> this situation is not something

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread ABHISHEK PALIWAL
Hi Atin,

Thanks for your reply.


It is urgent because this error is only rarely reproducible; we have seen it
2-3 times in our system so far.

We have a delivery in the near future, so we want it as soon as possible.
Please try to review it internally.

Regards,
Abhishek

On Tue, May 9, 2017 at 5:58 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Tue, May 9, 2017 at 3:37 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
> wrote:
>
>> + Muthu-vingeshwaran
>>
>> On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Atin/Team,
>>>
>>> We are using gluster-3.7.6 with setup of two brick and while restart of
>>> system I have seen that the glusterd daemon is getting failed from start.
>>>
>>>
>>> At the time of analyzing the logs from etc-glusterfs...log file I
>>> have received the below logs
>>>
>>>
>>> [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main]
>>> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6
>>> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
>>> [2017-05-06 03:33:39.807859] I [MSGID: 106478] [glusterd.c:1350:init]
>>> 0-management: Maximum allowed open file descriptors set to 65536
>>> [2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
>>> 0-management: Using /system/glusterd as working directory
>>> [2017-05-06 03:33:39.826833] I [MSGID: 106513]
>>> [glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd:
>>> retrieved op-version: 30706
>>> [2017-05-06 03:33:39.827515] E [MSGID: 106206]
>>> [glusterd-store.c:2562:glusterd_store_update_volinfo] 0-management:
>>> Failed to get next store iter
>>> [2017-05-06 03:33:39.827563] E [MSGID: 106207]
>>> [glusterd-store.c:2844:glusterd_store_retrieve_volume] 0-management:
>>> Failed to update volinfo for c_glusterfs volume
>>> [2017-05-06 03:33:39.827625] E [MSGID: 106201]
>>> [glusterd-store.c:3042:glusterd_store_retrieve_volumes] 0-management:
>>> Unable to restore volume: c_glusterfs
>>> [2017-05-06 03:33:39.827722] E [MSGID: 101019]
>>> [xlator.c:428:xlator_init] 0-management: Initialization of volume
>>> 'management' failed, review your volfile again
>>> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_init]
>>> 0-management: initializing translator failed
>>> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_activate]
>>> 0-graph: init failed
>>> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_and_exit]
>>> (-->/usr/sbin/glusterd(glusterfs_volumes_init-0x1b0b8) [0x1000a648]
>>> -->/usr/sbin/glusterd(glusterfs_process_volfp-0x1b210) [0x1000a4d8]
>>> -->/usr/sbin/glusterd(cleanup_and_exit-0x1beac) [0x100097ac] ) 0-:
>>> received signum (0), shutting down
>>>
>>
> Abhishek,
>
> This patch needs to be thoroughly reviewed to ensure that it doesn't cause
> any regression given this touches on the core store management
> functionality of glusterd. AFAICT, we get into an empty info file only when
> volume set operation is executed and in parallel one of the glusterd
> instance in other nodes have been brought down and whole sequence of
> operation happens in a loop. The test case through which you can get into
> this situation is not something you'd hit in production. Please help me to
> understand the urgency here.
>
> Also in one of the earlier thread, I did mention the workaround of this
> issue back to Xin through http://lists.gluster.org/
> pipermail/gluster-users/2017-January/029600.html
>
> "If you end up in having a 0 byte info file you'd need to copy the same info 
> file from other node and put it there and restart glusterd"
>
>
>>>
>>> I have found one of the existing case is there and also solution patch
>>> is available but the status of that patch in "cannot merge". Also the
>>> "info" file is empty and "info.tmp" file present in "lib/glusterd/vol"
>>> directory.
>>>
>>> Below is the link of the existing case.
>>>
>>> https://review.gluster.org/#/c/16279/5
>>>
>>> please let me know what is the plan of community to provide the solution
>>> of this problem and in which version.
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
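
For reference, the workaround Atin describes in his reply above (copying the
intact info file from a healthy peer and restarting glusterd) could look
roughly like the following sketch. The volume name, the peer hostname and the
glusterd working directory are assumptions; the logs in this thread show
/system/glusterd as the working directory, while the usual default is
/var/lib/glusterd.

VOL=c_glusterfs                  # assumed volume name
WORKDIR=/system/glusterd         # adjust; default installs use /var/lib/glusterd
GOOD_PEER=peer2                  # assumed hostname of a node with a good copy

# Confirm the local info file really is empty before overwriting anything.
ls -l "$WORKDIR/vols/$VOL/info" "$WORKDIR/vols/$VOL/info.tmp"

# Copy the intact info file from the healthy peer over the 0-byte one.
scp "root@$GOOD_PEER:$WORKDIR/vols/$VOL/info" "$WORKDIR/vols/$VOL/info"

# Restart glusterd and check that the volume is restored.
/etc/init.d/glusterd restart 2>/dev/null || systemctl restart glusterd
gluster volume info "$VOL"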

Re: [Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread ABHISHEK PALIWAL
+ Muthu-vingeshwaran

On Tue, May 9, 2017 at 11:30 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Atin/Team,
>
> We are using gluster-3.7.6 with setup of two brick and while restart of
> system I have seen that the glusterd daemon is getting failed from start.
>
>
> At the time of analyzing the logs from etc-glusterfs...log file I have
> received the below logs
>
>
> [2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main]
> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6
> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
> [2017-05-06 03:33:39.807859] I [MSGID: 106478] [glusterd.c:1350:init]
> 0-management: Maximum allowed open file descriptors set to 65536
> [2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
> 0-management: Using /system/glusterd as working directory
> [2017-05-06 03:33:39.826833] I [MSGID: 106513] 
> [glusterd-store.c:2047:glusterd_restore_op_version]
> 0-glusterd: retrieved op-version: 30706
> [2017-05-06 03:33:39.827515] E [MSGID: 106206] 
> [glusterd-store.c:2562:glusterd_store_update_volinfo]
> 0-management: Failed to get next store iter
> [2017-05-06 03:33:39.827563] E [MSGID: 106207] [glusterd-store.c:2844:
> glusterd_store_retrieve_volume] 0-management: Failed to update volinfo
> for c_glusterfs volume
> [2017-05-06 03:33:39.827625] E [MSGID: 106201] [glusterd-store.c:3042:
> glusterd_store_retrieve_volumes] 0-management: Unable to restore volume:
> c_glusterfs
> [2017-05-06 03:33:39.827722] E [MSGID: 101019] [xlator.c:428:xlator_init]
> 0-management: Initialization of volume 'management' failed, review your
> volfile again
> [2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_init]
> 0-management: initializing translator failed
> [2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_activate]
> 0-graph: init failed
> [2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_and_exit]
> (-->/usr/sbin/glusterd(glusterfs_volumes_init-0x1b0b8) [0x1000a648]
> -->/usr/sbin/glusterd(glusterfs_process_volfp-0x1b210) [0x1000a4d8]
> -->/usr/sbin/glusterd(cleanup_and_exit-0x1beac) [0x100097ac] ) 0-:
> received signum (0), shutting down
>
>
> I have found one of the existing case is there and also solution patch is
> available but the status of that patch in "cannot merge". Also the "info"
> file is empty and "info.tmp" file present in "lib/glusterd/vol" directory.
>
> Below is the link of the existing case.
>
> https://review.gluster.org/#/c/16279/5
>
> please let me know what is the plan of community to provide the solution
> of this problem and in which version.
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Empty info file preventing glusterd from starting

2017-05-09 Thread ABHISHEK PALIWAL
Hi Atin/Team,

We are using gluster-3.7.6 with a two-brick setup, and on a system restart I
have seen that the glusterd daemon fails to start.


While analyzing the logs from the etc-glusterfs...log file I found the below
entries:


[2017-05-06 03:33:39.798087] I [MSGID: 100030] [glusterfsd.c:2348:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.7.6
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO)
[2017-05-06 03:33:39.807859] I [MSGID: 106478] [glusterd.c:1350:init]
0-management: Maximum allowed open file descriptors set to 65536
[2017-05-06 03:33:39.807974] I [MSGID: 106479] [glusterd.c:1399:init]
0-management: Using /system/glusterd as working directory
[2017-05-06 03:33:39.826833] I [MSGID: 106513]
[glusterd-store.c:2047:glusterd_restore_op_version] 0-glusterd: retrieved
op-version: 30706
[2017-05-06 03:33:39.827515] E [MSGID: 106206]
[glusterd-store.c:2562:glusterd_store_update_volinfo] 0-management: Failed
to get next store iter
[2017-05-06 03:33:39.827563] E [MSGID: 106207]
[glusterd-store.c:2844:glusterd_store_retrieve_volume] 0-management: Failed
to update volinfo for c_glusterfs volume
[2017-05-06 03:33:39.827625] E [MSGID: 106201]
[glusterd-store.c:3042:glusterd_store_retrieve_volumes] 0-management:
Unable to restore volume: c_glusterfs
[2017-05-06 03:33:39.827722] E [MSGID: 101019] [xlator.c:428:xlator_init]
0-management: Initialization of volume 'management' failed, review your
volfile again
[2017-05-06 03:33:39.827762] E [graph.c:322:glusterfs_graph_init]
0-management: initializing translator failed
[2017-05-06 03:33:39.827784] E [graph.c:661:glusterfs_graph_activate]
0-graph: init failed
[2017-05-06 03:33:39.828396] W [glusterfsd.c:1238:cleanup_and_exit]
(-->/usr/sbin/glusterd(glusterfs_volumes_init-0x1b0b8) [0x1000a648]
-->/usr/sbin/glusterd(glusterfs_process_volfp-0x1b210) [0x1000a4d8]
-->/usr/sbin/glusterd(cleanup_and_exit-0x1beac) [0x100097ac] ) 0-: received
signum (0), shutting down


I have found an existing case for this, and a solution patch is available, but
the status of that patch is "cannot merge". Also, the "info" file is empty and
an "info.tmp" file is present in the "lib/glusterd/vol" directory.

Below is the link of the existing case.

https://review.gluster.org/#/c/16279/5

Please let me know what the community's plan is for providing a solution to
this problem, and in which version.

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
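
A quick way to confirm the condition described above (an empty "info" file with
a leftover "info.tmp") is to search the glusterd working directory for
zero-byte volume store files. The path below assumes the default
/var/lib/glusterd layout, so adjust it if your installation uses a different
working directory.

# List any zero-byte volume store files left behind by the interrupted update.
find /var/lib/glusterd/vols -maxdepth 2 \( -name info -o -name info.tmp \) -size 0 -ls

# The matching "Failed to get next store iter" errors are typically logged in
# /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.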

Re: [Gluster-users] High load on glusterfsd process

2017-04-25 Thread ABHISHEK PALIWAL
Thanks Kotresh.

Let me discuss with my team and I will let you know.

Regards,
Abhishek

On Tue, Apr 25, 2017 at 12:41 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi Abhishek,
>
> As this is an enhancement it won't be back-ported to 3.7/3.8/3.10.
> It will only be available from the upcoming 3.11 release.
>
> But I did try applying it to 3.7.6; it has a lot of conflicts.
> If it's important for you, you can upgrade to the latest version
> available and back-port it. If it's impossible to upgrade to the
> latest version, at least 3.7.20 would do. It has minimal
> conflicts. I can help you out with that.
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Kotresh Hiremath Ravishankar" <khire...@redhat.com>
> > Cc: "Pranith Kumar Karampuri" <pkara...@redhat.com>, "Gluster Devel" <
> gluster-de...@gluster.org>, "gluster-users"
> > <gluster-users@gluster.org>
> > Sent: Tuesday, April 25, 2017 10:58:41 AM
> > Subject: Re: [Gluster-users] High load on glusterfsd process
> >
> > Hi Kotresh,
> >
> > Could you please update whether it is possible to get the patch or
> bakport
> > this patch on Gluster 3.7.6 version.
> >
> > Regards,
> > Abhishek
> >
> > On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com>
> > wrote:
> >
> > > What is the way to take this patch on Gluster 3.7.6 or only way to
> upgrade
> > > the version?
> > >
> > > On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com
> > > > wrote:
> > >
> > >> Hi Kotresh,
> > >>
> > >> I have seen the patch available on the link which you shared. It
> seems we
> > >> don't have some files in gluser 3.7.6 which you modified in the patch.
> > >>
> > >> Is there any possibility to provide the patch for Gluster 3.7.6?
> > >>
> > >> Regards,
> > >> Abhishek
> > >>
> > >> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
> > >> khire...@redhat.com> wrote:
> > >>
> > >>> Hi Abhishek,
> > >>>
> > >>> Bitrot requires versioning of files to be done on writes.
> > >>> This was being done irrespective of whether bitrot is
> > >>> enabled or not. This takes considerable CPU. With the
> > >>> fix https://review.gluster.org/#/c/14442/, it is made
> > >>> optional and is enabled only with bitrot. If bitrot
> > >>> is not enabled, then you won't see any setxattr/getxattrs
> > >>> related to bitrot.
> > >>>
> > >>> The fix would be available in 3.11.
> > >>>
> > >>>
> > >>> Thanks and Regards,
> > >>> Kotresh H R
> > >>>
> > >>> - Original Message -
> > >>> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > >>> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > >>> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
> > >>> gluster-users@gluster.org>, "Kotresh Hiremath
> > >>> > Ravishankar" <khire...@redhat.com>
> > >>> > Sent: Monday, April 24, 2017 11:30:57 AM
> > >>> > Subject: Re: [Gluster-users] High load on glusterfsd process
> > >>> >
> > >>> > Hi Kotresh,
> > >>> >
> > >>> > Could you please update me on this?
> > >>> >
> > >>> > Regards,
> > >>> > Abhishek
> > >>> >
> > >>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> > >>> > pkara...@redhat.com> wrote:
> > >>> >
> > >>> > > +Kotresh who seems to have worked on the bug you mentioned.
> > >>> > >
> > >>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> > >>> > > abhishpali...@gmail.com> wrote:
> > >>> > >
> > >>> > >>
> > >>> > >> If the patch provided in that case will resolve my bug as well
> then
> > >>> > >> please provide the patch so that I will backport it on 3.7.6
> > >>> >
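
For anyone taking the back-port route Kotresh suggests above, fetching a change
from Gerrit and cherry-picking it onto a local 3.7.x branch looks roughly like
the sketch below. The patch-set number is only an example, and on 3.7.6 the
pick is expected to conflict heavily, which is why 3.7.20 is used as the base.

# Check out the release branch to back-port onto (v3.7.20 assumed as the base).
git clone https://github.com/gluster/glusterfs.git && cd glusterfs
git checkout -b backport-14442 v3.7.20

# Gerrit publishes change 14442 under refs/changes/<last-two-digits>/<change>/<patch-set>.
git fetch https://review.gluster.org/glusterfs refs/changes/42/14442/9
git cherry-pick FETCH_HEAD   # resolve conflicts, rebuild and test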

Re: [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
Hi Kotresh,

Could you please update me on whether it is possible to get the patch or
backport this patch onto the Gluster 3.7.6 version?

Regards,
Abhishek

On Mon, Apr 24, 2017 at 6:14 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> What is the way to take this patch on Gluster 3.7.6 or only way to upgrade
> the version?
>
> On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Kotresh,
>>
>> I have seen the patch available on the link which you shared. It seems we
>> don't have some files in gluser 3.7.6 which you modified in the patch.
>>
>> Is there any possibility to provide the patch for Gluster 3.7.6?
>>
>> Regards,
>> Abhishek
>>
>> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
>> khire...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>>> Bitrot requires versioning of files to be done on writes.
>>> This was being done irrespective of whether bitrot is
>>> enabled or not. This takes considerable CPU. With the
>>> fix https://review.gluster.org/#/c/14442/, it is made
>>> optional and is enabled only with bitrot. If bitrot
>>> is not enabled, then you won't see any setxattr/getxattrs
>>> related to bitrot.
>>>
>>> The fix would be available in 3.11.
>>>
>>>
>>> Thanks and Regards,
>>> Kotresh H R
>>>
>>> - Original Message -
>>> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>>> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>>> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
>>> gluster-users@gluster.org>, "Kotresh Hiremath
>>> > Ravishankar" <khire...@redhat.com>
>>> > Sent: Monday, April 24, 2017 11:30:57 AM
>>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>>> >
>>> > Hi Kotresh,
>>> >
>>> > Could you please update me on this?
>>> >
>>> > Regards,
>>> > Abhishek
>>> >
>>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
>>> > pkara...@redhat.com> wrote:
>>> >
>>> > > +Kotresh who seems to have worked on the bug you mentioned.
>>> > >
>>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
>>> > > abhishpali...@gmail.com> wrote:
>>> > >
>>> > >>
>>> > >> If the patch provided in that case will resolve my bug as well then
>>> > >> please provide the patch so that I will backport it on 3.7.6
>>> > >>
>>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>>> > >> abhishpali...@gmail.com> wrote:
>>> > >>
>>> > >>> Hi Team,
>>> > >>>
>>> > >>> I have noticed that there are so many glusterfsd threads are
>>> running in
>>> > >>> my system and we observed some of those thread consuming more cpu.
>>> I
>>> > >>> did “strace” on two such threads (before the problem disappeared by
>>> > >>> itself)
>>> > >>> and found that there is a continuous activity like below:
>>> > >>>
>>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170
>>> 126T113552+.log.gz",
>>> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_2
>>> 0170126T113552+.log.gz",
>>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_2
>>> 0170126T113552+.log.gz",
>>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>>> data
>>> > >>> available)
>>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>

Re: [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
What is the way to apply this patch to Gluster 3.7.6, or is the only way to
upgrade the version?

On Mon, Apr 24, 2017 at 3:22 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Kotresh,
>
> I have seen the patch available on the link which you shared. It seems we
> don't have some files in gluser 3.7.6 which you modified in the patch.
>
> Is there any possibility to provide the patch for Gluster 3.7.6?
>
> Regards,
> Abhishek
>
> On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
> khire...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> Bitrot requires versioning of files to be done on writes.
>> This was being done irrespective of whether bitrot is
>> enabled or not. This takes considerable CPU. With the
>> fix https://review.gluster.org/#/c/14442/, it is made
>> optional and is enabled only with bitrot. If bitrot
>> is not enabled, then you won't see any setxattr/getxattrs
>> related to bitrot.
>>
>> The fix would be available in 3.11.
>>
>>
>> Thanks and Regards,
>> Kotresh H R
>>
>> - Original Message -
>> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
>> gluster-users@gluster.org>, "Kotresh Hiremath
>> > Ravishankar" <khire...@redhat.com>
>> > Sent: Monday, April 24, 2017 11:30:57 AM
>> > Subject: Re: [Gluster-users] High load on glusterfsd process
>> >
>> > Hi Kotresh,
>> >
>> > Could you please update me on this?
>> >
>> > Regards,
>> > Abhishek
>> >
>> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
>> > pkara...@redhat.com> wrote:
>> >
>> > > +Kotresh who seems to have worked on the bug you mentioned.
>> > >
>> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
>> > > abhishpali...@gmail.com> wrote:
>> > >
>> > >>
>> > >> If the patch provided in that case will resolve my bug as well then
>> > >> please provide the patch so that I will backport it on 3.7.6
>> > >>
>> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>> > >> abhishpali...@gmail.com> wrote:
>> > >>
>> > >>> Hi Team,
>> > >>>
>> > >>> I have noticed that there are so many glusterfsd threads are
>> running in
>> > >>> my system and we observed some of those thread consuming more cpu. I
>> > >>> did “strace” on two such threads (before the problem disappeared by
>> > >>> itself)
>> > >>> and found that there is a continuous activity like below:
>> > >>>
>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170
>> 126T113552+.log.gz",
>> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
>> 20170126T113552+.log.gz",
>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
>> 20170126T113552+.log.gz",
>> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>> > >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T
>> 180550+.log.gz",
>> > >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
>> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170
>> 123T180550+.log.gz",
>> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
>> data
>> > >>> available)
>> > >>> lgetxattr("/opt/lvmdir/c2

Re: [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
Hi Kotresh,

I have seen the patch available at the link you shared. It seems gluster 3.7.6
does not have some of the files you modified in the patch.

Is there any possibility of providing the patch for Gluster 3.7.6?

Regards,
Abhishek

On Mon, Apr 24, 2017 at 3:07 PM, Kotresh Hiremath Ravishankar <
khire...@redhat.com> wrote:

> Hi Abhishek,
>
> Bitrot requires versioning of files to be done on writes.
> This was being done irrespective of whether bitrot is
> enabled or not. This takes considerable CPU. With the
> fix https://review.gluster.org/#/c/14442/, it is made
> optional and is enabled only with bitrot. If bitrot
> is not enabled, then you won't see any setxattr/getxattrs
> related to bitrot.
>
> The fix would be available in 3.11.
>
>
> Thanks and Regards,
> Kotresh H R
>
> - Original Message -
> > From: "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> > To: "Pranith Kumar Karampuri" <pkara...@redhat.com>
> > Cc: "Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
> gluster-users@gluster.org>, "Kotresh Hiremath
> > Ravishankar" <khire...@redhat.com>
> > Sent: Monday, April 24, 2017 11:30:57 AM
> > Subject: Re: [Gluster-users] High load on glusterfsd process
> >
> > Hi Kotresh,
> >
> > Could you please update me on this?
> >
> > Regards,
> > Abhishek
> >
> > On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
> > pkara...@redhat.com> wrote:
> >
> > > +Kotresh who seems to have worked on the bug you mentioned.
> > >
> > > On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> > > abhishpali...@gmail.com> wrote:
> > >
> > >>
> > >> If the patch provided in that case will resolve my bug as well then
> > >> please provide the patch so that I will backport it on 3.7.6
> > >>
> > >> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
> > >> abhishpali...@gmail.com> wrote:
> > >>
> > >>> Hi Team,
> > >>>
> > >>> I have noticed that there are so many glusterfsd threads are running
> in
> > >>> my system and we observed some of those thread consuming more cpu. I
> > >>> did “strace” on two such threads (before the problem disappeared by
> > >>> itself)
> > >>> and found that there is a continuous activity like below:
> > >>>
> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> > >>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_
> 20170126T113552+.log.gz",
> > >>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-
> 425_20170126T113552+.log.gz",
> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-
> 425_20170126T113552+.log.gz",
> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
> > >>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
> > >>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_
> 20170123T180550+.log.gz",
> > >>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No
> data
> > >>> available)
> > >>>
> > >>> I have found the below existing issue which is very similar to my
> > >>> scenario.
> > >>>
> > >>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
> > >>>
> > >>> We are using the gluster-3.7.6 and it seems that the issue is fixed
> in
> > >>> 3.8.4 version.
> > >>>
> > >>> Could you please let me know why it showing the number of above logs
> and
> > >>> reason behind it as it is not explained in the above bug.
> > >>>
> > >>> Regards,
> > >>> Abhishek
> > >>>
> > >>> --
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> Regards
> > >>> Abhishek Paliwal
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >>
> > >>
> > >>
> > >>
> > >> Regards
> > >> Abhishek Paliwal
> > >>
> > >> ___
> > >> Gluster-users mailing list
> > >> Gluster-users@gluster.org
> > >> http://lists.gluster.org/mailman/listinfo/gluster-users
> > >>
> > >
> > >
> > >
> > > --
> > > Pranith
> > >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> >
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
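
To relate Kotresh's explanation above to a running system, you can check
whether bitrot was ever enabled on the volume and dump the bit-rot xattrs
directly on a brick file. A rough sketch, assuming the volume name and brick
path used in this thread; the file name is just a placeholder.

# Bitrot only appears under "Options Reconfigured" if it was ever turned on.
gluster volume info c_glusterfs | grep -i bitrot

# Dump any bit-rot related xattrs on one brick file; with bitrot disabled the
# lookups seen in the strace output are expected to return ENODATA.
getfattr -d -m 'trusted.bit-rot' -e hex --absolute-names \
    /opt/lvmdir/c2/brick/path/to/some-file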

Re: [Gluster-users] High load on glusterfsd process

2017-04-24 Thread ABHISHEK PALIWAL
Hi Kotresh,

Could you please update me on this?

Regards,
Abhishek

On Sat, Apr 22, 2017 at 12:31 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> +Kotresh who seems to have worked on the bug you mentioned.
>
> On Fri, Apr 21, 2017 at 12:21 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>>
>> If the patch provided in that case will resolve my bug as well then
>> please provide the patch so that I will backport it on 3.7.6
>>
>> On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I have noticed that there are so many glusterfsd threads are running in
>>> my system and we observed some of those thread consuming more cpu. I
>>> did “strace” on two such threads (before the problem disappeared by itself)
>>> and found that there is a continuous activity like below:
>>>
>>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>> dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
>>> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
>>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
>>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
>>> available)
>>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
>>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
>>> available)
>>> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4
>>> dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
>>> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
>>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
>>> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
>>> available)
>>> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92
>>> f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
>>> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
>>> available)
>>>
>>> I have found the below existing issue which is very similar to my
>>> scenario.
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
>>>
>>> We are using the gluster-3.7.6 and it seems that the issue is fixed in
>>> 3.8.4 version.
>>>
>>> Could you please let me know why it showing the number of above logs and
>>> reason behind it as it is not explained in the above bug.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] High load on glusterfsd process

2017-04-21 Thread ABHISHEK PALIWAL
If the patch provided in that case will resolve my bug as well, then please
provide the patch so that I can backport it onto 3.7.6.

On Fri, Apr 21, 2017 at 11:30 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> I have noticed that there are so many glusterfsd threads are running in my
> system and we observed some of those thread consuming more cpu. I did
> “strace” on two such threads (before the problem disappeared by itself) and
> found that there is a continuous activity like below:
>
> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-
> 4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> {st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-
> 246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
> available)
> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-
> 246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
> available)
> lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-
> 4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> {st_mode=S_IFREG|0670, st_size=169, ...}) = 0
> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-
> 246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> "trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
> available)
> lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-
> 246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
> "trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
> available)
>
> I have found the below existing issue which is very similar to my scenario.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1298258
>
> We are using the gluster-3.7.6 and it seems that the issue is fixed in
> 3.8.4 version.
>
> Could you please let me know why it showing the number of above logs and
> reason behind it as it is not explained in the above bug.
>
> Regards,
> Abhishek
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] High load on glusterfsd process

2017-04-21 Thread ABHISHEK PALIWAL
Hi Team,

I have noticed that many glusterfsd threads are running on my system, and we
observed some of those threads consuming a lot of CPU. I ran “strace” on two
such threads (before the problem disappeared by itself) and found continuous
activity like the below:

lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
{st_mode=S_IFREG|0670, st_size=1995, ...}) = 0
lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
"trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
available)
lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/firewall_-J208482-425_20170126T113552+.log.gz",
"trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
available)
lstat("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
{st_mode=S_IFREG|0670, st_size=169, ...}) = 0
lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
"trusted.bit-rot.bad-file", 0x3fff81f58550, 255) = -1 ENODATA (No data
available)
lgetxattr("/opt/lvmdir/c2/brick/.glusterfs/e7/7d/e77d12b3-92f8-4dfe-9a7f-246e901cbdf1/002700/tcli_-J208482-425_20170123T180550+.log.gz",
"trusted.bit-rot.signature", 0x3fff81f58550, 255) = -1 ENODATA (No data
available)

I have found the below existing issue which is very similar to my scenario.

https://bugzilla.redhat.com/show_bug.cgi?id=1298258

We are using gluster-3.7.6, and it seems that the issue is fixed in version
3.8.4.

Could you please let me know why it is producing the above logs in such
numbers, and the reason behind it, as this is not explained in the above bug.

Regards,
Abhishek

-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
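
The per-thread CPU usage and the strace capture described above can be
reproduced with standard tools; a small sketch, with the thread id as a
placeholder:

# Find the brick process and watch its threads sorted by CPU usage.
pidof glusterfsd
top -H -p "$(pidof glusterfsd | awk '{print $1}')"

# Attach strace to one busy thread id (TID) taken from the top output; 1234 is
# a placeholder. This is roughly how the lstat/lgetxattr loop above was captured.
strace -tt -p 1234 -e trace=lstat,lgetxattr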

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
There is no need, but it could happen accidentally, and I think it should be
protected against or should not be permissible.



On Mon, Apr 17, 2017 at 8:36 AM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Mon, 17 Apr 2017 at 08:23, ABHISHEK PALIWAL <abhishpali...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> Here we have below steps to reproduce the issue
>>
>> Reproduction steps:
>>
>>
>>
>> root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
>> - create the gluster volume
>>
>> volume create: brick: success: please start the volume to access data
>>
>> root@128:~# gluster volume set brick nfs.disable true
>>
>> volume set: success
>>
>> root@128:~# gluster volume start brick
>>
>> volume start: brick: success
>>
>> root@128:~# gluster volume info
>>
>> Volume Name: brick
>>
>> Type: Distribute
>>
>> Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3
>>
>> Status: Started
>>
>> Number of Bricks: 1
>>
>> Transport-type: tcp
>>
>> Bricks:
>>
>> Brick1: 128.224.95.140:/tmp/brick
>>
>> Options Reconfigured:
>>
>> nfs.disable: true
>>
>> performance.readdir-ahead: on
>>
>> root@128:~# gluster volume status
>>
>> Status of volume: brick
>>
>> Gluster process TCP Port RDMA Port Online Pid
>>
>> 
>> --
>>
>> Brick 128.224.95.140:/tmp/brick 49155 0 Y 768
>>
>>
>>
>> Task Status of Volume brick
>>
>> 
>> --
>>
>> There are no active volume tasks
>>
>>
>>
>> root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/
>>
>> root@128:~# cd gluster/
>>
>> root@128:~/gluster# du -sh
>>
>> 0 .
>>
>> root@128:~/gluster# mkdir -p test/
>>
>> root@128:~/gluster# cp ~/tmp.file gluster/
>>
>> root@128:~/gluster# cp tmp.file test
>>
>> root@128:~/gluster# cd /tmp/brick
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K test
>>
>> 768K tmp.file
>>
>> root@128:/tmp/brick# rm -rf test - delete the test directory and
>> data in the server side, not reasonable
>>
>> root@128:/tmp/brick# ls
>>
>> tmp.file
>>
>> root@128:/tmp/brick# du -sh *
>>
>> 768K tmp.file
>>
>> *root@128:/tmp/brick# du -sh (brick dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd .glusterfs/
>>
>> root@128:/tmp/brick/.glusterfs# du -sh *
>>
>> 0 00
>>
>> 0 2a
>>
>> 0 bb
>>
>> 768K c8
>>
>> 0 c9
>>
>> 0 changelogs
>>
>> 768K d0
>>
>> 4.0K health_check
>>
>> 0 indices
>>
>> 0 landfill
>>
>> *root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*
>>
>> *1.6M .*
>>
>> root@128:/tmp/brick# cd ~/gluster
>>
>> root@128:~/gluster# ls
>>
>> tmp.file
>>
>> *root@128:~/gluster# du -sh * (Mount dir)*
>>
>> *768K tmp.file*
>>
>>
>>
>> In the reproduce steps, we delete the test directory in the server side,
>> not in the client side. I think this delete operation is not reasonable.
>> Please ask the customer to check whether they do this unreasonable
>> operation.
>>
>
> What's the need of deleting data from backend (i.e bricks) directly?
>
>
>> *It seems while deleting the data from BRICK, metadata will not deleted
>> from .glusterfs directory.*
>>
>>
>> *I don't know whether it is a bug of limitations, please let us know
>> about this?*
>>
>>
>> Regards,
>>
>> Abhishek
>>
>>
>> On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>>
>>>
>>> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> yes it is ext4. but what is the impact of this.
>>>>
>>>
>>> Did you have a lot of data before and you deleted all that data? ext4 if
>>> I remember correctly doesn't decrease size of directory once it expands it.
>>> So in ext4 inside a directory if you create lots and lots of files and
>>> delete them all, the directory size would increase at the time of creation
>>> but won't decreas
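
Pranith's point about ext4 directory sizes, quoted just above, is easy to check
empirically. A small sketch on a scratch ext4 directory; the exact sizes
reported will vary with the filesystem and kernel:

mkdir /tmp/dirsize-test && cd /tmp/dirsize-test
stat -c 'empty dir:    %s bytes' .
for i in $(seq 1 10000); do : > "file.$i"; done
stat -c 'after create: %s bytes' .
rm -f file.*
stat -c 'after delete: %s bytes' .   # on ext4 this typically stays at the grown size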

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-16 Thread ABHISHEK PALIWAL
Hi All,

Here are the steps we used to reproduce the issue.

Reproduction steps:



root@128:~# gluster volume create brick 128.224.95.140:/tmp/brick force
- create the gluster volume

volume create: brick: success: please start the volume to access data

root@128:~# gluster volume set brick nfs.disable true

volume set: success

root@128:~# gluster volume start brick

volume start: brick: success

root@128:~# gluster volume info

Volume Name: brick

Type: Distribute

Volume ID: a59b479a-2b21-426d-962a-79d6d294fee3

Status: Started

Number of Bricks: 1

Transport-type: tcp

Bricks:

Brick1: 128.224.95.140:/tmp/brick

Options Reconfigured:

nfs.disable: true

performance.readdir-ahead: on

root@128:~# gluster volume status

Status of volume: brick

Gluster process TCP Port RDMA Port Online Pid


--

Brick 128.224.95.140:/tmp/brick 49155 0 Y 768



Task Status of Volume brick


--

There are no active volume tasks



root@128:~# mount -t glusterfs 128.224.95.140:/brick gluster/

root@128:~# cd gluster/

root@128:~/gluster# du -sh

0 .

root@128:~/gluster# mkdir -p test/

root@128:~/gluster# cp ~/tmp.file gluster/

root@128:~/gluster# cp tmp.file test

root@128:~/gluster# cd /tmp/brick

root@128:/tmp/brick# du -sh *

768K test

768K tmp.file

root@128:/tmp/brick# rm -rf test - delete the test directory and
data in the server side, not reasonable

root@128:/tmp/brick# ls

tmp.file

root@128:/tmp/brick# du -sh *

768K tmp.file

*root@128:/tmp/brick# du -sh (brick dir)*

*1.6M .*

root@128:/tmp/brick# cd .glusterfs/

root@128:/tmp/brick/.glusterfs# du -sh *

0 00

0 2a

0 bb

768K c8

0 c9

0 changelogs

768K d0

4.0K health_check

0 indices

0 landfill

*root@128:/tmp/brick/.glusterfs# du -sh (.glusterfs dir)*

*1.6M .*

root@128:/tmp/brick# cd ~/gluster

root@128:~/gluster# ls

tmp.file

*root@128:~/gluster# du -sh * (Mount dir)*

*768K tmp.file*



In the reproduction steps, we delete the test directory on the server side,
not on the client side. I think this delete operation is not reasonable.
Please ask the customer to check whether they perform this unreasonable
operation.


*It seems that when data is deleted directly from the BRICK, the metadata is
not deleted from the .glusterfs directory.*


*I don't know whether this is a bug or a limitation; please let us know.*


Regards,

Abhishek


On Thu, Apr 13, 2017 at 2:29 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

>
>
> On Thu, Apr 13, 2017 at 12:19 PM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> yes it is ext4. but what is the impact of this.
>>
>
> Did you have a lot of data before and you deleted all that data? ext4 if I
> remember correctly doesn't decrease size of directory once it expands it.
> So in ext4 inside a directory if you create lots and lots of files and
> delete them all, the directory size would increase at the time of creation
> but won't decrease after deletion. I don't have any system with ext4 at the
> moment to test it now. This is something we faced 5-6 years back but not
> sure if it is fixed in ext4 in the latest releases.
>
>
>>
>> On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
>> pkara...@redhat.com> wrote:
>>
>>> Yes
>>>
>>> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Means the fs where this brick has been created?
>>>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <pkara...@redhat.com>
>>>> wrote:
>>>>
>>>>> Is your backend filesystem ext4?
>>>>>
>>>>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> No,we are not using sharding
>>>>>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <a...@metalit.com> wrote:
>>>>>>
>>>>>>> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>>>>>>>
>>>>>>> I have did more investigation and find out that brick dir size is
>>>>>>> equivalent to gluster mount point but .glusterfs having too much 
>>>>>>> difference
>>>>>>>
>>>>>>>
>>>>>>> You are probably using sharding?
>>>>>>>
>>>>>>>
>>>>>>> Buon lavoro.
>>>>>>> *Alessandro Briosi*
>>>>>>>
>>>>>>> *METAL.it Nord S.r.l.*
>>>>>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>>>>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>>>>>> www.metalit.com
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> ___
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Pranith
>>>>>
>>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
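
The behaviour shown in the reproduction above follows from how gluster stores
files: every regular file on the brick has a hard link under
.glusterfs/<xx>/<yy>/<gfid>, so removing the file directly from the brick
leaves that hard link (and its space) behind. A rough way to spot such
orphaned entries, using the brick path from the reproduction; treat it as a
diagnostic sketch and double-check each result before deleting anything.

BRICK=/tmp/brick   # brick path used in the reproduction above

# Regular gfid files whose only remaining link is the .glusterfs entry are
# candidates for data that was removed directly from the brick (a few internal
# files such as health_check may also match).
find "$BRICK/.glusterfs" -type f -links 1 ! -name health_check \
     ! -path '*/indices/*' -print

# Directories are represented as symlinks under .glusterfs, so they are not
# reported by the -type f check above.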

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-13 Thread ABHISHEK PALIWAL
Yes, it is ext4. But what is the impact of this?

On Thu, Apr 13, 2017 at 9:26 AM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> Yes
>
> On Thu, Apr 13, 2017 at 8:21 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Means the fs where this brick has been created?
>> On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <pkara...@redhat.com>
>> wrote:
>>
>>> Is your backend filesystem ext4?
>>>
>>> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> No,we are not using sharding
>>>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <a...@metalit.com> wrote:
>>>>
>>>>> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>>>>>
>>>>> I have did more investigation and find out that brick dir size is
>>>>> equivalent to gluster mount point but .glusterfs having too much 
>>>>> difference
>>>>>
>>>>>
>>>>> You are probably using sharding?
>>>>>
>>>>>
>>>>> Buon lavoro.
>>>>> *Alessandro Briosi*
>>>>>
>>>>> *METAL.it Nord S.r.l.*
>>>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>>>> www.metalit.com
>>>>>
>>>>>
>>>>>
>>>>
>>>> ___
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>>>
>>> --
>>> Pranith
>>>
>>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-12 Thread ABHISHEK PALIWAL
Do you mean the fs where this brick has been created?
On Apr 13, 2017 8:19 AM, "Pranith Kumar Karampuri" <pkara...@redhat.com>
wrote:

> Is your backend filesystem ext4?
>
> On Thu, Apr 13, 2017 at 6:29 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> No,we are not using sharding
>> On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <a...@metalit.com> wrote:
>>
>>> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>>>
>>> I have did more investigation and find out that brick dir size is
>>> equivalent to gluster mount point but .glusterfs having too much difference
>>>
>>>
>>> You are probably using sharding?
>>>
>>>
>>> Buon lavoro.
>>> *Alessandro Briosi*
>>>
>>> *METAL.it Nord S.r.l.*
>>> Via Maioliche 57/C - 38068 Rovereto (TN)
>>> Tel.+39.0464.430130 - Fax +39.0464.437393
>>> www.metalit.com
>>>
>>>
>>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-12 Thread ABHISHEK PALIWAL
No, we are not using sharding.
On Apr 12, 2017 7:29 PM, "Alessandro Briosi" <a...@metalit.com> wrote:

> Il 12/04/2017 14:16, ABHISHEK PALIWAL ha scritto:
>
> I have did more investigation and find out that brick dir size is
> equivalent to gluster mount point but .glusterfs having too much difference
>
>
> You are probably using sharding?
>
>
> Buon lavoro.
> *Alessandro Briosi*
>
> *METAL.it Nord S.r.l.*
> Via Maioliche 57/C - 38068 Rovereto (TN)
> Tel.+39.0464.430130 - Fax +39.0464.437393
> www.metalit.com
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-12 Thread ABHISHEK PALIWAL
I have done more investigation and found that the brick directory contents
match the gluster mount point in size, but .glusterfs shows a much larger usage.

/opt/lvmdir/c2/brick
# du -sch *
96K     RNC_Exceptions
36K     configuration
63M     java
176K    license
471M    loadmodules
127M    loadmodules_norepl
246M    loadmodules_tftp
7.1M    logfiles
8.0K    lost+found
24K     node_id
16K     pm_data
1.6M    pmd
9.2M    public_html
268K    rnc
80K     security
1.6M    systemfiles
292K    tmp
80K     toplog.txt
1.5M    up
8.0K    usr
8.0K    xb
927M    total
# du -sch .glusterfs/
3.3G    .glusterfs/
3.3G    total


Could anyone explain why .glusterfs holds 3.3G of data when the brick data is only 927M?

Regards,
Abhishek

On Fri, Apr 7, 2017 at 3:04 PM, Ashish Pandey <aspan...@redhat.com> wrote:

>
> If you are creating a fresh volume, then it is your responsibility to have
> clean bricks.
> I don't think gluster will give you a guarantee that it will be cleaned up.
>
> So you have to investigate if you have any previous data on that brick.
> What I meant was that you have to find out the number of files you see on
> your mount point and the corresponding number of gfid are same.
> If you create a file on mount point test-file, it will have a gfid
> xx-yy- and in .glusterfs the path would be xx/yy/xx-yy-
> so one test-file = one xx/yy/xx-yy-j
> In this way you have to find out if you have anything  in .glusterfs which
> should not be. I would have started from the biggest entry I see in
> .glusterfs
> like "154M/opt/lvmdir/c2/brick/.glusterfs/08"
>
>
>
> --
> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> *To: *"Ashish Pandey" <aspan...@redhat.com>
> *Cc: *"gluster-users" <gluster-users@gluster.org>, "Gluster Devel" <
> gluster-de...@gluster.org>
> *Sent: *Friday, April 7, 2017 2:28:46 PM
> *Subject: *Re: [Gluster-users] [Gluster-devel] Glusterfs meta data
> spaceconsumption issue
>
>
> Hi Ashish,
>
> I don't think so that count of files on mount point and .glusterfs/ will
> remain same. Because I have created one file on the gluster mount poing but
> on .glusterfs/ it increased by 3 in numbers. Reason behind that is it
> creates .glusterfs/xx/xx/x... which is two parent dir and one
> truefid file.
>
>
> Regards,
> Abhishek
>
> On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey <aspan...@redhat.com> wrote:
>
>>
>> Are you sure that the bricks which you used for this volume was not
>> having any previous data?
>> Find out the total number of files and directories on your mount point
>> (recursively) and then see the number of entries on .glusterfs/
>>
>>
>> --
>> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> *To: *"Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
>> gluster-users@gluster.org>
>> *Sent: *Friday, April 7, 2017 12:15:22 PM
>> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption
>> issue
>>
>>
>>
>> Is there any update ??
>>
>> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We are currently experiencing a serious issue w.r.t volume space usage
>>> by glusterfs.
>>>
>>> In the below outputs, we can see that the size of the real data in /c
>>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>>
>>> Can you tell us why the volume space is fully used by glusterfs even
>>> though the real data size is around 1GB itself ?
>>>
>>> # gluster peer status
>>> Number of Peers: 0
>>> #
>>> #
>>> # gluster volume status
>>> Status of volume: c_glusterfs
>>> Gluster process TCP Port  RDMA Port
>>> Online  Pid
>>> 
>>> --
>>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0
>>> Y   1507
>>>
>>> Task Status of Volume c_glusterfs
>>> --
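
Ashish's one-file-to-one-gfid mapping (quoted above) can be verified for any
given file by reading its trusted.gfid xattr on the brick and locating the
matching entry under .glusterfs. A sketch, using the brick path from this
thread; the file name is only an example:

BRICK=/opt/lvmdir/c2/brick
FILE="$BRICK/java/some.jar"   # example: pick any regular file on the brick

# The gfid is stored in the trusted.gfid xattr (32 hex digits).
getfattr -n trusted.gfid -e hex --absolute-names "$FILE"

# Rebuild the aa/bb/aabbccdd-... path used inside .glusterfs and compare inodes;
# both names should share one inode with a link count of at least 2.
GFID=$(getfattr -n trusted.gfid -e hex --absolute-names "$FILE" \
       | awk -F'0x' '/trusted.gfid/ {print $2}' \
       | sed 's/\(........\)\(....\)\(....\)\(....\)/\1-\2-\3-\4-/')
ls -li "$FILE" "$BRICK/.glusterfs/$(echo "$GFID" | cut -c1-2)/$(echo "$GFID" | cut -c3-4)/$GFID"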

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
Do you mean that if old data is present in the brick and the volume is not
present, then it should be visible in our brick dir /opt/lvmdir/c2/brick?

On Fri, Apr 7, 2017 at 3:04 PM, Ashish Pandey <aspan...@redhat.com> wrote:

>
> If you are creating a fresh volume, then it is your responsibility to have
> clean bricks.
> I don't think gluster will give you a guarantee that it will be cleaned up.
>
> So you have to investigate if you have any previous data on that brick.
> What I meant was that you have to find out the number of files you see on
> your mount point and the corresponding number of gfid are same.
> If you create a file on mount point test-file, it will have a gfid
> xx-yy- and in .glusterfs the path would be xx/yy/xx-yy-
> so one test-file = one xx/yy/xx-yy-j
> In this way you have to find out if you have anything  in .glusterfs which
> should not be. I would have started from the biggest entry I see in
> .glusterfs
> like "154M/opt/lvmdir/c2/brick/.glusterfs/08"
>
>
>
> --
> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> *To: *"Ashish Pandey" <aspan...@redhat.com>
> *Cc: *"gluster-users" <gluster-users@gluster.org>, "Gluster Devel" <
> gluster-de...@gluster.org>
> *Sent: *Friday, April 7, 2017 2:28:46 PM
> *Subject: *Re: [Gluster-users] [Gluster-devel] Glusterfs meta data
> spaceconsumption issue
>
>
> Hi Ashish,
>
> I don't think so that count of files on mount point and .glusterfs/ will
> remain same. Because I have created one file on the gluster mount poing but
> on .glusterfs/ it increased by 3 in numbers. Reason behind that is it
> creates .glusterfs/xx/xx/x... which is two parent dir and one
> truefid file.
>
>
> Regards,
> Abhishek
>
> On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey <aspan...@redhat.com> wrote:
>
>>
>> Are you sure that the bricks which you used for this volume was not
>> having any previous data?
>> Find out the total number of files and directories on your mount point
>> (recursively) and then see the number of entries on .glusterfs/
>>
>>
>> --
>> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> *To: *"Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
>> gluster-users@gluster.org>
>> *Sent: *Friday, April 7, 2017 12:15:22 PM
>> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption
>> issue
>>
>>
>>
>> Is there any update ??
>>
>> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We are currently experiencing a serious issue w.r.t volume space usage
>>> by glusterfs.
>>>
>>> In the below outputs, we can see that the size of the real data in /c
>>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>>
>>> Can you tell us why the volume space is fully used by glusterfs even
>>> though the real data size is around 1GB itself ?
>>>
>>> # gluster peer status
>>> Number of Peers: 0
>>> #
>>> #
>>> # gluster volume status
>>> Status of volume: c_glusterfs
>>> Gluster process TCP Port  RDMA Port
>>> Online  Pid
>>> 
>>> --
>>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0
>>> Y   1507
>>>
>>> Task Status of Volume c_glusterfs
>>> 
>>> --
>>> There are no active volume tasks
>>>
>>> # gluster volume info
>>>
>>> Volume Name: c_glusterfs
>>> Type: Distribute
>>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>>> Status: Started
>>> Number of Bricks: 1
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>>> Options Reconfigured:
>>> nfs.disable: on
>>> network.ping-timeout: 4
>>> performance.readdir-ahead: on
>>> #
>>> # ls -a /c/
>>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>>> toplog.txt  up  usr 

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
Hi Ashish,

I don't think the count of files on the mount point and the count in
.glusterfs/ will stay the same, because I created one file on the gluster
mount point but in .glusterfs/ the count increased by 3. The reason is that it
creates .glusterfs/xx/xx/x..., which is two parent directories and one
gfid file.


Regards,
Abhishek

On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey <aspan...@redhat.com> wrote:

>
> Are you sure that the bricks which you used for this volume was not having
> any previous data?
> Find out the total number of files and directories on your mount point
> (recursively) and then see the number of entries on .glusterfs/
>
>
> ------
> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> *To: *"Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
> gluster-users@gluster.org>
> *Sent: *Friday, April 7, 2017 12:15:22 PM
> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption issue
>
>
>
> Is there any update ??
>
> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi,
>>
>> We are currently experiencing a serious issue w.r.t volume space usage by
>> glusterfs.
>>
>> In the below outputs, we can see that the size of the real data in /c
>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>
>> Can you tell us why the volume space is fully used by glusterfs even
>> though the real data size is around 1GB itself ?
>>
>> # gluster peer status
>> Number of Peers: 0
>> #
>> #
>> # gluster volume status
>> Status of volume: c_glusterfs
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>> 
>> --
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y
>> 1507
>>
>> Task Status of Volume c_glusterfs
>> 
>> --
>> There are no active volume tasks
>>
>> # gluster volume info
>>
>> Volume Name: c_glusterfs
>> Type: Distribute
>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 4
>> performance.readdir-ahead: on
>> #
>> # ls -a /c/
>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>> toplog.txt  up  usr  xb
>> # du -sh /c/.trashcan/
>> 8.0K/c/.trashcan/
>> # du -sh /c/*
>> 11K /c/RNC_Exceptions
>> 5.5K/c/configuration
>> 62M /c/java
>> 138K/c/license
>> 609M/c/loadmodules
>> 90M /c/loadmodules_norepl
>> 246M/c/loadmodules_tftp
>> 4.1M/c/logfiles
>> 4.0K/c/lost+found
>> 5.0K/c/node_id
>> 8.0K/c/pm_data
>> 4.5K/c/pmd
>> 9.1M/c/public_html
>> 113K/c/rnc
>> 16K /c/security
>> 1.3M/c/systemfiles
>> 228K/c/tmp
>> 75K /c/toplog.txt
>> 1.5M/c/up
>> 4.0K/c/usr
>> 4.0K/c/xb
>> # du -sh /c/
>> 1022M   /c/
>> # df -h /c/
>> Filesystem  Size  Used Avail Use% Mounted on
>> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
>> #
>> #
>> #
>> # du -sh /opt/lvmdir/c2/brick/
>> 3.4G/opt/lvmdir/c2/brick/
>> # du -sh /opt/lvmdir/c2/brick/*
>> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
>> 36K /opt/lvmdir/c2/brick/configuration
>> 63M /opt/lvmdir/c2/brick/java
>> 176K/opt/lvmdir/c2/brick/license
>> 610M/opt/lvmdir/c2/brick/loadmodules
>> 95M /opt/lvmdir/c2/brick/loadmodules_norepl
>> 246M/opt/lvmdir/c2/brick/loadmodules_tftp
>> 4.2M/opt/lvmdir/c2/brick/logfiles
>> 8.0K/opt/lvmdir/c2/brick/lost+found
>> 24K /opt/lvmdir/c2/brick/node_id
>> 16K /opt/lvmdir/c2/brick/pm_data
>> 16K /opt/lvmdir/c2/brick/pmd
>> 9.2M/opt/lvmdir/c2/brick/public_html
>> 268K/opt/lvmdir/c2/brick/rnc
>> 80K /opt/lvmdir/c2/brick/security
>> 1.4M/opt/lvmdir/c2/brick/systemfiles
>> 252K/opt/lvmdir/c2/brick/tmp
>> 80K /opt/lvmdir/c2/brick/

Re: [Gluster-users] [Gluster-devel] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
Hi Ashish,


Even if there is old data, shouldn't it be cleared by Gluster itself? Or do
you want us to remove it manually?

Regards,
Abhishek

On Fri, Apr 7, 2017 at 1:31 PM, Ashish Pandey <aspan...@redhat.com> wrote:

>
> Are you sure that the bricks which you used for this volume was not having
> any previous data?
> Find out the total number of files and directories on your mount point
> (recursively) and then see the number of entries on .glusterfs/
>
>
> ------
> *From: *"ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> *To: *"Gluster Devel" <gluster-de...@gluster.org>, "gluster-users" <
> gluster-users@gluster.org>
> *Sent: *Friday, April 7, 2017 12:15:22 PM
> *Subject: *Re: [Gluster-devel] Glusterfs meta data space consumption issue
>
>
>
> Is there any update ??
>
> On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi,
>>
>> We are currently experiencing a serious issue w.r.t volume space usage by
>> glusterfs.
>>
>> In the below outputs, we can see that the size of the real data in /c
>> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
>> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>>
>> Can you tell us why the volume space is fully used by glusterfs even
>> though the real data size is around 1GB itself ?
>>
>> # gluster peer status
>> Number of Peers: 0
>> #
>> #
>> # gluster volume status
>> Status of volume: c_glusterfs
>> Gluster process TCP Port  RDMA Port  Online
>> Pid
>> 
>> --
>> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y
>> 1507
>>
>> Task Status of Volume c_glusterfs
>> 
>> --
>> There are no active volume tasks
>>
>> # gluster volume info
>>
>> Volume Name: c_glusterfs
>> Type: Distribute
>> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
>> Status: Started
>> Number of Bricks: 1
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
>> Options Reconfigured:
>> nfs.disable: on
>> network.ping-timeout: 4
>> performance.readdir-ahead: on
>> #
>> # ls -a /c/
>> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
>> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
>> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
>> toplog.txt  up  usr  xb
>> # du -sh /c/.trashcan/
>> 8.0K/c/.trashcan/
>> # du -sh /c/*
>> 11K /c/RNC_Exceptions
>> 5.5K/c/configuration
>> 62M /c/java
>> 138K/c/license
>> 609M/c/loadmodules
>> 90M /c/loadmodules_norepl
>> 246M/c/loadmodules_tftp
>> 4.1M/c/logfiles
>> 4.0K/c/lost+found
>> 5.0K/c/node_id
>> 8.0K/c/pm_data
>> 4.5K/c/pmd
>> 9.1M/c/public_html
>> 113K/c/rnc
>> 16K /c/security
>> 1.3M/c/systemfiles
>> 228K/c/tmp
>> 75K /c/toplog.txt
>> 1.5M/c/up
>> 4.0K/c/usr
>> 4.0K/c/xb
>> # du -sh /c/
>> 1022M   /c/
>> # df -h /c/
>> Filesystem  Size  Used Avail Use% Mounted on
>> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
>> #
>> #
>> #
>> # du -sh /opt/lvmdir/c2/brick/
>> 3.4G/opt/lvmdir/c2/brick/
>> # du -sh /opt/lvmdir/c2/brick/*
>> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
>> 36K /opt/lvmdir/c2/brick/configuration
>> 63M /opt/lvmdir/c2/brick/java
>> 176K/opt/lvmdir/c2/brick/license
>> 610M/opt/lvmdir/c2/brick/loadmodules
>> 95M /opt/lvmdir/c2/brick/loadmodules_norepl
>> 246M/opt/lvmdir/c2/brick/loadmodules_tftp
>> 4.2M/opt/lvmdir/c2/brick/logfiles
>> 8.0K/opt/lvmdir/c2/brick/lost+found
>> 24K /opt/lvmdir/c2/brick/node_id
>> 16K /opt/lvmdir/c2/brick/pm_data
>> 16K /opt/lvmdir/c2/brick/pmd
>> 9.2M/opt/lvmdir/c2/brick/public_html
>> 268K/opt/lvmdir/c2/brick/rnc
>> 80K /opt/lvmdir/c2/brick/security
>> 1.4M/opt/lvmdir/c2/brick/systemfiles
>> 252K/opt/lvmdir/c2/brick/tmp
>> 80K /opt/lvmdir/c2/brick/toplog.txt
>> 1.5M/opt/lvmdir/c2/brick/up
>> 8.0K/opt/lvmdir/c2/brick/usr
>> 8.0K/opt/lvmdir/c2/brick/xb
>> # du -sh /opt/lvmdir/c2/brick/.glusterfs/
>>

Re: [Gluster-users] Glusterfs meta data space consumption issue

2017-04-07 Thread ABHISHEK PALIWAL
Is there any update ??

On Thu, Apr 6, 2017 at 12:45 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi,
>
> We are currently experiencing a serious issue w.r.t volume space usage by
> glusterfs.
>
> In the below outputs, we can see that the size of the real data in /c
> (glusterfs volume) is nearly 1GB but the “.glusterfs” directory inside the
> brick (i.e., “/opt/lvmdir/c2/brick”) is consuming around 3.4 GB
>
> Can you tell us why the volume space is fully used by glusterfs even
> though the real data size is around 1GB itself ?
>
> # gluster peer status
> Number of Peers: 0
> #
> #
> # gluster volume status
> Status of volume: c_glusterfs
> Gluster process TCP Port  RDMA Port  Online
> Pid
> 
> --
> Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y
> 1507
>
> Task Status of Volume c_glusterfs
> 
> --
> There are no active volume tasks
>
> # gluster volume info
>
> Volume Name: c_glusterfs
> Type: Distribute
> Volume ID: d83b1b8c-bc37-4615-bf4b-529f56968ecc
> Status: Started
> Number of Bricks: 1
> Transport-type: tcp
> Bricks:
> Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
> Options Reconfigured:
> nfs.disable: on
> network.ping-timeout: 4
> performance.readdir-ahead: on
> #
> # ls -a /c/
> .  ..  .trashcan  RNC_Exceptions  configuration  java  license
> loadmodules  loadmodules_norepl  loadmodules_tftp  logfiles  lost+found
> node_id  pm_data  pmd  public_html  rnc  security  systemfiles  tmp
> toplog.txt  up  usr  xb
> # du -sh /c/.trashcan/
> 8.0K/c/.trashcan/
> # du -sh /c/*
> 11K /c/RNC_Exceptions
> 5.5K/c/configuration
> 62M /c/java
> 138K/c/license
> 609M/c/loadmodules
> 90M /c/loadmodules_norepl
> 246M/c/loadmodules_tftp
> 4.1M/c/logfiles
> 4.0K/c/lost+found
> 5.0K/c/node_id
> 8.0K/c/pm_data
> 4.5K/c/pmd
> 9.1M/c/public_html
> 113K/c/rnc
> 16K /c/security
> 1.3M/c/systemfiles
> 228K/c/tmp
> 75K /c/toplog.txt
> 1.5M/c/up
> 4.0K/c/usr
> 4.0K/c/xb
> # du -sh /c/
> 1022M   /c/
> # df -h /c/
> Filesystem  Size  Used Avail Use% Mounted on
> 10.32.0.48:c_glusterfs  3.6G  3.4G 0 100% /mnt/c
> #
> #
> #
> # du -sh /opt/lvmdir/c2/brick/
> 3.4G/opt/lvmdir/c2/brick/
> # du -sh /opt/lvmdir/c2/brick/*
> 112K/opt/lvmdir/c2/brick/RNC_Exceptions
> 36K /opt/lvmdir/c2/brick/configuration
> 63M /opt/lvmdir/c2/brick/java
> 176K/opt/lvmdir/c2/brick/license
> 610M/opt/lvmdir/c2/brick/loadmodules
> 95M /opt/lvmdir/c2/brick/loadmodules_norepl
> 246M/opt/lvmdir/c2/brick/loadmodules_tftp
> 4.2M/opt/lvmdir/c2/brick/logfiles
> 8.0K/opt/lvmdir/c2/brick/lost+found
> 24K /opt/lvmdir/c2/brick/node_id
> 16K /opt/lvmdir/c2/brick/pm_data
> 16K /opt/lvmdir/c2/brick/pmd
> 9.2M/opt/lvmdir/c2/brick/public_html
> 268K/opt/lvmdir/c2/brick/rnc
> 80K /opt/lvmdir/c2/brick/security
> 1.4M/opt/lvmdir/c2/brick/systemfiles
> 252K/opt/lvmdir/c2/brick/tmp
> 80K /opt/lvmdir/c2/brick/toplog.txt
> 1.5M/opt/lvmdir/c2/brick/up
> 8.0K/opt/lvmdir/c2/brick/usr
> 8.0K/opt/lvmdir/c2/brick/xb
> # du -sh /opt/lvmdir/c2/brick/.glusterfs/
> 3.4G/opt/lvmdir/c2/brick/.glusterfs/
>
> Below are the statics of the below command "du -sh /opt/lvmdir/c2/brick/.
> glusterfs/*"
>
> # du -sh /opt/lvmdir/c2/brick/.glusterfs/*
> 14M /opt/lvmdir/c2/brick/.glusterfs/00
> 8.3M/opt/lvmdir/c2/brick/.glusterfs/01
> 23M /opt/lvmdir/c2/brick/.glusterfs/02
> 17M /opt/lvmdir/c2/brick/.glusterfs/03
> 7.1M/opt/lvmdir/c2/brick/.glusterfs/04
> 336K/opt/lvmdir/c2/brick/.glusterfs/05
> 3.5M/opt/lvmdir/c2/brick/.glusterfs/06
> 1.7M/opt/lvmdir/c2/brick/.glusterfs/07
> 154M/opt/lvmdir/c2/brick/.glusterfs/08
> 14M /opt/lvmdir/c2/brick/.glusterfs/09
> 9.5M/opt/lvmdir/c2/brick/.glusterfs/0a
> 5.5M/opt/lvmdir/c2/brick/.glusterfs/0b
> 11M /opt/lvmdir/c2/brick/.glusterfs/0c
> 764K/opt/lvmdir/c2/brick/.glusterfs/0d
> 69M /opt/lvmdir/c2/brick/.glusterfs/0e
> 3.7M/opt/lvmdir/c2/brick/.glusterfs/0f
> 14M /opt/lvmdir/c2/brick/.glusterfs/10
> 1.8M/opt/lvmdir/c2/brick/.glusterfs/11
> 5.0M/opt/lvmdir/c2/brick/.glusterfs/12
> 18M /opt/lvmdir/c2/brick/.glusterfs/13
> 7.8M/opt/lvmdir/c2/brick/.glusterfs/14
> 151M/opt/lvmdir/c2/brick/.glusterfs/15
> 15M /opt/lvmdir/c2/brick/.glusterfs/16
> 9.0M/opt/lvmd

[Gluster-users] Glusterfs meta data space consumption issue

2017-04-06 Thread ABHISHEK PALIWAL
M /opt/lvmdir/c2/brick/.glusterfs/9e
13M /opt/lvmdir/c2/brick/.glusterfs/9f
11M /opt/lvmdir/c2/brick/.glusterfs/a0
5.6M/opt/lvmdir/c2/brick/.glusterfs/a1
17M /opt/lvmdir/c2/brick/.glusterfs/a2
3.3M/opt/lvmdir/c2/brick/.glusterfs/a3
18M /opt/lvmdir/c2/brick/.glusterfs/a4
21M /opt/lvmdir/c2/brick/.glusterfs/a5
13M /opt/lvmdir/c2/brick/.glusterfs/a6
18M /opt/lvmdir/c2/brick/.glusterfs/a7
5.5M/opt/lvmdir/c2/brick/.glusterfs/a8
4.5M/opt/lvmdir/c2/brick/.glusterfs/a9
4.4M/opt/lvmdir/c2/brick/.glusterfs/aa
21M /opt/lvmdir/c2/brick/.glusterfs/ab
3.7M/opt/lvmdir/c2/brick/.glusterfs/ac
13M /opt/lvmdir/c2/brick/.glusterfs/ad
1.7M/opt/lvmdir/c2/brick/.glusterfs/ae
9.1M/opt/lvmdir/c2/brick/.glusterfs/af
50M /opt/lvmdir/c2/brick/.glusterfs/b0
3.9M/opt/lvmdir/c2/brick/.glusterfs/b1
8.8M/opt/lvmdir/c2/brick/.glusterfs/b2
64M /opt/lvmdir/c2/brick/.glusterfs/b3
13M /opt/lvmdir/c2/brick/.glusterfs/b4
7.7M/opt/lvmdir/c2/brick/.glusterfs/b5
3.0M/opt/lvmdir/c2/brick/.glusterfs/b6
7.8M/opt/lvmdir/c2/brick/.glusterfs/b7
2.8M/opt/lvmdir/c2/brick/.glusterfs/b8
11M /opt/lvmdir/c2/brick/.glusterfs/b9
4.6M/opt/lvmdir/c2/brick/.glusterfs/ba
40M /opt/lvmdir/c2/brick/.glusterfs/bb
17M /opt/lvmdir/c2/brick/.glusterfs/bc
716K/opt/lvmdir/c2/brick/.glusterfs/bd
2.2M/opt/lvmdir/c2/brick/.glusterfs/be
14M /opt/lvmdir/c2/brick/.glusterfs/bf
3.2M/opt/lvmdir/c2/brick/.glusterfs/c0
11M /opt/lvmdir/c2/brick/.glusterfs/c1
18M /opt/lvmdir/c2/brick/.glusterfs/c2
8.7M/opt/lvmdir/c2/brick/.glusterfs/c3
1.1M/opt/lvmdir/c2/brick/.glusterfs/c4
4.3M/opt/lvmdir/c2/brick/.glusterfs/c5
5.0M/opt/lvmdir/c2/brick/.glusterfs/c6
44M /opt/lvmdir/c2/brick/.glusterfs/c7
1.0M/opt/lvmdir/c2/brick/.glusterfs/c8
4.5M/opt/lvmdir/c2/brick/.glusterfs/c9
2.9M/opt/lvmdir/c2/brick/.glusterfs/ca
8.4M/opt/lvmdir/c2/brick/.glusterfs/cb
3.1M/opt/lvmdir/c2/brick/.glusterfs/cc
14M /opt/lvmdir/c2/brick/.glusterfs/cd
15M /opt/lvmdir/c2/brick/.glusterfs/ce
4.7M/opt/lvmdir/c2/brick/.glusterfs/cf
12K /opt/lvmdir/c2/brick/.glusterfs/changelogs
36M /opt/lvmdir/c2/brick/.glusterfs/d0
8.8M/opt/lvmdir/c2/brick/.glusterfs/d1
6.8M/opt/lvmdir/c2/brick/.glusterfs/d2
4.4M/opt/lvmdir/c2/brick/.glusterfs/d3
2.3M/opt/lvmdir/c2/brick/.glusterfs/d4
1.1M/opt/lvmdir/c2/brick/.glusterfs/d5
16M /opt/lvmdir/c2/brick/.glusterfs/d6
48M /opt/lvmdir/c2/brick/.glusterfs/d7
6.8M/opt/lvmdir/c2/brick/.glusterfs/d8
20M /opt/lvmdir/c2/brick/.glusterfs/d9
5.7M/opt/lvmdir/c2/brick/.glusterfs/da
3.9M/opt/lvmdir/c2/brick/.glusterfs/db
788K/opt/lvmdir/c2/brick/.glusterfs/dc
3.1M/opt/lvmdir/c2/brick/.glusterfs/dd
3.5M/opt/lvmdir/c2/brick/.glusterfs/de
45M /opt/lvmdir/c2/brick/.glusterfs/df
5.2M/opt/lvmdir/c2/brick/.glusterfs/e0
4.2M/opt/lvmdir/c2/brick/.glusterfs/e1
9.2M/opt/lvmdir/c2/brick/.glusterfs/e2
2.4M/opt/lvmdir/c2/brick/.glusterfs/e3
11M /opt/lvmdir/c2/brick/.glusterfs/e4
61M /opt/lvmdir/c2/brick/.glusterfs/e5
12M /opt/lvmdir/c2/brick/.glusterfs/e6
1.1M/opt/lvmdir/c2/brick/.glusterfs/e7
5.9M/opt/lvmdir/c2/brick/.glusterfs/e8
5.3M/opt/lvmdir/c2/brick/.glusterfs/e9
1.6M/opt/lvmdir/c2/brick/.glusterfs/ea
968K/opt/lvmdir/c2/brick/.glusterfs/eb
9.5M/opt/lvmdir/c2/brick/.glusterfs/ec
13M /opt/lvmdir/c2/brick/.glusterfs/ed
15M /opt/lvmdir/c2/brick/.glusterfs/ee
13M /opt/lvmdir/c2/brick/.glusterfs/ef
4.1M/opt/lvmdir/c2/brick/.glusterfs/f0
13M /opt/lvmdir/c2/brick/.glusterfs/f1
32M /opt/lvmdir/c2/brick/.glusterfs/f2
5.8M/opt/lvmdir/c2/brick/.glusterfs/f3
5.2M/opt/lvmdir/c2/brick/.glusterfs/f4
5.9M/opt/lvmdir/c2/brick/.glusterfs/f5
14M /opt/lvmdir/c2/brick/.glusterfs/f6
2.3M/opt/lvmdir/c2/brick/.glusterfs/f7
2.5M/opt/lvmdir/c2/brick/.glusterfs/f8
12M /opt/lvmdir/c2/brick/.glusterfs/f9
432K/opt/lvmdir/c2/brick/.glusterfs/fa
15M /opt/lvmdir/c2/brick/.glusterfs/fb
37M /opt/lvmdir/c2/brick/.glusterfs/fc
3.1M/opt/lvmdir/c2/brick/.glusterfs/fd
1.9M/opt/lvmdir/c2/brick/.glusterfs/fe
4.1M/opt/lvmdir/c2/brick/.glusterfs/ff
4.0K/opt/lvmdir/c2/brick/.glusterfs/health_check
8.0K/opt/lvmdir/c2/brick/.glusterfs/indices
4.0K/opt/lvmdir/c2/brick/.glusterfs/landfill
#

From this it is clear that the backup/metadata entries Gluster creates under
.glusterfs for the original files on the mount point are taking up too much
space.

Please let us know the reason for this problem.
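
For reference, a rough sketch of the check suggested in the replies above,
using the mount point and brick path from these outputs; the exact find
options may need adjusting for the find build on the target:

# files and directories visible through the volume (run against the mount)
find /mnt/c -not -path '/mnt/c/.trashcan*' | wc -l

# gfid entries tracked on the brick under .glusterfs
find /opt/lvmdir/c2/brick/.glusterfs -type f -o -type l | wc -l

# regular files under .glusterfs with a link count of 1 are gfid entries
# whose user-visible file is gone; summing them shows how much of the brick
# they account for
find /opt/lvmdir/c2/brick/.glusterfs -type f -links 1 -exec du -ch {} + | tail -n 1

If that last total is close to the roughly 2.4 GB gap between the 3.4 GB brick
usage and the ~1 GB of visible data, the space is being held by stale gfid
entries on the brick rather than by live data.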
-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Gluster Limitation with ACL on Kernel NFS

2017-03-24 Thread ABHISHEK PALIWAL
Hi Team,

I am using Gluster with kernel NFS and found one limitation with the Gluster
volume; I don't know whether it is a bug or expected behaviour.

Below is the scenario:

I am mounting the Gluster volume as well as the NFS volume with the '-o acl'
option.

I have tested the Gluster volume with ACLs and found that ACLs set on the
Gluster mount point, whether before or after the export, are reflected on the
exported NFS volume only if the NFS volume is mounted after the ACLs are
applied on the Gluster volume.

Also, if the NFS volume is already mounted, then only the first rule gets
reflected on the exported NFS volume.


Could anyone tell me what the problem might be here?
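
To make the scenario concrete, a minimal sketch of the sequence described
above; the file path, uid and export name are only placeholders, and the
mount options mirror the ones already used in this setup:

# on the server: apply an ACL through the gluster mount
setfacl -m u:1003:rw /mnt/c/java/test.jar
getfacl /mnt/c/java/test.jar

# on the NFS client: with the export already mounted, the new ACL (or any
# rule beyond the first one) is not visible until the export is remounted
getfacl /nfs_mount/java/test.jar
umount /nfs_mount
mount -t nfs -o acl server:/mnt/c /nfs_mount
getfacl /nfs_mount/java/test.jar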
-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfsd/glusterfs process taking CPU load higher than usual

2016-12-04 Thread ABHISHEK PALIWAL
It would be highly appreciated if someone could respond to this.

On Thu, Dec 1, 2016 at 6:31 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> is there anyone who can reply on this query?
>
> On Thu, Dec 1, 2016 at 7:58 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
> wrote:
>
>> Please reply I am waiting for your response.
>> On Nov 30, 2016 2:21 PM, "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
>> wrote:
>>
>>> could you please respond on this issue?
>>>
>>> On Tue, Nov 29, 2016 at 6:56 PM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi Team,
>>>>
>>>> I have two board setup and on which we have one volume with two brick
>>>> on each board.
>>>>
>>>> When I was checking the cpu load I found glusterfsd/glusterfs process
>>>> taking higher CPU load then usual like below:
>>>>
>>>> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
>>>> COMMAND
>>>>  3117 root  20   0 1429m  48m 3636 R   98  0.2 212:12.35
>>>> fpipt_main_thre
>>>> 12299 root  20   0 1139m  52m 4192 R   73  0.2 120:41.69
>>>> glusterfsd
>>>>  4517 root  20   0 1139m  52m 4192 S   72  0.2 121:01.54
>>>> glusterfsd
>>>>  1915 root  20   0 1139m  52m 4192 R   62  0.2 121:16.22
>>>> glusterfsd
>>>> 14633 root  20   0 1139m  52m 4192 S   62  0.2 120:37.13
>>>> glusterfsd
>>>>  1992 root  20   0  634m 154m 4340 S   57  0.7  68:11.18
>>>> glusterfs
>>>> 17886 root  20   0 1139m  52m 4192 R   55  0.2 120:28.57
>>>> glusterfsd
>>>>  2664 root  20   0  783m  31m 4708 S   52  0.1 100:13.12
>>>> Scc_SctpHost_pr
>>>>  1914 root  20   0 1139m  52m 4192 S   50  0.2 121:20.19
>>>> glusterfsd
>>>> 12556 root  20   0 1139m  52m 4192 S   50  0.2 120:31.38
>>>> glusterfsd
>>>>  1583 root  20   0 1139m  52m 4192 R   48  0.2 121:16.83
>>>> glusterfsd
>>>> 12112 root  20   0 1139m  52m 4192 R   43  0.2 120:58.73
>>>> glusterfsd
>>>>
>>>> Is there any way to identify the way or to reduce this high load.
>>>>
>>>> I have also collected the volume profile logs but don't know how to
>>>> understand or analyze those logs.
>>>>
>>>> I am attaching those logs here.
>>>> --
>>>> Regards
>>>> Abhishek Paliwal
>>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfsd/glusterfs process taking CPU load higher than usual

2016-12-01 Thread ABHISHEK PALIWAL
Is there anyone who can reply to this query?

On Thu, Dec 1, 2016 at 7:58 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Please reply I am waiting for your response.
> On Nov 30, 2016 2:21 PM, "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> wrote:
>
>> could you please respond on this issue?
>>
>> On Tue, Nov 29, 2016 at 6:56 PM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I have two board setup and on which we have one volume with two brick on
>>> each board.
>>>
>>> When I was checking the cpu load I found glusterfsd/glusterfs process
>>> taking higher CPU load then usual like below:
>>>
>>> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
>>> COMMAND
>>>  3117 root  20   0 1429m  48m 3636 R   98  0.2 212:12.35
>>> fpipt_main_thre
>>> 12299 root  20   0 1139m  52m 4192 R   73  0.2 120:41.69
>>> glusterfsd
>>>  4517 root  20   0 1139m  52m 4192 S   72  0.2 121:01.54
>>> glusterfsd
>>>  1915 root  20   0 1139m  52m 4192 R   62  0.2 121:16.22
>>> glusterfsd
>>> 14633 root  20   0 1139m  52m 4192 S   62  0.2 120:37.13
>>> glusterfsd
>>>  1992 root  20   0  634m 154m 4340 S   57  0.7  68:11.18
>>> glusterfs
>>> 17886 root  20   0 1139m  52m 4192 R   55  0.2 120:28.57
>>> glusterfsd
>>>  2664 root  20   0  783m  31m 4708 S   52  0.1 100:13.12
>>> Scc_SctpHost_pr
>>>  1914 root  20   0 1139m  52m 4192 S   50  0.2 121:20.19
>>> glusterfsd
>>> 12556 root  20   0 1139m  52m 4192 S   50  0.2 120:31.38
>>> glusterfsd
>>>  1583 root  20   0 1139m  52m 4192 R   48  0.2 121:16.83
>>> glusterfsd
>>> 12112 root  20   0 1139m  52m 4192 R   43  0.2 120:58.73
>>> glusterfsd
>>>
>>> Is there any way to identify the way or to reduce this high load.
>>>
>>> I have also collected the volume profile logs but don't know how to
>>> understand or analyze those logs.
>>>
>>> I am attaching those logs here.
>>> --
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfsd/glusterfs process taking CPU load higher than usual

2016-11-30 Thread ABHISHEK PALIWAL
Please reply; I am waiting for your response.
On Nov 30, 2016 2:21 PM, "ABHISHEK PALIWAL" <abhishpali...@gmail.com> wrote:

> could you please respond on this issue?
>
> On Tue, Nov 29, 2016 at 6:56 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Team,
>>
>> I have two board setup and on which we have one volume with two brick on
>> each board.
>>
>> When I was checking the cpu load I found glusterfsd/glusterfs process
>> taking higher CPU load then usual like below:
>>
>> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
>> COMMAND
>>  3117 root  20   0 1429m  48m 3636 R   98  0.2 212:12.35
>> fpipt_main_thre
>> 12299 root  20   0 1139m  52m 4192 R   73  0.2 120:41.69
>> glusterfsd
>>  4517 root  20   0 1139m  52m 4192 S   72  0.2 121:01.54
>> glusterfsd
>>  1915 root  20   0 1139m  52m 4192 R   62  0.2 121:16.22
>> glusterfsd
>> 14633 root  20   0 1139m  52m 4192 S   62  0.2 120:37.13
>> glusterfsd
>>  1992 root  20   0  634m 154m 4340 S   57  0.7  68:11.18
>> glusterfs
>> 17886 root  20   0 1139m  52m 4192 R   55  0.2 120:28.57
>> glusterfsd
>>  2664 root  20   0  783m  31m 4708 S   52  0.1 100:13.12
>> Scc_SctpHost_pr
>>  1914 root  20   0 1139m  52m 4192 S   50  0.2 121:20.19
>> glusterfsd
>> 12556 root  20   0 1139m  52m 4192 S   50  0.2 120:31.38
>> glusterfsd
>>  1583 root  20   0 1139m  52m 4192 R   48  0.2 121:16.83
>> glusterfsd
>> 12112 root  20   0 1139m  52m 4192 R   43  0.2 120:58.73
>> glusterfsd
>>
>> Is there any way to identify the way or to reduce this high load.
>>
>> I have also collected the volume profile logs but don't know how to
>> understand or analyze those logs.
>>
>> I am attaching those logs here.
>> --
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] glusterfsd/glusterfs process taking CPU load higher than usual

2016-11-30 Thread ABHISHEK PALIWAL
Could you please respond to this issue?

On Tue, Nov 29, 2016 at 6:56 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Team,
>
> I have two board setup and on which we have one volume with two brick on
> each board.
>
> When I was checking the cpu load I found glusterfsd/glusterfs process
> taking higher CPU load then usual like below:
>
> PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
> COMMAND
>  3117 root  20   0 1429m  48m 3636 R   98  0.2 212:12.35
> fpipt_main_thre
> 12299 root  20   0 1139m  52m 4192 R   73  0.2 120:41.69
> glusterfsd
>  4517 root  20   0 1139m  52m 4192 S   72  0.2 121:01.54
> glusterfsd
>  1915 root  20   0 1139m  52m 4192 R   62  0.2 121:16.22
> glusterfsd
> 14633 root  20   0 1139m  52m 4192 S   62  0.2 120:37.13
> glusterfsd
>  1992 root  20   0  634m 154m 4340 S   57  0.7  68:11.18
> glusterfs
> 17886 root  20   0 1139m  52m 4192 R   55  0.2 120:28.57
> glusterfsd
>  2664 root  20   0  783m  31m 4708 S   52  0.1 100:13.12
> Scc_SctpHost_pr
>  1914 root  20   0 1139m  52m 4192 S   50  0.2 121:20.19
> glusterfsd
> 12556 root  20   0 1139m  52m 4192 S   50  0.2 120:31.38
> glusterfsd
>  1583 root  20   0 1139m  52m 4192 R   48  0.2 121:16.83
> glusterfsd
> 12112 root  20   0 1139m  52m 4192 R   43  0.2 120:58.73
> glusterfsd
>
> Is there any way to identify the way or to reduce this high load.
>
> I have also collected the volume profile logs but don't know how to
> understand or analyze those logs.
>
> I am attaching those logs here.
> --
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] glusterfsd/glusterfs process taking CPU load higher than usual

2016-11-29 Thread ABHISHEK PALIWAL
Hi Team,

I have a two-board setup on which we have one volume with two bricks, one on
each board.

When I was checking the CPU load I found the glusterfsd/glusterfs processes
taking a higher CPU load than usual, as shown below:

PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+
COMMAND
 3117 root  20   0 1429m  48m 3636 R   98  0.2 212:12.35
fpipt_main_thre
12299 root  20   0 1139m  52m 4192 R   73  0.2 120:41.69
glusterfsd
 4517 root  20   0 1139m  52m 4192 S   72  0.2 121:01.54
glusterfsd
 1915 root  20   0 1139m  52m 4192 R   62  0.2 121:16.22
glusterfsd
14633 root  20   0 1139m  52m 4192 S   62  0.2 120:37.13
glusterfsd
 1992 root  20   0  634m 154m 4340 S   57  0.7  68:11.18
glusterfs
17886 root  20   0 1139m  52m 4192 R   55  0.2 120:28.57
glusterfsd
 2664 root  20   0  783m  31m 4708 S   52  0.1 100:13.12
Scc_SctpHost_pr
 1914 root  20   0 1139m  52m 4192 S   50  0.2 121:20.19
glusterfsd
12556 root  20   0 1139m  52m 4192 S   50  0.2 120:31.38
glusterfsd
 1583 root  20   0 1139m  52m 4192 R   48  0.2 121:16.83
glusterfsd
12112 root  20   0 1139m  52m 4192 R   43  0.2 120:58.73
glusterfsd

Is there any way to identify the cause of this high load, or to reduce it?

I have also collected the volume profile logs but don't know how to
understand or analyze those logs.
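
In case it helps, here is a minimal sketch of how the hot threads and the
dominant file operations can be narrowed down; the PID is taken from the top
output above and the profile commands match the ones used in the attached log:

# per-thread view of one glusterfsd process, to see which internal thread is busy
top -H -p 12299

# capture a profile window for the volume; the FOP table in the output shows
# which file operations (LOOKUP, INODELK, etc.) dominate during that window
gluster volume profile c_glusterfs start
sleep 120
gluster volume profile c_glusterfs info
gluster volume profile c_glusterfs stop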

I am attaching those logs here.
-- 
Regards
Abhishek Paliwal
Log start: 161129-130518 - 10.67.29.150 - moshell 16.0y - 
/home/emamiko/EVO8300/Issues/CPU_highLoad_C1MP/Gluster_vol_profile.txt

EVOA_8300-1> 

EVOA_8300-1> gluster volume profile c_glusterfs start

161129-13:05:26 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE 
stopfile=/tmp/15640
$ gluster volume profile c_glusterfs start
Starting volume profile on c_glusterfs has been successful 
$ 

EVOA_8300-1> 

EVOA_8300-1> # wait for a minute or two

EVOA_8300-1> wait 120 

Waiting from [2016-11-29 13:05:31] to [2016-11-29 13:07:31]...Done.

EVOA_8300-1> gluster volume profile c_glusterfs info 

161129-13:07:32 10.67.29.150 16.0y CPP_MOM-CPP-LSV203-gen2_gen2_COMPLETE 
stopfile=/tmp/15640
$ gluster volume profile c_glusterfs info
Brick: 10.32.0.48:/opt/lvmdir/c2/brick
--
Cumulative Stats:
   Block Size:  1b+   4b+   8b+ 
 No. of Reads:0 1 0 
No. of Writes:6 2 9 
 
   Block Size: 16b+  32b+  64b+ 
 No. of Reads:0 6 6 
No. of Writes:2 9 6 
 
   Block Size:128b+ 256b+ 512b+ 
 No. of Reads:1 2 7 
No. of Writes:   222667 
 
   Block Size:   1024b+2048b+4096b+ 
 No. of Reads:   14 3 5 
No. of Writes:   79   12935 
 
   Block Size:   8192b+   16384b+   32768b+ 
 No. of Reads:31412 
No. of Writes:   13 0 1 
 
   Block Size:  65536b+  131072b+ 
 No. of Reads:   16   224 
No. of Writes:   2016 
 %-latency   Avg-latency   Min-Latency   Max-Latency   No. of calls Fop
 -   ---   ---   ---   
  0.00   0.00 us   0.00 us   0.00 us  2  FORGET
  0.00   0.00 us   0.00 us   0.00 us248 RELEASE
  0.00   0.00 us   0.00 us   0.00 us 189505  RELEASEDIR
  0.00  70.33 us  63.00 us  74.00 us  3  STATFS
  0.00 172.50 us  90.00 us 255.00 us  2 READDIR
  0.00  92.67 us  46.00 us 128.00 us  6GETXATTR
  6.29 192.50 us  83.00 us4065.00 us   4443SETXATTR
  7.19  88.90 us   3.00 us1724.00 us  10996 OPENDIR
 14.39 210.96 us  33.00 us   45438.00 us   9282 INODELK
 72.13 278.64 us  44.00 us1452.00 us  35214  LOOKUP
 
Duration: 7146 seconds
   Data Read: 32013955 bytes
Data Written: 4931237 bytes
 
Interval 2 Stats:
   Block Size:  8b+ 512b+1024b+ 
 No. of Reads:0 0 0 
No. of Writes:1 5 5 
 
   Block Size:   2048b+   

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-21 Thread ABHISHEK PALIWAL
On Mon, Nov 21, 2016 at 2:28 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

>
>
> On Mon, Nov 21, 2016 at 10:00 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Hi Atin,
>>
>> System is the embedded system and these dates are before the system get
>> in timer sync.
>>
>> Yes, I have also seen these two files in peers directory on 002500 board
>> and I want to know the reason why gluster creates the second file when
>> there is old file is exist. Even when you see the content of the these file
>> are same.
>>
>> Is it possible for gluster if we fall in this situation then instead of
>> manually doing the steps which you mentioned above gluster will take care
>> of this?
>>
>
> We shouldn't have any unwanted data in /var/lib/glusterd at first place
> and that's a prerequisite of gluster installation failing which
> inconsistencies of configuration data can't be handled automatically until
> manual intervention.
>
>
Does that mean /var/lib/glusterd should always be empty before the Gluster
installation starts? Because in this case there was nothing unwanted present
before installing glusterd.

>
>> I have some questions:
>>
>> 1. based on the logs can we find out the reason for having two peers
>> files with same contents.
>>
>
> No we can't as the log file doesn't have any entry of
> 26ae19a6-b58f-446a-b079-411d4ee57450 which indicates that this entry is a
> stale one and was (is) there since long time and the log files are the
> latest.
>

I agree that this 26ae19a6-b58f-446a-b079-411d4ee57450 entry is not there,
but as we checked, that file is the newer one in the peers directory and
5be8603b-18d0-4333-8590-38f918a22857 is the older file.
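
For reference, a minimal sketch of the kind of manual cleanup recommended in
this thread; the paths assume the relocated configuration directory
/system/glusterd used on these boards, and the restart commands depend on the
init system in use:

# inspect the peer files on the 002500 board
ls -lrt /system/glusterd/peers/
cat /system/glusterd/peers/*

# keep a single entry: rename the stale file onto the UUID the other board
# actually reports, so only one peer file remains
mv /system/glusterd/peers/26ae19a6-b58f-446a-b079-411d4ee57450 \
   /system/glusterd/peers/5be8603b-18d0-4333-8590-38f918a22857

# then stop glusterd on both boards and bring them back one at a time
pkill glusterd
glusterd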
Also, below are some more logs from the etc-glusterfs-glusterd.log file on
the 002500 board:

The message "I [MSGID: 106004]
[glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer
<10.32.0.48> (<5be8603b-18d0-4333-8590-38f918a22857>), in state , has disconnected from glusterd." repeated 3 times between
[2016-11-17 22:01:23.542556] and [2016-11-17 22:01:36.993584]
The message "W [MSGID: 106118]
[glusterd-handler.c:5087:__glusterd_peer_rpc_notify] 0-management: Lock not
released for c_glusterfs" repeated 3 times between [2016-11-17
22:01:23.542973] and [2016-11-17 22:01:36.993855]
[2016-11-17 22:01:48.860555] I [MSGID: 106487]
[glusterd-handler.c:1411:__glusterd_handle_cli_list_friends] 0-glusterd:
Received cli list req
[2016-11-17 22:01:49.137733] I [MSGID: 106163]
[glusterd-handshake.c:1193:__glusterd_mgmt_hndsk_versions_ack]
0-management: using the op-version 30706
[2016-11-17 22:01:49.240986] I [MSGID: 106493]
[glusterd-rpc-ops.c:694:__glusterd_friend_update_cbk] 0-management:
Received ACC from uuid: 5be8603b-18d0-4333-8590-38f918a22857
[2016-11-17 22:11:58.658884] E [rpc-clnt.c:201:call_bail] 0-management:
bailing out frame type(glusterd mgmt) op(--(3)) xid = 0x15 sent =
2016-11-17 22:01:48.945424. timeout = 600 for 10.32.0.48:24007
[2016-11-17 22:11:58.658987] E [MSGID: 106153]
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on
10.32.0.48. Please check log file for details.
[2016-11-17 22:11:58.659243] I [socket.c:3382:socket_submit_reply]
0-socket.management: not connected (priv->connected = 255)
[2016-11-17 22:11:58.659265] E [rpcsvc.c:1314:rpcsvc_submit_generic]
0-rpc-service: failed to submit message (XID: 0x1, Program: GlusterD svc
cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management)
[2016-11-17 22:11:58.659305] E [MSGID: 106430]
[glusterd-utils.c:400:glusterd_submit_reply] 0-glusterd: Reply submission
failed
[2016-11-17 22:13:58.674343] E [rpc-clnt.c:201:call_bail] 0-management:
bailing out frame type(glusterd mgmt) op(--(3)) xid = 0x11 sent =
2016-11-17 22:03:50.268751. timeout = 600 for 10.32.0.48:24007
[2016-11-17 22:13:58.674414] E [MSGID: 106153]
[glusterd-syncop.c:113:gd_collate_errors] 0-glusterd: Staging failed on
10.32.0.48. Please check log file for details.
[2016-11-17 22:13:58.674604] I [socket.c:3382:socket_submit_reply]
0-socket.management: not connected (priv->connected = 255)
[2016-11-17 22:13:58.674627] E [rpcsvc.c:1314:rpcsvc_submit_generic]
0-rpc-service: failed to submit message (XID: 0x1, Program: GlusterD svc
cli, ProgVers: 2, Proc: 27) to rpc-transport (socket.management)
[2016-11-17 22:13:58.674667] E [MSGID: 106430]
[glusterd-utils.c:400:glusterd_submit_reply] 0-glusterd: Reply submission
failed
[2016-11-17 22:15:58.687737] E [rpc-clnt.c:201:call_bail] 0-management:
bailing out frame type(glusterd mgmt) op(--(3)) xid = 0x17 sent =
2016-11-17 22:05:51.341614. timeout = 600 for 10.32.0.48:24007

Are these errors causing the duplicate UUID, or is the duplicate UUID causing
these errors?

>
> 2. is there any way to do it from gluster code.
>>
>
> Ditto as above.
>
>
>>
>> Regards,
>> Abhishek

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

I will be waiting for your response.

On Mon, Nov 21, 2016 at 10:00 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Atin,
>
> System is the embedded system and these dates are before the system get in
> timer sync.
>
> Yes, I have also seen these two files in peers directory on 002500 board
> and I want to know the reason why gluster creates the second file when
> there is old file is exist. Even when you see the content of the these file
> are same.
>
> Is it possible for gluster if we fall in this situation then instead of
> manually doing the steps which you mentioned above gluster will take care
> of this?
>
> I have some questions:
>
> 1. based on the logs can we find out the reason for having two peers files
> with same contents.
> 2. is there any way to do it from gluster code.
>
> Regards,
> Abhishek
>
> Regards,
> Abhishek
>
> On Mon, Nov 21, 2016 at 9:52 AM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
>
>> atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_uuid/
>> duplicate_uuid/glusterd_2500/peers$ ls -lrt
>> total 8
>> -rw---. 1 atin wheel 71 *Jan  1  1970* 5be8603b-18d0-4333-8590-38f918
>> a22857
>> -rw---. 1 atin wheel 71 Nov 18 03:31 26ae19a6-b58f-446a-b079-411d4e
>> e57450
>>
>> In board 2500 look at the date of the file 
>> 5be8603b-18d0-4333-8590-38f918a22857
>> (marked in bold). Not sure how did you end up having this file in such time
>> stamp. I am guessing this could be because of the set up been not cleaned
>> properly at the time of re-installation.
>>
>> Here is the steps what I'd recommend for now:
>>
>> 1. rename 26ae19a6-b58f-446a-b079-411d4ee57450 to
>> 5be8603b-18d0-4333-8590-38f918a22857, you should have only one entry in
>> the peers folder in board 2500.
>> 2. Bring down both glusterd instances
>> 3. Bring back one by one
>>
>> And then restart glusterd to see if the issue persists.
>>
>>
>>
>> On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hope you will see in the logs..
>>>
>>> On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
>>>> Hi Atin,
>>>>
>>>> It is not getting wipe off we have changed the configuration path from
>>>> /var/lib/glusterd to /system/glusterd.
>>>>
>>>> So, they will remain as same as previous.
>>>>
>>>> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukh...@redhat.com>
>>>> wrote:
>>>>
>>>>> Abhishek,
>>>>>
>>>>> rebooting the board does wipe of /var/lib/glusterd contents in your
>>>>> set up right (as per my earlier conversation with you) ? In that case, how
>>>>> are you ensuring that the same node gets back the older UUID? If you don't
>>>>> then this is bound to happen.
>>>>>
>>>>> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
>>>>> abhishpali...@gmail.com> wrote:
>>>>>
>>>>>> Hi Team,
>>>>>>
>>>>>> Please lookinto this problem as this is very widely seen problem in
>>>>>> our system.
>>>>>>
>>>>>> We are having the setup of replicate volume setup with two brick but
>>>>>> after restarting the second board I am getting the duplicate entry in
>>>>>> "gluster peer status" command like below:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>>>>>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>>>>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>>>>>> Peer in Cluster (Connected) # *
>>>>>>
>>>>>> I am attaching all logs from both the boards and the command outputs
>>>>>> as well.
>>>>>>
>>>>>> So could you please check what is the reason to get in this situation
>>>>>> as it is very frequent in multiple case.
>>>>>>
>>>>>> Also, we are not replacing any board from setup just rebooting.
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Regards
>>>>>> Abhishek Paliwal
>>>>>>
>>>>>> ___
>>>>>> Gluster-users mailing list
>>>>>> Gluster-users@gluster.org
>>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> ~ Atin (atinm)
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>>
>>>>
>>>>
>>>> Regards
>>>> Abhishek Paliwal
>>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

The system is an embedded system, and these dates are from before the system
got into time sync.

Yes, I have also seen these two files in the peers directory on the 002500
board, and I want to know why Gluster creates the second file when the old
file already exists. As you can see, the contents of these files are the same.

If we end up in this situation, is it possible for Gluster to take care of it
by itself instead of us performing the steps you mentioned above manually?

I have some questions:

1. Based on the logs, can we find out the reason for having two peer files
with the same contents?
2. Is there any way to do it from the Gluster code?

Regards,
Abhishek

Regards,
Abhishek

On Mon, Nov 21, 2016 at 9:52 AM, Atin Mukherjee <amukh...@redhat.com> wrote:

> atin@dhcp35-96:~/Downloads/gluster_users/abhishek_dup_
> uuid/duplicate_uuid/glusterd_2500/peers$ ls -lrt
> total 8
> -rw---. 1 atin wheel 71 *Jan  1  1970* 5be8603b-18d0-4333-8590-
> 38f918a22857
> -rw---. 1 atin wheel 71 Nov 18 03:31 26ae19a6-b58f-446a-b079-
> 411d4ee57450
>
> In board 2500 look at the date of the file 
> 5be8603b-18d0-4333-8590-38f918a22857
> (marked in bold). Not sure how did you end up having this file in such time
> stamp. I am guessing this could be because of the set up been not cleaned
> properly at the time of re-installation.
>
> Here is the steps what I'd recommend for now:
>
> 1. rename 26ae19a6-b58f-446a-b079-411d4ee57450 to 
> 5be8603b-18d0-4333-8590-38f918a22857,
> you should have only one entry in the peers folder in board 2500.
> 2. Bring down both glusterd instances
> 3. Bring back one by one
>
> And then restart glusterd to see if the issue persists.
>
>
>
> On Mon, Nov 21, 2016 at 9:34 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hope you will see in the logs..
>>
>> On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Atin,
>>>
>>> It is not getting wipe off we have changed the configuration path from
>>> /var/lib/glusterd to /system/glusterd.
>>>
>>> So, they will remain as same as previous.
>>>
>>> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukh...@redhat.com>
>>> wrote:
>>>
>>>> Abhishek,
>>>>
>>>> rebooting the board does wipe of /var/lib/glusterd contents in your set
>>>> up right (as per my earlier conversation with you) ? In that case, how are
>>>> you ensuring that the same node gets back the older UUID? If you don't then
>>>> this is bound to happen.
>>>>
>>>> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
>>>> abhishpali...@gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> Please lookinto this problem as this is very widely seen problem in
>>>>> our system.
>>>>>
>>>>> We are having the setup of replicate volume setup with two brick but
>>>>> after restarting the second board I am getting the duplicate entry in
>>>>> "gluster peer status" command like below:
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>>>>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>>>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>>>>> Peer in Cluster (Connected) # *
>>>>>
>>>>> I am attaching all logs from both the boards and the command outputs
>>>>> as well.
>>>>>
>>>>> So could you please check what is the reason to get in this situation
>>>>> as it is very frequent in multiple case.
>>>>>
>>>>> Also, we are not replacing any board from setup just rebooting.
>>>>>
>>>>> --
>>>>>
>>>>> Regards
>>>>> Abhishek Paliwal
>>>>>
>>>>> ___
>>>>> Gluster-users mailing list
>>>>> Gluster-users@gluster.org
>>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> ~ Atin (atinm)
>>>>
>>>
>>>
>>>
>>> --
>>>
>>>
>>>
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
I hope you will be able to see it in the logs.

On Mon, Nov 21, 2016 at 9:17 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Atin,
>
> It is not getting wipe off we have changed the configuration path from
> /var/lib/glusterd to /system/glusterd.
>
> So, they will remain as same as previous.
>
> On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
>
>> Abhishek,
>>
>> rebooting the board does wipe of /var/lib/glusterd contents in your set
>> up right (as per my earlier conversation with you) ? In that case, how are
>> you ensuring that the same node gets back the older UUID? If you don't then
>> this is bound to happen.
>>
>> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> Please lookinto this problem as this is very widely seen problem in our
>>> system.
>>>
>>> We are having the setup of replicate volume setup with two brick but
>>> after restarting the second board I am getting the duplicate entry in
>>> "gluster peer status" command like below:
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>>> Peer in Cluster (Connected) # *
>>>
>>> I am attaching all logs from both the boards and the command outputs as
>>> well.
>>>
>>> So could you please check what is the reason to get in this situation as
>>> it is very frequent in multiple case.
>>>
>>> Also, we are not replacing any board from setup just rebooting.
>>>
>>> --
>>>
>>> Regards
>>> Abhishek Paliwal
>>>
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>
>>
>>
>>
>> --
>>
>> ~ Atin (atinm)
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate UUID entries in "gluster peer status" command

2016-11-20 Thread ABHISHEK PALIWAL
Hi Atin,

It is not getting wiped off; we have changed the configuration path from
/var/lib/glusterd to /system/glusterd.

So, the contents remain the same as before.

On Mon, Nov 21, 2016 at 9:15 AM, Atin Mukherjee <amukh...@redhat.com> wrote:

> Abhishek,
>
> rebooting the board does wipe of /var/lib/glusterd contents in your set up
> right (as per my earlier conversation with you) ? In that case, how are you
> ensuring that the same node gets back the older UUID? If you don't then
> this is bound to happen.
>
> On Mon, Nov 21, 2016 at 9:11 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Team,
>>
>> Please lookinto this problem as this is very widely seen problem in our
>> system.
>>
>> We are having the setup of replicate volume setup with two brick but
>> after restarting the second board I am getting the duplicate entry in
>> "gluster peer status" command like below:
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> *# gluster peer status Number of Peers: 2  Hostname: 10.32.0.48 Uuid:
>> 5be8603b-18d0-4333-8590-38f918a22857 State: Peer in Cluster (Connected)
>>  Hostname: 10.32.0.48 Uuid: 5be8603b-18d0-4333-8590-38f918a22857 State:
>> Peer in Cluster (Connected) # *
>>
>> I am attaching all logs from both the boards and the command outputs as
>> well.
>>
>> So could you please check what is the reason to get in this situation as
>> it is very frequent in multiple case.
>>
>> Also, we are not replacing any board from setup just rebooting.
>>
>> --
>>
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
>
> ~ Atin (atinm)
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-10 Thread ABHISHEK PALIWAL
Hi Pranith,

Could you please point me to the logs showing that the mount is not able to
connect to both of the bricks?
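
In the meantime, a minimal sketch of how the client-to-brick connections can
be checked; the volume name, brick address and port are taken from this
thread, and the client log path assumes a mount at /mnt/c as in the earlier
threads:

# confirm both bricks are online and have a port assigned
gluster volume status c_glusterfs

# in the client mount log, the connection state of each brick shows up in
# "Connected to" / "disconnected from" messages
grep -E "Connected to|disconnected from" /var/log/glusterfs/mnt-c.log | tail

# the log above shows the client switching to port 49391; confirm that port
# is reachable from the client node
telnet 10.32.0.48 49391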

On Fri, Nov 11, 2016 at 12:05 PM, Pranith Kumar Karampuri <
pkara...@redhat.com> wrote:

> As per the logs, the mount is not able to connect to both the bricks. Are
> the connections fine?
>
> On Fri, Nov 11, 2016 at 10:20 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Hi,
>>
>> Its an urgent case.
>>
>> Atleast provide your views on this
>>
>> On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We could see that sync is getting failed to sync the GlusterFS bricks
>>> due to error trace "Transport endpoint is not connected "
>>>
>>> [2016-10-31 04:06:03.627395] E [MSGID: 114031]
>>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
>>> remote operation failed [Transport endpoint is not connected]
>>> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
>>> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
>>> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
>>> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
>>> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
>>> (c_glusterfs-client-9)
>>> [2016-10-31 04:06:03.628466] E [MSGID: 114031]
>>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
>>> remote operation failed [Transport endpoint is not connected]
>>> [2016-10-31 04:06:03.628475] I [MSGID: 108019]
>>> [afr-lk-common.c:1086:afr_lock_blocking] 0-c_glusterfs-replicate-0:
>>> unable to lock on even one child
>>> [2016-10-31 04:06:03.628539] I [MSGID: 108019]
>>> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
>>> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
>>> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
>>> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
>>> connected)
>>> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind]
>>> (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
>>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
>>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
>>> (--> 
>>> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
>>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
>>> ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
>>> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
>>> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
>>> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
>>> [2016-10-31 04:06:03.629210] W [MSGID: 114031]
>>> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-9:
>>> remote operation failed. Path: 
>>> /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews
>>> (b0e5a94e-a432-4dce-b86f-a551555780a2) [Transport endpoint is not
>>> connected]
>>>
>>>
>>> Could you please tell us the reason why we are getting these trace and
>>> how to resolve this.
>>>
>>> Logs are attached here please share your analysis.
>>>
>>> Thanks in advanced
>>>
>>> --
>>> Regards
>>> Abhishek Paliwal
>>>
>>
>>
>>
>> --
>>
>>
>>
>>
>> Regards
>> Abhishek Paliwal
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>
>
>
> --
> Pranith
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-10 Thread ABHISHEK PALIWAL
Hi Rafi KC,

I have already attached all the logs in my first mail and I am getting
these logs on 25th board.

You can find the logs at
logs/d/usr/002500_glusterfiles/varlog_glusterfs/brick/


//Abhishek

On Fri, Nov 11, 2016 at 11:50 AM, Mohammed Rafi K C <rkavu...@redhat.com>
wrote:

> Hi Abhishek,
>
> Could you please see if you are bricks are healthy or not, may be you can
> do a gluster volume status or you can look into the logs. If bricks are not
> running can you please attach the bricks logs in /var/log/gluster/bricks/ .
>
>
> Rafi KC
>
> On 11/11/2016 10:20 AM, ABHISHEK PALIWAL wrote:
>
> Hi,
>
> Its an urgent case.
>
> Atleast provide your views on this
>
> On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL <
> <abhishpali...@gmail.com>abhishpali...@gmail.com> wrote:
>
>> Hi,
>>
>> We could see that sync is getting failed to sync the GlusterFS bricks due
>> to error trace "Transport endpoint is not connected "
>>
>> [2016-10-31 04:06:03.627395] E [MSGID: 114031]
>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
>> remote operation failed [Transport endpoint is not connected]
>> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
>> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
>> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
>> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
>> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
>> (c_glusterfs-client-9)
>> [2016-10-31 04:06:03.628466] E [MSGID: 114031]
>> [client-rpc-fops.c:1673:client3_3_finodelk_cbk] 0-c_glusterfs-client-9:
>> remote operation failed [Transport endpoint is not connected]
>> [2016-10-31 04:06:03.628475] I [MSGID: 108019]
>> [afr-lk-common.c:1086:afr_lock_blocking] 0-c_glusterfs-replicate-0:
>> unable to lock on even one child
>> [2016-10-31 04:06:03.628539] I [MSGID: 108019]
>> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
>> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
>> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
>> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
>> connected)
>> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind] (-->
>> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
>> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
>> (--> 
>> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
>> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
>> ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
>> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
>> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
>> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
>> [2016-10-31 04:06:03.629210] W [MSGID: 114031]
>> [client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-9:
>> remote operation failed. Path: 
>> /loadmodules_norepl/CXC1725605_P93A001/cello/emasviews
>> (b0e5a94e-a432-4dce-b86f-a551555780a2) [Transport endpoint is not
>> connected]
>>
>>
>> Could you please tell us the reason why we are getting these trace and
>> how to resolve this.
>>
>> Logs are attached here please share your analysis.
>>
>> Thanks in advanced
>>
>> --
>> Regards
>> Abhishek Paliwal
>>
>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>
>
> ___
> Gluster-users mailing 
> listGluster-users@gluster.orghttp://www.gluster.org/mailman/listinfo/gluster-users
>
>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] getting "Transport endpoint is not connected" in glusterfs mount log file.

2016-11-09 Thread ABHISHEK PALIWAL
could anyone reply on this.

On Wed, Nov 9, 2016 at 11:08 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi,
>
> We could see that sync is getting failed to sync the GlusterFS bricks due
> to error trace "Transport endpoint is not connected "
>
> [2016-10-31 04:06:03.627395] E [MSGID: 114031] 
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
> connected]
> [2016-10-31 04:06:03.628381] I [socket.c:3308:socket_submit_request]
> 0-c_glusterfs-client-9: not connected (priv->connected = 0)
> [2016-10-31 04:06:03.628432] W [rpc-clnt.c:1586:rpc_clnt_submit]
> 0-c_glusterfs-client-9: failed to submit rpc-request (XID: 0x7f5f Program:
> GlusterFS 3.3, ProgVers: 330, Proc: 30) to rpc-transport
> (c_glusterfs-client-9)
> [2016-10-31 04:06:03.628466] E [MSGID: 114031] 
> [client-rpc-fops.c:1673:client3_3_finodelk_cbk]
> 0-c_glusterfs-client-9: remote operation failed [Transport endpoint is not
> connected]
> [2016-10-31 04:06:03.628475] I [MSGID: 108019] 
> [afr-lk-common.c:1086:afr_lock_blocking]
> 0-c_glusterfs-replicate-0: unable to lock on even one child
> [2016-10-31 04:06:03.628539] I [MSGID: 108019] 
> [afr-transaction.c:1224:afr_post_blocking_inodelk_cbk]
> 0-c_glusterfs-replicate-0: Blocking inodelks failed.
> [2016-10-31 04:06:03.628630] W [fuse-bridge.c:1282:fuse_err_cbk]
> 0-glusterfs-fuse: 20790: FLUSH() ERR => -1 (Transport endpoint is not
> connected)
> [2016-10-31 04:06:03.629149] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn-0xb5c80)[0x3fff8ab79f58]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind-0x1b7a0)[0x3fff8ab1dc90]
> (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy-0x1b638)[0x3fff8ab1de10]
> (--> 
> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup-0x19af8)[0x3fff8ab1fb18]
> (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify-0x18e68)[0x3fff8ab20808]
> ) 0-c_glusterfs-client-9: forced unwinding frame type(GlusterFS 3.3)
> op(LOOKUP(27)) called at 2016-10-31 04:06:03.624346 (xid=0x7f5a)
> [2016-10-31 04:06:03.629183] I [rpc-clnt.c:1847:rpc_clnt_reconfig]
> 0-c_glusterfs-client-9: changing port to 49391 (from 0)
> [2016-10-31 04:06:03.629210] W [MSGID: 114031] 
> [client-rpc-fops.c:2971:client3_3_lookup_cbk]
> 0-c_glusterfs-client-9: remote operation failed. Path: /loadmodules_norepl/
> CXC1725605_P93A001/cello/emasviews (b0e5a94e-a432-4dce-b86f-a551555780a2)
> [Transport endpoint is not connected]
>
>
> Could you please tell us the reason why we are getting these trace and how
> to resolve this.
>
> Logs are attached here please share your analysis.
>
> Thanks in advanced
>
> --
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] posix_acl_access [Invalid argument] will cause any issue due to timeout

2016-11-08 Thread ABHISHEK PALIWAL
Hi,

I am getting the below messages, and the log file is flooded with them:


[2016-09-22 20:25:33.102737] I [dict.c:473:dict_get]
(-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x166)
[0x2ace40d9c816]
-->/usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x289)
[0x2ace40fba4a9] -->/usr/lib64/libglusterfs.so.0(dict_get+0xbb)
[0x3e9041e38b] ) 0-dict: !this || key=system.posix_acl_default [Invalid
argument]

[2016-09-22 20:25:33.778950] I [dict.c:473:dict_get]
(-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x166)
[0x2ace40d9c816]
-->/usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x22f)
[0x2ace40fba44f] -->/usr/lib64/libglusterfs.so.0(dict_get+0xbb)
[0x3e9041e38b] ) 0-dict: !this || key=system.posix_acl_access [Invalid
argument]

While searching on the net I found that the community is working on this. May
I know whether these messages cause any problem in Gluster behaviour, for
example due to some timeout?

Below is the link which I found regarding this. May I know when I can expect
a solution for this?

https://access.redhat.com/node/2705301

-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Check the possibility to incorporate DEBUG info permanently in build

2016-10-17 Thread ABHISHEK PALIWAL
Hi Vijay,

It is quite difficult to provide exact instances, but below are the two most
frequently occurring cases:

1. We get duplicate peer entries in the 'peer status' command.
2. We lose sync between the two boards because the Gluster mount point is not
present on one of the boards.


Regards,
Abhishek

On Mon, Oct 17, 2016 at 6:40 AM, Vijay Bellur <vbel...@redhat.com> wrote:

> On 10/14/2016 04:30 AM, ABHISHEK PALIWAL wrote:
>
>> Hi Team,
>>
>> As we are seeing many issues in gluster. And we are failing to address
>> most of the gluster issues due to lack of information for fault analysis.
>>
>> And for the many issue unfortunately with the initial gluster logs we
>> get a very limited information which is not at all possible to find the
>> root cause/conclude the issue.
>> Every time enabling the LOG_LEVEL to DEBUG is not feasible and few of
>> the cases are very rarely seen.
>>
>> Hence, I request you to check if there is a possibility  to incorporate
>> the debug information in build or check if its possible to introduce a
>> new debug level that can always be activated.
>>
>> Please come back on this!
>>
>
> Abhishek - please provide specific instances of the nature of logs that
> could have helped you better. The query posted by you is very broad based
> and such broad queries seldom helps us in achieving the desired outcome.
>
> Regards,
> Vijay
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Check the possibility to incorporate DEBUG info permanently in build

2016-10-14 Thread ABHISHEK PALIWAL
Hi Team,

We are seeing many issues in gluster, and we are failing to address most of them
due to a lack of information for fault analysis.

For many of the issues, the initial gluster logs unfortunately give very limited
information, from which it is not possible to find the root cause or conclude
the issue.
Enabling the DEBUG log level every time is not feasible, and a few of the cases
are seen very rarely.

Hence, I request you to check whether it is possible to incorporate the debug
information in the build, or whether a new debug level can be introduced that is
always active.
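For reference, and only as a partial workaround rather than the build change
requested above, log levels can normally be raised at runtime without a rebuild.
A minimal sketch, assuming a volume named c_glusterfs (adjust the name):

  # raise brick-side and client-side verbosity while reproducing the problem
  gluster volume set c_glusterfs diagnostics.brick-log-level DEBUG
  gluster volume set c_glusterfs diagnostics.client-log-level DEBUG

  # glusterd itself can also be started with a higher log level
  glusterd --log-level=DEBUG

  # revert once the logs are captured
  gluster volume set c_glusterfs diagnostics.brick-log-level INFO
  gluster volume set c_glusterfs diagnostics.client-log-level INFO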

Please come back on this!

-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Duplicate entry in "gluster peer status" command

2016-10-04 Thread ABHISHEK PALIWAL
Please share your analysis; this is very urgent.

On Tue, Oct 4, 2016 at 7:10 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi,
>
>
> Again I am getting duplicate peer entries in the peer status command
> while restarting the system, which is causing a sync failure in the gluster
> volume between both boards.
>
> I am attaching logs from both nodes; could you please check them and
> help me resolve the issue.
>
> In logs we have BoardA and BoardB where BoardB showing duplicate entries
> in "gluster peer status" command.
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Error log in brick file : mkdir of /opt/lvmdir/c2/brick/.trashcan/ failed [File exists]

2016-09-28 Thread ABHISHEK PALIWAL
I am seeing some problems in the mnt-c.log file as well:

[2016-09-27 13:01:00.455588] I [dict.c:473:dict_get]
(-->/usr/lib64/glusterfs/3.7.6/xlator/debug/io-stats.so(io_stats_lookup_cbk-0x1d7dc)
[0x3fff80c2a574]
-->/usr/lib64/glusterfs/3.7.6/xlator/system/posix-acl.so(posix_acl_lookup_cbk-0x15b5c)
[0x3fff80c00944] -->/usr/lib64/libglusterfs.so.0(dict_get-0xc10f4)
[0x3fff84c8dc2c] ) 0-dict: !this || key=system.posix_acl_default [Invalid
argument]
[2016-09-27 13:01:22.388314] W [MSGID: 114031]
[client-rpc-fops.c:2971:client3_3_lookup_cbk] 0-c_glusterfs-client-0:
remote operation failed. Path: /loadmodules/CXC1733370_P91A033
(a1d7c756-a9ba-4525-af5c-a8b7ebcbbb1a) [No such file or directory]
[2016-09-27 13:01:22.388403] E [fuse-bridge.c:2117:fuse_open_resume]
0-glusterfs-fuse: 8716: OPEN a1d7c756-a9ba-4525-af5c-a8b7ebcbbb1a
resolution failed

Could you please let me know the possible reason for this?



On Wed, Sep 28, 2016 at 3:58 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi,
>
> I am getting some unwanted errors when created the distributed volume and
> not able to access file from the gluster mount.
>
> Could you please let me know the reason behind these errors.
>
> Also please let me know why gluster is calling "posix_mkdir" when file is
> exist.
>
> Please find the attached log for more details.
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Error log in brick file : mkdir of /opt/lvmdir/c2/brick/.trashcan/ failed [File exists]

2016-09-28 Thread ABHISHEK PALIWAL
Hi,

I am getting some unwanted errors after creating the distributed volume, and I
am not able to access files from the gluster mount.

Could you please let me know the reason behind these errors?

Also, please let me know why gluster is calling "posix_mkdir" when the file
already exists.

Please find the attached log for more details.

-- 




Regards
Abhishek Paliwal


gluster.tar
Description: Unix tar archive
Log start: 160928-103437 - 10.220.32.69 - moshell 16.0s - /home/eandmle/tmp/file_permission/gluster_logs.log

STP69> ls /system/glusterd 

160928-10:35:49 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ ls /system/glusterd
bitd	   glustershd  hooks  options  quotad  snaps
glusterd.info  groups	   nfs	  peers   scrub   vols
$ 

STP69> lhsh 000300 ls /system/glusterd

160928-10:36:01 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 ls /system/glusterd
bitd
glusterd.info
glustershd
groups
hooks
nfs
options
peers
quotad
scrub
snaps
vols
$ 

STP69> lhsh 000300/d1 ls /system/glusterd

160928-10:36:06 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 ls /system/glusterd
bitd
glusterd.info
glustershd
groups
hooks
nfs
options
peers
quotad
scrub
snaps
vols
$ 

STP69> 

STP69> lhsh 000300 gluster volume info

160928-10:37:40 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster volume info
 
Volume Name: c_glusterfs
Type: Distribute
Volume ID: caed5dc4-1c56-4b92-af1f-99ae8271d99c
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: 10.32.0.48:/opt/lvmdir/c2/brick
Options Reconfigured:
nfs.disable: on
network.ping-timeout: 4
performance.readdir-ahead: on
$ 

STP69> lhsh 000300/d1 gluster volume info

160928-10:37:42 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster volume info
rcmd: unknown command 'gluster'
$ 

STP69> bo

160928-10:37:56 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223


00M BoardType DevsSwAllocation

01  SMXB2OE   SMXB
03  EPB2  EPB_C1  
04 1041 EPB2  PCD EPB_BLADE_A 
05 1051 EPB2  PCD EPB_BLADE_A 
27  SMXB2OE   SMXB

STP69> 

STP69> lhsh 000300 gluster volume status

160928-10:38:05 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster volume status
Status of volume: c_glusterfs
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick 10.32.0.48:/opt/lvmdir/c2/brick   49152 0  Y   1481 
 
Task Status of Volume c_glusterfs
--
There are no active volume tasks
 
$ 

STP69> lhsh 000300/d1 gluster volume status

160928-10:38:07 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster volume status
rcmd: unknown command 'gluster'
$ 

STP69> 

STP69> lhsh 000300 gluster peer status

160928-10:38:12 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 gluster peer status
Number of Peers: 0
$ 

STP69> lhsh 000300/d1 gluster peer status

160928-10:38:13 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300/d1 gluster peer status
rcmd: unknown command 'gluster'
$ 

STP69> 

STP69> lhsh 000300 tar -zcvf /d/glusterd_PIU.tar /system/glusterd

160928-10:38:19 10.220.32.69 16.0s RNC_NODE_MODEL_V_7_1551_COMPLETE stopfile=/tmp/23223
$ lhsh 000300 tar -zcvf /d/glusterd_PIU.tar /system/glusterd
tar: Removing leading `/' from member names
/system/glusterd/
/system/glusterd/hooks/
/system/glusterd/hooks/1/
/system/glusterd/hooks/1/remove-brick/
/system/glusterd/hooks/1/remove-brick/pre/
/system/glusterd/hooks/1/remove-brick/post/
/system/glusterd/hooks/1/create/
/system/glusterd/hooks/1/create/pre/
/system/glusterd/hooks/1/create/post/
/system/glusterd/hooks/1/start/
/system/glusterd/hooks/1/start/pre/
/system/glusterd/hooks/1/start/post/
/system/glusterd/hooks/1/add-brick/
/system/glusterd/hooks/1/add-brick/pre/
/system/glusterd/hooks/1/add-brick/post/
/system/glusterd/hooks/1/delete/
/system/glusterd/hooks/1/delete/pre/
/system/glusterd/hooks/1/delete/post/
/system/glusterd/hooks/1/set/
/system/glusterd/hooks/1/set/pre/
/system/glusterd/hooks/1/set/post/
/system/glusterd/hooks/1/stop/
/system/glusterd/hooks/1/stop/pre/
/system/glusterd/hooks/1/stop/post/
/system/glusterd/hooks/1/reset/
/system/glusterd/hooks/1/

[Gluster-users] Duplicate UUID entries

2016-07-29 Thread ABHISHEK PALIWAL
Hi,

After a long time I am posting one more issue here.

We have two boards with glusterfs in sync on both of them, and our test case
restarts one board continuously. In this test case we occasionally see duplicate
UUID entries in the "gluster peer status" command; it happens very rarely.


So, I just want to know the possible reason behind this problem.

There is no possibility of the glusterd.info file being deleted, and as far as I
know gluster generates a new UUID for the peer only when that file is not
present.
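For reference, the identities involved can be inspected directly on each board;
a minimal check, assuming the default glusterd working directory
/var/lib/glusterd (on some builds it lives elsewhere):

  cat /var/lib/glusterd/glusterd.info   # the local node's own UUID
  ls /var/lib/glusterd/peers/           # one file per known peer, named by its UUID
  cat /var/lib/glusterd/peers/*         # uuid=, state= and hostname entries for each peer

Comparing these on both boards when the duplicate shows up should tell whether a
second UUID was really generated or an old peer file was left behind.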


Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Query!

2016-06-17 Thread ABHISHEK PALIWAL
Hi,

I am using Gluster 3.7.6 and performing plug-in/plug-out of the board, but I am
getting the following brick logs after plugging the board in again:

[2016-06-17 07:14:36.122421] W [trash.c:1858:trash_mkdir]
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.122487] E [MSGID: 115056]
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9705: MKDIR
/.trashcan (----0001/.trashcan) ==> (Operation
not permitted) [Operation not permitted]
[2016-06-17 07:14:36.139773] W [trash.c:1858:trash_mkdir]
0-c_glusterfs-trash: mkdir issued on /.trashcan/, which is not permitted
[2016-06-17 07:14:36.139861] E [MSGID: 115056]
[server-rpc-fops.c:509:server_mkdir_cbk] 0-c_glusterfs-server: 9722: MKDIR
/.trashcan (----0001/.trashcan) ==> (Operation
not permitted) [Operation not permitted]


Could anyone tell me the reason behind this failure, i.e. when and why these
logs occur?
I have already posted the same query previously but did not get any response.
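For context, the 0-c_glusterfs-trash prefix in these messages points at the
trash translator, which manages the /.trashcan directory on the brick. Whether
the feature is enabled, and turning it off if it is not wanted, can be checked
with the following sketch (assuming the volume name c_glusterfs from the log):

  gluster volume get c_glusterfs features.trash       # shows on/off
  gluster volume set c_glusterfs features.trash off   # disable if the trash feature is not needed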

-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-06-06 Thread ABHISHEK PALIWAL
I am still facing this issue. Any suggestions?

On Fri, May 27, 2016 at 10:48 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> any hint from the logs..
>
> On Thu, May 26, 2016 at 11:59 AM, ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>>
>>
>> On Thu, May 26, 2016 at 11:54 AM, Lindsay Mathieson <
>> lindsay.mathie...@gmail.com> wrote:
>>
>>> On 25 May 2016 at 20:25, ABHISHEK PALIWAL <abhishpali...@gmail.com>
>>> wrote:
>>> > [2016-05-24 12:10:20.091267] E [MSGID: 113039]
>>> [posix.c:2570:posix_open]
>>> > 0-c_glusterfs-posix: open on
>>> >
>>> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
>>> > flags: 2 [No such file or directory]
>>> > [2016-05-24 12:13:17.305773] E [MSGID: 113039]
>>> [posix.c:2570:posix_open]
>>> > 0-c_glusterfs-posix: open on
>>> >
>>> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
>>> > flags: 2 [No such file or directory]
>>>
>>> does /opt/lvmdir/c2/brick contain anything? does it have a .glusterfs
>>> dir?
>>>
>> Yes .glusterfs directory is present and /opt/lvmdir/c2/brick containing
>> files.
>>
>>>
>>> Could the underlying file system mount for that brick have failed?
>>>
>> mount is successful
>>
>> I have doubt on the following logs
>> [2016-05-24 10:40:34.177887] W [MSGID: 113006] [posix.c:3049:posix_flush]
>> 0-c_glusterfs-posix: pfd is NULL on fd=0x3fff8c003e4c [Operation not
>> permitted]
>> [2016-05-24 10:40:34.178022] E [MSGID: 115065]
>> [server-rpc-fops.c:1354:server_flush_cbk] 0-c_glusterfs-server: 3684: FLUSH
>> -2 (a5a5e596-e786-4f48-8f04-431712d98a6b) ==> (Operation not permitted)
>> [Operation not permitted]
>>
>> Like who will call this posix_flush and in which circumstances?
>>
>> Regards,
>> Abhishek
>>
>>>
>>>
>>> --
>>> Lindsay
>>>
>>
>>
>>
>>
>
>
> --
>
>
>
>
> Regards
> Abhishek Paliwal
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-26 Thread ABHISHEK PALIWAL
Any hint from the logs?

On Thu, May 26, 2016 at 11:59 AM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

>
>
> On Thu, May 26, 2016 at 11:54 AM, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 25 May 2016 at 20:25, ABHISHEK PALIWAL <abhishpali...@gmail.com>
>> wrote:
>> > [2016-05-24 12:10:20.091267] E [MSGID: 113039] [posix.c:2570:posix_open]
>> > 0-c_glusterfs-posix: open on
>> >
>> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
>> > flags: 2 [No such file or directory]
>> > [2016-05-24 12:13:17.305773] E [MSGID: 113039] [posix.c:2570:posix_open]
>> > 0-c_glusterfs-posix: open on
>> >
>> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
>> > flags: 2 [No such file or directory]
>>
>> does /opt/lvmdir/c2/brick contain anything? does it have a .glusterfs dir?
>>
> Yes .glusterfs directory is present and /opt/lvmdir/c2/brick containing
> files.
>
>>
>> Could the underlying file system mount for that brick have failed?
>>
> mount is successful
>
> I have doubt on the following logs
> [2016-05-24 10:40:34.177887] W [MSGID: 113006] [posix.c:3049:posix_flush]
> 0-c_glusterfs-posix: pfd is NULL on fd=0x3fff8c003e4c [Operation not
> permitted]
> [2016-05-24 10:40:34.178022] E [MSGID: 115065]
> [server-rpc-fops.c:1354:server_flush_cbk] 0-c_glusterfs-server: 3684: FLUSH
> -2 (a5a5e596-e786-4f48-8f04-431712d98a6b) ==> (Operation not permitted)
> [Operation not permitted]
>
> Like who will call this posix_flush and in which circumstances?
>
> Regards,
> Abhishek
>
>>
>>
>> --
>> Lindsay
>>
>
>
>
>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-26 Thread ABHISHEK PALIWAL
On Thu, May 26, 2016 at 11:54 AM, Lindsay Mathieson <
lindsay.mathie...@gmail.com> wrote:

> On 25 May 2016 at 20:25, ABHISHEK PALIWAL <abhishpali...@gmail.com> wrote:
> > [2016-05-24 12:10:20.091267] E [MSGID: 113039] [posix.c:2570:posix_open]
> > 0-c_glusterfs-posix: open on
> >
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> > flags: 2 [No such file or directory]
> > [2016-05-24 12:13:17.305773] E [MSGID: 113039] [posix.c:2570:posix_open]
> > 0-c_glusterfs-posix: open on
> >
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> > flags: 2 [No such file or directory]
>
> does /opt/lvmdir/c2/brick contain anything? does it have a .glusterfs dir?
>
Yes .glusterfs directory is present and /opt/lvmdir/c2/brick containing
files.

>
> Could the underlying file system mount for that brick have failed?
>
mount is successful

I have doubt on the following logs
[2016-05-24 10:40:34.177887] W [MSGID: 113006] [posix.c:3049:posix_flush]
0-c_glusterfs-posix: pfd is NULL on fd=0x3fff8c003e4c [Operation not
permitted]
[2016-05-24 10:40:34.178022] E [MSGID: 115065]
[server-rpc-fops.c:1354:server_flush_cbk] 0-c_glusterfs-server: 3684: FLUSH
-2 (a5a5e596-e786-4f48-8f04-431712d98a6b) ==> (Operation not permitted)
[Operation not permitted]

Who will call this posix_flush, and in which circumstances?

Regards,
Abhishek

>
>
> --
> Lindsay
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-25 Thread ABHISHEK PALIWAL
Please reply.

On Wed, May 25, 2016 at 3:55 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi,
>
> I am using replicated volume on board gluster volume mount point is
> working fine but on other board it is mounted as well as file are present
> but it is not displaying them.
>
> When I checked the brick file of this board found following error logs:
>
> [2016-05-24 10:40:34.177887] W [MSGID: 113006] [posix.c:3049:posix_flush]
> 0-c_glusterfs-posix: pfd is NULL on fd=0x3fff8c003e4c [Operation not
> permitted]
> [2016-05-24 10:40:34.178022] E [MSGID: 115065]
> [server-rpc-fops.c:1354:server_flush_cbk] 0-c_glusterfs-server: 3684: FLUSH
> -2 (a5a5e596-e786-4f48-8f04-431712d98a6b) ==> (Operation not permitted)
> [Operation not permitted]
> [2016-05-24 10:40:43.476937] I [login.c:81:gf_auth] 0-auth/login: allowed
> user names: 9f93c42d-ae4e-42c9-aef0-6281e225784f
> [2016-05-24 10:40:43.476994] I [MSGID: 115029]
> [server-handshake.c:612:server_setvolume] 0-c_glusterfs-server: accepted
> client from
> 002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0 (version:
> 3.7.6)
> [2016-05-24 10:40:43.670540] I [login.c:81:gf_auth] 0-auth/login: allowed
> user names: 9f93c42d-ae4e-42c9-aef0-6281e225784f
> [2016-05-24 10:40:43.670589] I [MSGID: 115029]
> [server-handshake.c:612:server_setvolume] 0-c_glusterfs-server: accepted
> client from 000300-8489-2016/05/24-10:40:43:522975-c_glusterfs-client-2-0-0
> (version: 3.7.6)
> [2016-05-24 10:40:46.830027] I [MSGID: 115036]
> [server.c:552:server_rpc_notify] 0-c_glusterfs-server: disconnecting
> connection from
> 002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0
> [2016-05-24 10:40:46.830107] I [MSGID: 101055]
> [client_t.c:419:gf_client_unref] 0-c_glusterfs-server: Shutting down
> connection 002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0
> [2016-05-24 10:42:33.710782] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 10:50:39.257000] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 10:50:42.165933] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:00:43.003797] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:10:44.003624] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:20:45.003890] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:30:46.004248] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:40:47.003959] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 11:50:48.003996] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 12:00:49.003791] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 12:10:20.091267] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
> [2016-05-24 12:13:17.305773] E [MSGID: 113039] [posix.c:2570:posix_open]
> 0-c_glusterfs-posix: open on
> /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
> flags: 2 [No such file or directory]
>
> And following behavior is observed when trying to access brick from mount
> point:
>
> 000300> ls -lart  /c/loadmodules
> total 0  ===> size is 0
> 000300> ls -lart c

[Gluster-users] Gluster Volume mounted but not able to show the files from mount point

2016-05-25 Thread ABHISHEK PALIWAL
Hi,

I am using a replicated volume across two boards. On one board the gluster
volume mount point is working fine; on the other board the volume is mounted and
the files are present, but it is not displaying them.

When I checked the brick log file of this board, I found the following error logs:

[2016-05-24 10:40:34.177887] W [MSGID: 113006] [posix.c:3049:posix_flush]
0-c_glusterfs-posix: pfd is NULL on fd=0x3fff8c003e4c [Operation not
permitted]
[2016-05-24 10:40:34.178022] E [MSGID: 115065]
[server-rpc-fops.c:1354:server_flush_cbk] 0-c_glusterfs-server: 3684: FLUSH
-2 (a5a5e596-e786-4f48-8f04-431712d98a6b) ==> (Operation not permitted)
[Operation not permitted]
[2016-05-24 10:40:43.476937] I [login.c:81:gf_auth] 0-auth/login: allowed
user names: 9f93c42d-ae4e-42c9-aef0-6281e225784f
[2016-05-24 10:40:43.476994] I [MSGID: 115029]
[server-handshake.c:612:server_setvolume] 0-c_glusterfs-server: accepted
client from
002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0 (version:
3.7.6)
[2016-05-24 10:40:43.670540] I [login.c:81:gf_auth] 0-auth/login: allowed
user names: 9f93c42d-ae4e-42c9-aef0-6281e225784f
[2016-05-24 10:40:43.670589] I [MSGID: 115029]
[server-handshake.c:612:server_setvolume] 0-c_glusterfs-server: accepted
client from 000300-8489-2016/05/24-10:40:43:522975-c_glusterfs-client-2-0-0
(version: 3.7.6)
[2016-05-24 10:40:46.830027] I [MSGID: 115036]
[server.c:552:server_rpc_notify] 0-c_glusterfs-server: disconnecting
connection from
002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0
[2016-05-24 10:40:46.830107] I [MSGID: 101055]
[client_t.c:419:gf_client_unref] 0-c_glusterfs-server: Shutting down
connection 002500-24580-2016/05/24-10:40:43:430593-c_glusterfs-client-2-0-0
[2016-05-24 10:42:33.710782] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 10:50:39.257000] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 10:50:42.165933] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:00:43.003797] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:10:44.003624] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:20:45.003890] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:30:46.004248] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:40:47.003959] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 11:50:48.003996] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 12:00:49.003791] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 12:10:20.091267] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]
[2016-05-24 12:13:17.305773] E [MSGID: 113039] [posix.c:2570:posix_open]
0-c_glusterfs-posix: open on
/opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6,
flags: 2 [No such file or directory]

And the following behavior is observed when trying to access the brick contents
from the mount point:

000300> ls -lart  /c/loadmodules
total 0  ===> size is 0
000300> ls -lart c/loadmodules/CXC1736518_P92A184
-rw-rw-rw- 1 root root 1520582 May 18 14:08 c/loadmodule/CXC1736518_P92A184
===> but showing file properties
000300> ls -lart  /c/loadmoduless
ls: cannot access /c/loadmoduless: No such file or directory ==> if
file/dir not present shows an error.
000300>


Could anyone help me understand why this abnormal behavior is seen?
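In case it helps narrow this down: the brick log above keeps failing to open the
same gfid path, so one sanity check is whether that gfid link and the underlying
directory still look consistent on the backend. A small sketch using the paths
from the log (purely illustrative):

  # does the gfid hard-link from the repeated error still exist on the brick?
  ls -l /opt/lvmdir/c2/brick/.glusterfs/fb/14/fb147cca-ec09-4259-9dfe-df883219e6a6

  # compare the backend view of the directory with the mount view
  ls -la /opt/lvmdir/c2/brick/loadmodules
  ls -la /c/loadmodules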
-- 

Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-22 Thread ABHISHEK PALIWAL
Hi Atin,

Thanks for your reply. But if we do end up in this situation, what is the
solution to recover from it? I have already removed /var/log/glusterd from one
peer; do I need to remove /var/log/glusterd from both of the peers?
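In case it is useful, here is a rough sketch of the remove-brick/add-brick route
suggested earlier in this thread for rebuilding a wiped brick. The volume name,
host and brick path are only placeholders, and the exact steps depend on the
actual state of the volume:

  # take the damaged brick out of the replica pair
  gluster volume remove-brick c_glusterfs replica 1 <nodeB>:/opt/lvmdir/c2/brick force

  # recreate the (empty) brick directory on node B, add it back, and let self-heal resync
  gluster volume add-brick c_glusterfs replica 2 <nodeB>:/opt/lvmdir/c2/brick force
  gluster volume heal c_glusterfs full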

Regards,
Abhishek

On Fri, May 20, 2016 at 8:39 PM, Atin Mukherjee <atin.mukherje...@gmail.com>
wrote:

> -Atin
> Sent from one plus one
> On 20-May-2016 5:34 PM, "ABHISHEK PALIWAL" <abhishpali...@gmail.com>
> wrote:
> >
> > Actually we have some other files related to system initial
> configuration for that we
> > need to format the volume where these bricks are also created and after
> this we are
> > facing some abnormal behavior in gluster and some failure logs like
> volume ID mismatch something.
> >
> > That is why I am asking this is the right way to format volume where
> bricks are created.
>
> No certainly not. If you format your brick, you loose the data and so as
> all the extended attributes. In this case your volume would bound to behave
> abnormally.
>
> >
> > and also is there any link between /var/lib/glusterd and xattr stored in
> .glusterfs directory at brick path.
> >
> > Regards,
> > Abhishek
> >
> > On Fri, May 20, 2016 at 5:25 PM, Atin Mukherjee <amukh...@redhat.com>
> wrote:
> >>
> >> And most importantly why would you do that? What's your use case
> Abhishek?
> >>
> >> On 05/20/2016 05:03 PM, Lindsay Mathieson wrote:
> >> > On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote:
> >> >> I am not getting any failure and after restart the glusterd when I
> run
> >> >> volume info command it creates the brick directory
> >> >> as well as .glsuterfs (xattrs).
> >> >>
> >> >> but some time even after restart the glusterd, volume info command
> >> >> showing no volume present.
> >> >>
> >> >> Could you please tell me why this unpredictable problem is occurring.
> >> >>
> >> >
> >> > Because as stated earlier you erase all the information about the
> >> > brick?  How is this unpredictable?
> >> >
> >> >
> >> > If you want to delete and recreate a brick you should have used the
> >> > remove-brick/add-brick commands.
> >> >
> >> > --
> >> > Lindsay Mathieson
> >> >
> >> >
> >> >
> >> > ___
> >> > Gluster-users mailing list
> >> > Gluster-users@gluster.org
> >> > http://www.gluster.org/mailman/listinfo/gluster-users
> >> >
> >
> >
> >
> >
> > --
> >
> >
> >
> >
> > Regards
> > Abhishek Paliwal
> >
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
>
> -Atin
> Sent from one plus one
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
Actually, we have some other files related to the system's initial
configuration; for those we need to format the volume on which these bricks are
also created, and after doing this we are facing some abnormal behavior in
gluster and some failure logs, such as a volume ID mismatch.

That is why I am asking whether formatting the volume where the bricks are
created is the right way to do this.

Also, is there any link between /var/lib/glusterd and the xattrs stored in the
.glusterfs directory at the brick path?
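For what it is worth, the link can be seen directly: glusterd records the
volume's UUID in its working directory, and the same UUID is stamped as an xattr
on each brick root, which is why a formatted brick refuses to start. A minimal
check, assuming the volume c_glusterfs and brick path /opt/lvmdir/c2/brick used
elsewhere in these threads:

  # UUID as recorded by glusterd
  grep volume-id /var/lib/glusterd/vols/c_glusterfs/info

  # UUID as stamped on the brick root (the two must match)
  getfattr -n trusted.glusterfs.volume-id -e hex /opt/lvmdir/c2/brick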

Regards,
Abhishek

On Fri, May 20, 2016 at 5:25 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

> And most importantly why would you do that? What's your use case Abhishek?
>
> On 05/20/2016 05:03 PM, Lindsay Mathieson wrote:
> > On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote:
> >> I am not getting any failure and after restart the glusterd when I run
> >> volume info command it creates the brick directory
> >> as well as .glsuterfs (xattrs).
> >>
> >> but some time even after restart the glusterd, volume info command
> >> showing no volume present.
> >>
> >> Could you please tell me why this unpredictable problem is occurring.
> >>
> >
> > Because as stated earlier you erase all the information about the
> > brick?  How is this unpredictable?
> >
> >
> > If you want to delete and recreate a brick you should have used the
> > remove-brick/add-brick commands.
> >
> > --
> > Lindsay Mathieson
> >
> >
> >
> > _______
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-users
> >
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
I am not getting any failure, and after restarting glusterd, when I run the
volume info command it creates the brick directory as well as .glusterfs (the
xattrs).

But sometimes, even after restarting glusterd, the volume info command shows no
volume present.

Could you please tell me why this unpredictable problem is occurring?

Regards,
Abhishek

On Fri, May 20, 2016 at 3:50 PM, Kaushal M <kshlms...@gmail.com> wrote:

> This would erase the xattrs set on the brick root (volume-id), which
> identify it as a brick. Brick processes will fail to start when this
> xattr isn't present.
>
>
> On Fri, May 20, 2016 at 3:42 PM, ABHISHEK PALIWAL
> <abhishpali...@gmail.com> wrote:
> > Hi
> >
> > What will happen if we format the volume where the bricks of replicate
> > gluster volume's are created and restart the glusterd on both node.
> >
> > It will work fine or in this case need to remove /var/lib/glusterd
> directory
> > as well.
> >
> > --
> > Regards
> > Abhishek Paliwal
> >
> > ___
> > Gluster-devel mailing list
> > gluster-de...@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
>



-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Query!

2016-05-20 Thread ABHISHEK PALIWAL
Hi

What will happen if we format the volume on which the bricks of a replicated
gluster volume are created, and then restart glusterd on both nodes?

Will it work fine, or do we also need to remove the /var/lib/glusterd directory
in this case?

-- 
Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Gluster Brick Offline after reboot!!

2016-05-04 Thread ABHISHEK PALIWAL
I am talking about the time taken by GlusterD to mark the process offline,
because here GlusterD is responsible for marking the brick online/offline.

Is it configurable?
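For reference, the only tunable of this kind I am aware of on the brick/client
connection is network.ping-timeout, which the volume info shown elsewhere in
these threads has set to 4 seconds; it can be raised if short network glitches
are expected. A sketch, assuming the volume name c_glusterfs:

  gluster volume get c_glusterfs network.ping-timeout      # current value
  gluster volume set c_glusterfs network.ping-timeout 30   # tolerate longer glitches before the connection is declared dead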

On Wed, May 4, 2016 at 5:53 PM, Atin Mukherjee <amukh...@redhat.com> wrote:

> Abhishek,
>
> See the response inline.
>
>
> On 05/04/2016 05:43 PM, ABHISHEK PALIWAL wrote:
> > Hi Atin,
> >
> > please reply, is there any configurable time out parameter for brick
> > process to go offline which we can increase?
> >
> > Regards,
> > Abhishek
> >
> > On Thu, Apr 21, 2016 at 12:34 PM, ABHISHEK PALIWAL
> > <abhishpali...@gmail.com <mailto:abhishpali...@gmail.com>> wrote:
> >
> > Hi Atin,
> >
> > Please answer following doubts as well:
> >
> > 1 .If there is a temporary glitch in the network , will that affect
> > the gluster brick process in anyway, Is there any timeout for the
> > brick process to go offline in case of the glitch in the network.
>   If there is disconnection, GlusterD will receive it and mark the
> brick as disconnected even if the brick process is online. So answer to
> this question is both yes and no. From process perspective they are
> still up but not to the other components/layers and that may impact the
> operations (both mgmt & I/O given there is a disconnect between client
> and brick processes too)
> >
> > 2. Is there is any configurable time out parameter which we can
> > increase ?
> I don't get this question. What time out are you talking about?
> >
> > 3.Brick and glusterd connected by unix domain socket.It is just a
> > local socket then why it is disconnect in below logs:
>   This is not true, its over TCP socket.
> >
> >  1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
> > [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
> > Brick 10.32.   1.144:/opt/lvmdir/c2/brick has disconnected from
> > glusterd.
> >  1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
> > [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
> > brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
> >
> > Regards,
> > Abhishek
> >
> >
> > On Tue, Apr 19, 2016 at 1:12 PM, ABHISHEK PALIWAL
> > <abhishpali...@gmail.com <mailto:abhishpali...@gmail.com>> wrote:
> >
> > Hi Atin,
> >
> > Thanks.
> >
> > Have more doubts here.
> >
> > Brick and glusterd connected by unix domain socket.It is just a
> > local socket then why it is disconnect in below logs:
> >
> >  1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
> > [glusterd-handler.c:4908:__glusterd_brick_rpc_notify]
> 0-management:
> > Brick 10.32.   1.144:/opt/lvmdir/c2/brick has disconnected
> from
> > glusterd.
> >  1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
> > [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd:
> > Setting
> > brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
> >
> >
> > Regards,
> > Abhishek
> >
> >
> > On Fri, Apr 15, 2016 at 9:14 AM, Atin Mukherjee
> > <amukh...@redhat.com <mailto:amukh...@redhat.com>> wrote:
> >
> >
> >
> > On 04/14/2016 04:07 PM, ABHISHEK PALIWAL wrote:
> > >
> > >
> > > On Thu, Apr 14, 2016 at 2:33 PM, Atin Mukherjee <
> amukh...@redhat.com <mailto:amukh...@redhat.com>
> > > <mailto:amukh...@redhat.com <mailto:amukh...@redhat.com>>>
> wrote:
> > >
> > >
> > >
> >     >     On 04/05/2016 03:35 PM, ABHISHEK PALIWAL wrote:
> > > >
> > > >
> > > > On Tue, Apr 5, 2016 at 2:22 PM, Atin Mukherjee <
> amukh...@redhat.com <mailto:amukh...@redhat.com>
> > <mailto:amukh...@redhat.com <mailto:amukh...@redhat.com>>
> > > > <mailto:amukh...@redhat.com
> > <mailto:amukh...@redhat.com> <mailto:amukh...@redhat.com
> > <mailto:amukh...@redhat.com>>>> wrote:
> > > >
> > > >
> > > >
> > > > On 04/05/2016 01:04 PM, ABHISHEK P

Re: [Gluster-users] [Gluster-devel] Gluster Brick Offline after reboot!!

2016-05-04 Thread ABHISHEK PALIWAL
Hi Atin,

Please reply: is there any configurable timeout parameter, which we could
increase, for the brick process to go offline?

Regards,
Abhishek

On Thu, Apr 21, 2016 at 12:34 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com>
wrote:

> Hi Atin,
>
> Please answer following doubts as well:
>
> 1 .If there is a temporary glitch in the network , will that affect the
> gluster brick process in anyway, Is there any timeout for the brick process
> to go offline in case of the glitch in the network.
>
> 2. Is there is any configurable time out parameter which we can increase ?
>
> 3.Brick and glusterd connected by unix domain socket.It is just a local
> socket then why it is disconnect in below logs:
>
>  1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
> [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
> Brick 10.32.   1.144:/opt/lvmdir/c2/brick has disconnected from
> glusterd.
>  1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
> [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
> brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
>
> Regards,
> Abhishek
>
>
> On Tue, Apr 19, 2016 at 1:12 PM, ABHISHEK PALIWAL <abhishpali...@gmail.com
> > wrote:
>
>> Hi Atin,
>>
>> Thanks.
>>
>> Have more doubts here.
>>
>> Brick and glusterd connected by unix domain socket.It is just a local
>> socket then why it is disconnect in below logs:
>>
>>  1667 [2016-04-03 10:12:32.984331] I [MSGID: 106005]
>> [glusterd-handler.c:4908:__glusterd_brick_rpc_notify] 0-management:
>> Brick 10.32.   1.144:/opt/lvmdir/c2/brick has disconnected from
>> glusterd.
>>  1668 [2016-04-03 10:12:32.984366] D [MSGID: 0]
>> [glusterd-utils.c:4872:glusterd_set_brick_status] 0-glusterd: Setting
>> brick 10.32.1.144:/opt/lvmdir/c2/brick status to stopped
>>
>>
>> Regards,
>> Abhishek
>>
>>
>> On Fri, Apr 15, 2016 at 9:14 AM, Atin Mukherjee <amukh...@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On 04/14/2016 04:07 PM, ABHISHEK PALIWAL wrote:
>>> >
>>> >
>>> > On Thu, Apr 14, 2016 at 2:33 PM, Atin Mukherjee <amukh...@redhat.com
>>> > <mailto:amukh...@redhat.com>> wrote:
>>> >
>>> >
>>> >
>>> > On 04/05/2016 03:35 PM, ABHISHEK PALIWAL wrote:
>>> > >
>>> > >
>>> > > On Tue, Apr 5, 2016 at 2:22 PM, Atin Mukherjee <
>>> amukh...@redhat.com <mailto:amukh...@redhat.com>
>>> > > <mailto:amukh...@redhat.com <mailto:amukh...@redhat.com>>>
>>> wrote:
>>> > >
>>> > >
>>> > >
>>> > > On 04/05/2016 01:04 PM, ABHISHEK PALIWAL wrote:
>>> > > > Hi Team,
>>> > > >
>>> > > > We are using Gluster 3.7.6 and facing one problem in which
>>> > brick is not
>>> > > > comming online after restart the board.
>>> > > >
>>> > > > To understand our setup, please look the following steps:
>>> > > > 1. We have two boards A and B on which Gluster volume is
>>> > running in
>>> > > > replicated mode having one brick on each board.
>>> > > > 2. Gluster mount point is present on the Board A which is
>>> > sharable
>>> > > > between number of processes.
>>> > > > 3. Till now our volume is in sync and everthing is working
>>> fine.
>>> > > > 4. Now we have test case in which we'll stop the glusterd,
>>> > reboot the
>>> > > > Board B and when this board comes up, starts the glusterd
>>> > again on it.
>>> > > > 5. We repeated Steps 4 multiple times to check the
>>> > reliability of system.
>>> > > > 6. After the Step 4, sometimes system comes in working
>>> state
>>> > (i.e. in
>>> > > > sync) but sometime we faces that brick of Board B is
>>> present in
>>> > > > “gluster volume status” command but not be online even
>>> > waiting for
>>> > > > more than a minute.
>>> > > As I mentioned in another email thread until and unless the
>>> > log shows
>>> > > the evidence that there was

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread ABHISHEK PALIWAL
I have restarted multiple times but it is not changing; I am still not able to
see nfs_acl in my environment.

What do I need to do to file the bug?

Regards,
Abhishek

On Wed, May 4, 2016 at 1:28 PM, Soumya Koduri <skod...@redhat.com> wrote:

> Even on my setup if I change nfs.port, all the other services also started
> registering on those ports. Can you please file a bug for it.
> That seems like a bug (or is it intentional..Niels?).
>
> 100005    3   tcp   2049  mountd
> 100005    1   tcp   2049  mountd
> 100003    3   tcp   2049  nfs
> 100021    4   tcp   2049  nlockmgr
> 100227    3   tcp   2049  nfs_acl
>
> However, as you can see NFSACL did come up. Try restarting volume
> " gluster v start  force" and check your logs again.
>
> Thanks,
> Soumya
>
>
> On 05/04/2016 12:29 PM, ABHISHEK PALIWAL wrote:
>
>> i am changing the nfs.port using gluster volume set gv0 nfs.port 2049
>> but it automatically changing MOUNTD as well
>>
>> and nfs.disable is also off
>>
>> gluster volume get gv0 nfs.disable
>> Option  Value
>> --      -
>> nfs.disable off
>>
>>
>>
>>


-- 




Regards
Abhishek Paliwal
___
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] Exporting Gluster Volume

2016-05-04 Thread ABHISHEK PALIWAL
I am changing the NFS port with "gluster volume set gv0 nfs.port 2049", but it
is automatically changing MOUNTD as well.

nfs.disable is also off:

gluster volume get gv0 nfs.disable
Option  Value
--      -
nfs.disable off
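For completeness, the sequence suggested in the reply quoted below, written out
as one sketch (volume name gv0 as above; this is not a verified fix):

  gluster volume reset gv0 force            # drop the custom nfs.* settings
  gluster volume set gv0 nfs.disable off    # keep Gluster NFS enabled
  gluster volume set gv0 nfs.port 2049      # or any port outside 38465-38469
  gluster volume start gv0 force            # restart the NFS and brick processes
  rpcinfo -p                                # check mountd, nfs, nlockmgr and nfs_acl registrations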



On Wed, May 4, 2016 at 12:25 PM, Soumya Koduri <skod...@redhat.com> wrote:

> Not sure why mountd  is registering on 2049 port number. Please try
> resetting volume options (if its fine) using "gluster v reset 
> force" and enable only nfs and verify the behavior.
>
> "gluster v set  nfs.disable off"
>
> Thanks,
> Soumya
>
> On 05/04/2016 12:17 PM, ABHISHEK PALIWAL wrote:
>
>> still having the same problem not able to see NETFACL option in rpcinfo -p
>>
>> rpcinfo -p
>> program vers proto   port  service
>>  100000    4   tcp    111  portmapper
>>  100000    3   tcp    111  portmapper
>>  100000    2   tcp    111  portmapper
>>  100000    4   udp    111  portmapper
>>  100000    3   udp    111  portmapper
>>  100000    2   udp    111  portmapper
>>  100005    3   tcp   2049  mountd
>>  100005    1   tcp   2049  mountd
>>  100003    3   tcp   2049  nfs
>>  100227    3   tcp   2049
>>
>>
>>
>>
>> On Wed, May 4, 2016 at 12:09 PM, Soumya Koduri <skod...@redhat.com
>> <mailto:skod...@redhat.com>> wrote:
>>
>>
>>
>> On 05/04/2016 12:04 PM, ABHISHEK PALIWAL wrote:
>>
>> but gluster mentioned that Gluster NFS works on 38465-38469 that
>> why I
>> took these ports.
>>
>>
>> yes. Gluster-NFS  uses ports 38465-38469 to register the side-band
>> protocols it needs like MOUNTD, NLM, NFSACL etc.. I am not sure, but
>> maybe the wording in that documentation was misleading.
>>
>> Thanks,
>> Soumya
>>
>>
>> Regards,
>> Abhishek
>>
>> On Wed, May 4, 2016 at 12:00 PM, Soumya Koduri
>> <skod...@redhat.com <mailto:skod...@redhat.com>
>> <mailto:skod...@redhat.com <mailto:skod...@redhat.com>>> wrote:
>>
>>   > nfs.port
>>   > 38465
>>
>>  Is there any reason behind choosing this port for NFS
>> protocol? This
>>  port number seems to be used for MOUNT (V3) protocol as
>> well and
>>  that may have resulted in port registration failures. Could
>> you
>>  please change nfs port to default (2049) or some other port
>>     (other
>>  than 38465-38469). Also please make sure kernel-nfs is
>> stopeed.
>>
>>  Thanks,
>>  Soumya
>>
>>
>>  On 05/04/2016 11:49 AM, ABHISHEK PALIWAL wrote:
>>
>>  Hi Soumya,
>>
>>
>>  Please find attached nfs.log file.
>>
>>  Regards,
>>  Abhishek
>>
>>  On Wed, May 4, 2016 at 11:45 AM, ABHISHEK PALIWAL
>>  <abhishpali...@gmail.com
>> <mailto:abhishpali...@gmail.com> <mailto:abhishpali...@gmail.com
>> <mailto:abhishpali...@gmail.com>>
>>  <mailto:abhishpali...@gmail.com
>> <mailto:abhishpali...@gmail.com>
>>  <mailto:abhishpali...@gmail.com
>> <mailto:abhishpali...@gmail.com>>>> wrote:
>>
>>   HI Soumya,
>>
>>   Thanks for reply.
>>
>>   Yes, I am getting following error in
>>  /var/log/glusterfs/nfs.log file
>>
>>   [2016-04-25 06:27:23.721851] E [MSGID: 112109]
>>  [nfs.c:1482:init]
>>   0-nfs: Failed to initialize protocols
>>
>>   Please suggest me how can I resolve it.
>>
>>   Regards,
>>   Abhishek
>>
>>   On Wed, May 4, 2016 at 11:33 AM, Soumya Koduri
>>  <skod...@redhat.com <mailto:skod...@redhat.com>
>> <mailto:skod...@redhat.com <mailto:skod...@redhat.com>>
>>       <mailto:skod...@redhat.com
>> <mailto:skod...@redhat.com> <mailto:skod...@redhat.com
>> <mailto:skod...@redhat.com>>>> wrote:
>>
>> 
