Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
We recommend using 'tirpc' in the later releases. Use '--with-tirpc' while
running ./configure.
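
For example, a from-source build with that flag might look like this (an
illustrative sketch; the prefix and any other options depend on your build
environment):

  ./configure --with-tirpc
  make && make install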

On Wed, Mar 13, 2019 at 10:55 AM ABHISHEK PALIWAL 
wrote:

> Hi Amar,
>
> this problem seems to be a configuration issue due to librpc.
>
> Could you please let me know what configuration I should use?
>
> Regards,
> Abhishek
>
> On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL 
> wrote:
>
>> logs for libgfrpc.so
>>
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
>> not a dynamic executable
>> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
>> not a dynamic executable
>>
>>
>> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL <
>> abhishpali...@gmail.com> wrote:
>>
>>> Here are the logs:
>>>
>>>
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>>> not a dynamic executable
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>>> not a dynamic executable
>>> pabhishe@arn-build3$ldd
>>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>>> not a dynamic executable
>>>
>>>
>>> For backtraces I have attached the core_logs.txt file.
>>>
>>> Regards,
>>> Abhishek
>>>
>>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>>> atumb...@redhat.com> wrote:
>>>
 Hi Abhishek,

 Few more questions,


> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
> abhishpali...@gmail.com> wrote:
>
>> Hi Amar,
>>
>> Below are the requested logs
>>
>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>> not a dynamic executable
>>
>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>> not a dynamic executable
>>
>>
 Can you please add a * at the end, so it gets the linked library list
 from the actual files (ideally this is a symlink, but I expected it to
 resolve like in Fedora).



> root@128:/# gdb /usr/sbin/glusterd core.1099
>> GNU gdb (GDB) 7.10.1
>> Copyright (C) 2015 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later <
>> http://gnu.org/licenses/gpl.html>
>> This is free software: you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law.  Type "show
>> copying"
>> and "show warranty" for details.
>> This GDB was configured as "powerpc64-wrs-linux".
>> Type "show configuration" for configuration details.
>> For bug reporting instructions, please see:
>> <http://www.gnu.org/software/gdb/bugs/>.
>> Find the GDB manual and other documentation resources online at:
>> <http://www.gnu.org/software/gdb/documentation/>.
>> For help, type "help".
>> Type "apropos word" to search for commands related to "word"...
>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>> found)...done.
>> [New LWP 1109]
>> [New LWP 1101]
>> [New LWP 1105]
>> [New LWP 1110]
>> [New LWP 1099]
>> [New LWP 1107]
>> [New LWP 1119]
>> [New LWP 1103]
>> [New LWP 1112]
>> [New LWP 1116]
>> [New LWP 1104]
>> [New LWP 1239]
>> [New LWP 1106]
>> [New LWP ]
>> [New LWP 1108]
>> [New LWP 1117]
>> [New LWP 1102]
>> [New LWP 1118]
>> [New LWP 1100]
>> [New LWP 1114]
>> [New LWP 1113]
>> [New LWP 1115]
>>
>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>> Do you need "set solib-search-path" or "set sysroot"?
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
>> bytes=bytes@entry=36) at malloc.c:3327
>> 3327 {
>> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
>> (gdb) bt full
>>
>
 This is the backtrace of one particular thread. I need the output of the command

 (gdb) thread apply all bt full


 Also, considering this is a crash in the malloc library call itself, I would
 like to know the details of the OS, kernel version, and gcc version.

 Regards,
 Amar

 #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
>> bytes=bytes@entry=36) at malloc.c:3327
>> nb = <optimized out>
>> idx = <optimized out>
>> bin = <optimized out>
>> victim = <optimized out>
>> size = <optimized out>
>> victim_index = <optimized out>
>> remainder = <optimized out>
>> remainder_size = <optimized out>
>> block = <optimized out>
>> bit = <optimized out>
>> map = <optimized out>
>> fwd = <optimized out>
>> bck = <optimized out>
>> errstr = 0x0
>> __func__ = "_int_malloc"

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar,

this problem seems to be a configuration issue due to librpc.

Could you please let me know what configuration I should use?

Regards,
Abhishek

On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL 
wrote:

> logs for libgfrpc.so
>
> pabhishe@arn-build3$ldd
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
> not a dynamic executable
> ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
> not a dynamic executable
>
>
> On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL 
> wrote:
>
>> Here are the logs:
>>
>>
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
>> not a dynamic executable
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
>> not a dynamic executable
>> pabhishe@arn-build3$ldd
>> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
>> not a dynamic executable
>>
>>
>> For backtraces I have attached the core_logs.txt file.
>>
>> Regards,
>> Abhishek
>>
>> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
>> atumb...@redhat.com> wrote:
>>
>>> Hi Abhishek,
>>>
>>> Few more questions,
>>>
>>>
 On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
 abhishpali...@gmail.com> wrote:

> Hi Amar,
>
> Below are the requested logs
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
> not a dynamic executable
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
> not a dynamic executable
>
>
>>> Can you please add a * at the end, so it gets the linked library list
>>> from the actual files (ideally this is a symlink, but I expected it to
>>> resolve like in Fedora).
>>>
>>>
>>>
 root@128:/# gdb /usr/sbin/glusterd core.1099
> GNU gdb (GDB) 7.10.1
> Copyright (C) 2015 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show
> copying"
> and "show warranty" for details.
> This GDB was configured as "powerpc64-wrs-linux".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
> found)...done.
> [New LWP 1109]
> [New LWP 1101]
> [New LWP 1105]
> [New LWP 1110]
> [New LWP 1099]
> [New LWP 1107]
> [New LWP 1119]
> [New LWP 1103]
> [New LWP 1112]
> [New LWP 1116]
> [New LWP 1104]
> [New LWP 1239]
> [New LWP 1106]
> [New LWP ]
> [New LWP 1108]
> [New LWP 1117]
> [New LWP 1102]
> [New LWP 1118]
> [New LWP 1100]
> [New LWP 1114]
> [New LWP 1113]
> [New LWP 1115]
>
> warning: Could not load shared library symbols for linux-vdso64.so.1.
> Do you need "set solib-search-path" or "set sysroot"?
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
> --volfile-id gv0.128.224.95.140.tmp-bric'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> 3327 {
> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
> (gdb) bt full
>

>>> This is the backtrace of one particular thread. I need the output of the command
>>>
>>> (gdb) thread apply all bt full
>>>
>>>
>>> Also, considering this is a crash in the malloc library call itself, I would
>>> like to know the details of the OS, kernel version, and gcc version.
>>>
>>> Regards,
>>> Amar
>>>
>>> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> nb = <optimized out>
> idx = <optimized out>
> bin = <optimized out>
> victim = <optimized out>
> size = <optimized out>
> victim_index = <optimized out>
> remainder = <optimized out>
> remainder_size = <optimized out>
> block = <optimized out>
> bit = <optimized out>
> map = <optimized out>
> fwd = <optimized out>
> bck = <optimized out>
> errstr = 0x0
> __func__ = "_int_malloc"
> #1  0x3fffb76a93dc in __GI___libc_malloc (bytes=36) at
> malloc.c:2921
> ar_ptr = 0x3fffa820
> victim = <optimized out>
> hook = <optimized out>
> __func__ = "__libc_malloc"
> #2  0x3fffb7764fd0 in x_inline (xdrs=0x3fffb1686d20,
> len=<optimized out>) at xdr_sizeof.c:89
> len = 36

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
logs for libgfrpc.so

pabhishe@arn-build3$ldd
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.*
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0:
not a dynamic executable
./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1:
not a dynamic executable


On Wed, Mar 13, 2019 at 10:02 AM ABHISHEK PALIWAL 
wrote:

> Here are the logs:
>
>
> pabhishe@arn-build3$ldd
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.*
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0:
> not a dynamic executable
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1:
> not a dynamic executable
> pabhishe@arn-build3$ldd
> ./5.4-r0/sysroot-destdir/usr/lib64/libglusterfs.so.0.0.1
> not a dynamic executable
>
>
> For backtraces I have attached the core_logs.txt file.
>
> Regards,
> Abhishek
>
> On Wed, Mar 13, 2019 at 9:51 AM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>> Hi Abhishek,
>>
>> Few more questions,
>>
>>
>>> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL <
>>> abhishpali...@gmail.com> wrote:
>>>
 Hi Amar,

 Below are the requested logs

 pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
 not a dynamic executable

 pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
 not a dynamic executable


>> Can you please add a * at the end, so it gets the linked library list
>> from the actual files (ideally this is a symlink, but I expected it to
>> resolve like in Fedora).
>>
>>
>>
>>> root@128:/# gdb /usr/sbin/glusterd core.1099
 GNU gdb (GDB) 7.10.1
 Copyright (C) 2015 Free Software Foundation, Inc.
 License GPLv3+: GNU GPL version 3 or later <
 http://gnu.org/licenses/gpl.html>
 This is free software: you are free to change and redistribute it.
 There is NO WARRANTY, to the extent permitted by law.  Type "show
 copying"
 and "show warranty" for details.
 This GDB was configured as "powerpc64-wrs-linux".
 Type "show configuration" for configuration details.
 For bug reporting instructions, please see:
 <http://www.gnu.org/software/gdb/bugs/>.
 Find the GDB manual and other documentation resources online at:
 <http://www.gnu.org/software/gdb/documentation/>.
 For help, type "help".
 Type "apropos word" to search for commands related to "word"...
 Reading symbols from /usr/sbin/glusterd...(no debugging symbols
 found)...done.
 [New LWP 1109]
 [New LWP 1101]
 [New LWP 1105]
 [New LWP 1110]
 [New LWP 1099]
 [New LWP 1107]
 [New LWP 1119]
 [New LWP 1103]
 [New LWP 1112]
 [New LWP 1116]
 [New LWP 1104]
 [New LWP 1239]
 [New LWP 1106]
 [New LWP ]
 [New LWP 1108]
 [New LWP 1117]
 [New LWP 1102]
 [New LWP 1118]
 [New LWP 1100]
 [New LWP 1114]
 [New LWP 1113]
 [New LWP 1115]

 warning: Could not load shared library symbols for linux-vdso64.so.1.
 Do you need "set solib-search-path" or "set sysroot"?
 [Thread debugging using libthread_db enabled]
 Using host libthread_db library "/lib64/libthread_db.so.1".
 Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
 --volfile-id gv0.128.224.95.140.tmp-bric'.
 Program terminated with signal SIGSEGV, Segmentation fault.
 #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
 bytes=bytes@entry=36) at malloc.c:3327
 3327 {
 [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
 (gdb) bt full

>>>
>> This is the backtrace of one particular thread. I need the output of the command
>>
>> (gdb) thread apply all bt full
>>
>>
>> Also, considering this is a crash in the malloc library call itself, I would
>> like to know the details of the OS, kernel version, and gcc version.
>>
>> Regards,
>> Amar
>>
>> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
 bytes=bytes@entry=36) at malloc.c:3327
 nb = <optimized out>
 idx = <optimized out>
 bin = <optimized out>
 victim = <optimized out>
 size = <optimized out>
 victim_index = <optimized out>
 remainder = <optimized out>
 remainder_size = <optimized out>
 block = <optimized out>
 bit = <optimized out>
 map = <optimized out>
 fwd = <optimized out>
 bck = <optimized out>
 errstr = 0x0
 __func__ = "_int_malloc"
 #1  0x3fffb76a93dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
 ar_ptr = 0x3fffa820
 victim = <optimized out>
 hook = <optimized out>
 __func__ = "__libc_malloc"
 #2  0x3fffb7764fd0 in x_inline (xdrs=0x3fffb1686d20, len=<optimized out>) at xdr_sizeof.c:89
 len = 36
 xdrs = 0x3fffb1686d20
 #3  0x3fffb7842488 in .xdr_gfx_iattx () from
 /usr/lib64/libgfxdr.so.0
 No symbol table info available.
 #4  0x3fffb7842e84 in .xdr_gfx_dirplist () from
 /usr/lib64/libgfxdr.so.0
 No symbol table info available.
 #5  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
 pp=0x3fffa81099f0, size=<optimized out>, proc=<optimized out>) a

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
Hi Abhishek,

Few more questions,


> On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL 
> wrote:
>
>> Hi Amar,
>>
>> Below are the requested logs
>>
>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
>> not a dynamic executable
>>
>> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
>> not a dynamic executable
>>
>>
Can you please add a * at the end, so it gets the linked library list from
the actual files (ideally this is a symlink, but I expected it to resolve
like in Fedora).



> root@128:/# gdb /usr/sbin/glusterd core.1099
>> GNU gdb (GDB) 7.10.1
>> Copyright (C) 2015 Free Software Foundation, Inc.
>> License GPLv3+: GNU GPL version 3 or later <
>> http://gnu.org/licenses/gpl.html>
>> This is free software: you are free to change and redistribute it.
>> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
>> and "show warranty" for details.
>> This GDB was configured as "powerpc64-wrs-linux".
>> Type "show configuration" for configuration details.
>> For bug reporting instructions, please see:
>> <http://www.gnu.org/software/gdb/bugs/>.
>> Find the GDB manual and other documentation resources online at:
>> <http://www.gnu.org/software/gdb/documentation/>.
>> For help, type "help".
>> Type "apropos word" to search for commands related to "word"...
>> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
>> found)...done.
>> [New LWP 1109]
>> [New LWP 1101]
>> [New LWP 1105]
>> [New LWP 1110]
>> [New LWP 1099]
>> [New LWP 1107]
>> [New LWP 1119]
>> [New LWP 1103]
>> [New LWP 1112]
>> [New LWP 1116]
>> [New LWP 1104]
>> [New LWP 1239]
>> [New LWP 1106]
>> [New LWP ]
>> [New LWP 1108]
>> [New LWP 1117]
>> [New LWP 1102]
>> [New LWP 1118]
>> [New LWP 1100]
>> [New LWP 1114]
>> [New LWP 1113]
>> [New LWP 1115]
>>
>> warning: Could not load shared library symbols for linux-vdso64.so.1.
>> Do you need "set solib-search-path" or "set sysroot"?
>> [Thread debugging using libthread_db enabled]
>> Using host libthread_db library "/lib64/libthread_db.so.1".
>> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140
>> --volfile-id gv0.128.224.95.140.tmp-bric'.
>> Program terminated with signal SIGSEGV, Segmentation fault.
>> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
>> bytes=bytes@entry=36) at malloc.c:3327
>> 3327 {
>> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
>> (gdb) bt full
>>
>
This is the backtrace of one particular thread. I need the output of the command

(gdb) thread apply all bt full


Also, considering this is a crash in the malloc library call itself, I would
like to know the details of the OS, kernel version, and gcc version.
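
One way to capture all of that in one go is something like the following (a
sketch; adjust the binary and core file names to match your setup, and note
that /etc/os-release may not exist on every embedded image):

  gdb -batch -ex "thread apply all bt full" /usr/sbin/glusterfsd core.1099 > core_logs.txt
  uname -a
  cat /etc/os-release
  gcc --version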

Regards,
Amar

#0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
>> bytes=bytes@entry=36) at malloc.c:3327
>> nb = <optimized out>
>> idx = <optimized out>
>> bin = <optimized out>
>> victim = <optimized out>
>> size = <optimized out>
>> victim_index = <optimized out>
>> remainder = <optimized out>
>> remainder_size = <optimized out>
>> block = <optimized out>
>> bit = <optimized out>
>> map = <optimized out>
>> fwd = <optimized out>
>> bck = <optimized out>
>> errstr = 0x0
>> __func__ = "_int_malloc"
>> #1  0x3fffb76a93dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
>> ar_ptr = 0x3fffa820
>> victim = <optimized out>
>> hook = <optimized out>
>> __func__ = "__libc_malloc"
>> #2  0x3fffb7764fd0 in x_inline (xdrs=0x3fffb1686d20, len=<optimized out>) at xdr_sizeof.c:89
>> len = 36
>> xdrs = 0x3fffb1686d20
>> #3  0x3fffb7842488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #4  0x3fffb7842e84 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #5  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81099f0, size=<optimized out>, proc=<optimized out>) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109aa0 "\265\256\373\200\f\206\361j"
>> stat = <optimized out>
>> #6  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa81099f0, obj_size=<optimized out>,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #7  0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #8  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa8109870, size=<optimized out>, proc=<optimized out>) at
>> xdr_ref.c:84
>> loc = 0x3fffa8109920 "\232\373\377\315\352\325\005\271"
>> stat = <optimized out>
>> #9  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
>> objpp=0x3fffa8109870, obj_size=<optimized out>,
>> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
>> xdr_ref.c:135
>> more_data = 1
>> #10 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
>> /usr/lib64/libgfxdr.so.0
>> No symbol table info available.
>> #11 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
>> pp=0x3fffa81096f0, size=<optimized out>, proc=<optimized out>) at
>> xdr_ref.c:84
>> loc = 0x3fffa81097a0 "\241X\372!\216\256=\342"
>> stat = <optimized out>
>> ---Type <return> to contin

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar,

did you get time to check the logs?

Regards,
Abhishek

On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL 
wrote:

> Hi Amar,
>
> Below are the requested logs
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so
> not a dynamic executable
>
> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libgfrpc.so
> not a dynamic executable
>
> root@128:/# gdb /usr/sbin/glusterd core.1099
> GNU gdb (GDB) 7.10.1
> Copyright (C) 2015 Free Software Foundation, Inc.
> License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html>
> This is free software: you are free to change and redistribute it.
> There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
> and "show warranty" for details.
> This GDB was configured as "powerpc64-wrs-linux".
> Type "show configuration" for configuration details.
> For bug reporting instructions, please see:
> <http://www.gnu.org/software/gdb/bugs/>.
> Find the GDB manual and other documentation resources online at:
> <http://www.gnu.org/software/gdb/documentation/>.
> For help, type "help".
> Type "apropos word" to search for commands related to "word"...
> Reading symbols from /usr/sbin/glusterd...(no debugging symbols
> found)...done.
> [New LWP 1109]
> [New LWP 1101]
> [New LWP 1105]
> [New LWP 1110]
> [New LWP 1099]
> [New LWP 1107]
> [New LWP 1119]
> [New LWP 1103]
> [New LWP 1112]
> [New LWP 1116]
> [New LWP 1104]
> [New LWP 1239]
> [New LWP 1106]
> [New LWP ]
> [New LWP 1108]
> [New LWP 1117]
> [New LWP 1102]
> [New LWP 1118]
> [New LWP 1100]
> [New LWP 1114]
> [New LWP 1113]
> [New LWP 1115]
>
> warning: Could not load shared library symbols for linux-vdso64.so.1.
> Do you need "set solib-search-path" or "set sysroot"?
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib64/libthread_db.so.1".
> Core was generated by `/usr/sbin/glusterfsd -s 128.224.95.140 --volfile-id
> gv0.128.224.95.140.tmp-bric'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> 3327 {
> [Current thread is 1 (Thread 0x3fffb1689160 (LWP 1109))]
> (gdb) bt full
> #0  0x3fffb76a6d48 in _int_malloc (av=av@entry=0x3fffa820,
> bytes=bytes@entry=36) at malloc.c:3327
> nb = <optimized out>
> idx = <optimized out>
> bin = <optimized out>
> victim = <optimized out>
> size = <optimized out>
> victim_index = <optimized out>
> remainder = <optimized out>
> remainder_size = <optimized out>
> block = <optimized out>
> bit = <optimized out>
> map = <optimized out>
> fwd = <optimized out>
> bck = <optimized out>
> errstr = 0x0
> __func__ = "_int_malloc"
> #1  0x3fffb76a93dc in __GI___libc_malloc (bytes=36) at malloc.c:2921
> ar_ptr = 0x3fffa820
> victim = <optimized out>
> hook = <optimized out>
> __func__ = "__libc_malloc"
> #2  0x3fffb7764fd0 in x_inline (xdrs=0x3fffb1686d20, len=<optimized out>) at xdr_sizeof.c:89
> len = 36
> xdrs = 0x3fffb1686d20
> #3  0x3fffb7842488 in .xdr_gfx_iattx () from /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #4  0x3fffb7842e84 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #5  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa81099f0, size=<optimized out>, proc=<optimized out>) at
> xdr_ref.c:84
> loc = 0x3fffa8109aa0 "\265\256\373\200\f\206\361j"
> stat = <optimized out>
> #6  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
> objpp=0x3fffa81099f0, obj_size=<optimized out>,
> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> more_data = 1
> #7  0x3fffb7842ec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #8  0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa8109870, size=<optimized out>, proc=<optimized out>) at
> xdr_ref.c:84
> loc = 0x3fffa8109920 "\232\373\377\315\352\325\005\271"
> stat = <optimized out>
> #9  0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
> objpp=0x3fffa8109870, obj_size=<optimized out>,
> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> more_data = 1
> #10 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #11 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa81096f0, size=<optimized out>, proc=<optimized out>) at
> xdr_ref.c:84
> loc = 0x3fffa81097a0 "\241X\372!\216\256=\342"
> stat = <optimized out>
> ---Type <return> to continue, or q <return> to quit---
> #12 0x3fffb7764e04 in __GI_xdr_pointer (xdrs=0x3fffb1686d20,
> objpp=0x3fffa81096f0, obj_size=<optimized out>,
> xdr_obj=@0x3fffb785f4b0: 0x3fffb7842dc0 <.xdr_gfx_dirplist>) at
> xdr_ref.c:135
> more_data = 1
> #13 0x3fffb7842ec0 in .xdr_gfx_dirplist () from
> /usr/lib64/libgfxdr.so.0
> No symbol table info available.
> #14 0x3fffb7764c28 in __GI_xdr_reference (xdrs=0x3fffb1686d20,
> pp=0x3fffa8109570, size=<optimized out>, proc=<optimized out>) at
> xdr_ref.c:84
> loc = 0x3fffa8109620 "\265\205\003Vu'\002L"
>

Re: [Gluster-users] [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Sankarshan Mukhopadhyay
On Wed, Mar 13, 2019 at 7:55 AM Shyam Ranganathan  wrote:
>
> On 3/5/19 1:17 PM, Shyam Ranganathan wrote:
> > Hi,
> >
> > Release-6 was to be an early March release, and due to finding bugs
> > while performing upgrade testing, is now expected in the week of 18th
> > March, 2019.
> >
> > RC1 builds are expected this week, to contain the required fixes, next
> > week would be testing our RC1 for release fitness before the release.
>
> RC1 is tagged, and will mostly be packaged for testing by tomorrow.
>
> Expect package details in a day or two, to aid with testing the release.

It would be worth posting this via Twitter as well. Do we plan to
provide any specific guidance on testing particular areas/flows? For
example, upgrade tests with some of the combinations - I recollect
Amar had published a spreadsheet of items - should we continue with
those?
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Shyam Ranganathan
On 3/5/19 1:17 PM, Shyam Ranganathan wrote:
> Hi,
> 
> Release-6 was to be an early March release, and due to finding bugs
> while performing upgrade testing, is now expected in the week of 18th
> March, 2019.
> 
> RC1 builds are expected this week, to contain the required fixes, next
> week would be testing our RC1 for release fitness before the release.

RC1 is tagged, and will mostly be packaged for testing by tomorrow.

Expect package details in a day or two, to aid with testing the release.

> 
> As always, request that users test the RC builds and report back issues
> they encounter, to help improve the quality of the release.
> 
> Shyam
> ___
> maintainers mailing list
> maintain...@gluster.org
> https://lists.gluster.org/mailman/listinfo/maintainers
> 
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-12 Thread Artem Russakovskii
Hi Amar,

Any updates on this? I'm still not seeing it in OpenSUSE build repos. Maybe
later today?

Thanks.

Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
 | @ArtemR



On Wed, Mar 6, 2019 at 10:30 PM Amar Tumballi Suryanarayan <
atumb...@redhat.com> wrote:

> We are talking days, not weeks. Considering it is already Thursday here, one
> more day for tagging and packaging; it may be OK to expect it on Monday.
>
> -Amar
>
> On Thu, Mar 7, 2019 at 11:54 AM Artem Russakovskii 
> wrote:
>
>> Is the next release going to be an imminent hotfix, i.e. something like
>> today/tomorrow, or are we talking weeks?
>>
>> Sincerely,
>> Artem
>>
>> --
>> Founder, Android Police , APK Mirror
>> , Illogical Robot LLC
>> beerpla.net | +ArtemRussakovskii
>>  | @ArtemR
>> 
>>
>>
>> On Tue, Mar 5, 2019 at 11:09 AM Artem Russakovskii 
>> wrote:
>>
>>> Ended up downgrading to 5.3 just in case. Peer status and volume status
>>> are OK now.
>>>
>>> zypper install --oldpackage glusterfs-5.3-lp150.100.1
>>> Loading repository data...
>>> Reading installed packages...
>>> Resolving package dependencies...
>>>
>>> Problem: glusterfs-5.3-lp150.100.1.x86_64 requires libgfapi0 = 5.3, but
>>> this requirement cannot be provided
>>>   not installable providers: libgfapi0-5.3-lp150.100.1.x86_64[glusterfs]
>>>  Solution 1: Following actions will be done:
>>>   downgrade of libgfapi0-5.4-lp150.100.1.x86_64 to
>>> libgfapi0-5.3-lp150.100.1.x86_64
>>>   downgrade of libgfchangelog0-5.4-lp150.100.1.x86_64 to
>>> libgfchangelog0-5.3-lp150.100.1.x86_64
>>>   downgrade of libgfrpc0-5.4-lp150.100.1.x86_64 to
>>> libgfrpc0-5.3-lp150.100.1.x86_64
>>>   downgrade of libgfxdr0-5.4-lp150.100.1.x86_64 to
>>> libgfxdr0-5.3-lp150.100.1.x86_64
>>>   downgrade of libglusterfs0-5.4-lp150.100.1.x86_64 to
>>> libglusterfs0-5.3-lp150.100.1.x86_64
>>>  Solution 2: do not install glusterfs-5.3-lp150.100.1.x86_64
>>>  Solution 3: break glusterfs-5.3-lp150.100.1.x86_64 by ignoring some of
>>> its dependencies
>>>
>>> Choose from above solutions by number or cancel [1/2/3/c] (c): 1
>>> Resolving dependencies...
>>> Resolving package dependencies...
>>>
>>> The following 6 packages are going to be downgraded:
>>>   glusterfs libgfapi0 libgfchangelog0 libgfrpc0 libgfxdr0 libglusterfs0
>>>
>>> 6 packages to downgrade.
>>>
>>> Sincerely,
>>> Artem
>>>
>>> --
>>> Founder, Android Police , APK Mirror
>>> , Illogical Robot LLC
>>> beerpla.net | +ArtemRussakovskii
>>>  | @ArtemR
>>> 
>>>
>>>
>>> On Tue, Mar 5, 2019 at 10:57 AM Artem Russakovskii 
>>> wrote:
>>>
 Noticed the same when upgrading from 5.3 to 5.4, as mentioned.

 I'm confused though. Is actual replication affected? The 5.4 server and the
 3x 5.3 servers still show heal info as all 4 connected, and the files seem to
 be replicating correctly as well.

 So what's actually affected - just the status command, or is leaving 5.4 on
 one of the nodes doing some damage to the underlying fs? Is it fixable by
 tweaking transport.socket.ssl-enabled? Does upgrading all servers to 5.4
 resolve it, or should we revert back to 5.3?

 Sincerely,
 Artem

 --
 Founder, Android Police , APK Mirror
 , Illogical Robot LLC
 beerpla.net | +ArtemRussakovskii
  | @ArtemR
 


 On Tue, Mar 5, 2019 at 2:02 AM Hu Bert  wrote:

> FYI: did a downgrade 5.4 -> 5.3 and it worked. All replicas are up and
> running. Awaiting updated v5.4.
>
> thx :-)
>
> On Tue, 5 Mar 2019 at 09:26, Hari Gowtham <
> hgowt...@redhat.com> wrote:
> >
> > There are plans to revert the patch causing this error and rebuild 5.4.
> > This should happen quickly; the rebuilt 5.4 should be free of this upgrade issue.
> >
> > In the meantime, you can use 5.3 for this cluster.
> > Downgrading to 5.3 will work if it was just one node that was upgraded to 5.4
> > and the other nodes are still on 5.3.
> >
> > On Tue, Mar 5, 2019 at 1:07 PM Hu Bert 
> wrote:
> > >
> > > Hi Hari,
> > >
> > > thx for the hint. Do you know when this will be fixed? Is a
> downgrade
> > > 5.4 -> 5.3 a possibility to fix this?
> > >
> > > Hubert
> > >
> > > On Tue, 5 Mar 2019 at 08:32, Hari Gowtham <
> > > hgowt...@redhat.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > This is a known iss

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT
Hi,


I found a bug about this in version 3.10; I run 3.13.2, for your information.
As far as I can see, the default 1% rule is active and not the configured value
of 0, which should disable storage.reserve.
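
For reference, the option can be checked and set per volume roughly like this
(assuming the volume is named vol4, as in the rebalance log message above):

  gluster volume get vol4 storage.reserve
  gluster volume set vol4 storage.reserve 0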


So what can I do? Finish the remove-brick? Upgrade to a newer version and rerun
the rebalance?


thx
Taste

On 12.03.2019 12:45:51, Taste-Of-IT wrote:

> Hi Susant,
> 
> and thanks for your fast reply and for pointing me to that log. So I was able to 
> find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: 
> Could not find any subvol with space accomodating the file"
> 
> But Volume Detail and df -h show xTB of free Disk Space and also Free Inodes. 

> Options Reconfigured:
> performance.client-io-threads: on
> storage.reserve: 0
> performance.parallel-readdir: off
> performance.readdir-ahead: off
> auth.allow: 192.168.0.*
> nfs.disable: off
> transport.address-family: inet
> OK, since there is enough disk space on the other bricks and I actually didn't
> complete the brick removal, can I rerun remove-brick to rebalance the last files
> and folders?
> 
> Thanks
> > Taste


> On 12.03.2019 10:49:13, Susant Palai wrote:

> > Would it be possible for you to pass the rebalance log file on the node 
> > from which you want to remove the brick? (location : 
> > /var/log/glusterfs/)
> > 
> > + the following information:
> >  1 - gluster volume info
> >  2 - gluster volume status
> >  3 - df -h output on all 3 nodes
> > 

> > Susant
> > 
> > On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kont...@taste-of-it.de>
> > wrote:
> > > Hi,
> > > I have a 3-node distributed Gluster setup, with one volume over all 3 nodes /
> > > bricks. I want to remove one brick and run gluster volume remove-brick
> > > <volname> <brick> start. The job completes and shows 11960 failures and only
> > > transfers 5TB out of 15TB of data. I still have files and folders on this
> > > volume on the brick to remove. I actually didn't run the final command with
> > > "commit". Both other nodes each have over 6TB of free space, so they can
> > > theoretically hold the remaining data from brick3.
> > > 
> > > Need help.
> > > thx
> > > Taste
> > > ___
> > > Gluster-users mailing list
> > > Gluster-users@gluster.org
> > > https://lists.gluster.org/mailman/listinfo/gluster-users

> > > > 

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT

Hi Susant,

and thanks for your fast reply and for pointing me to that log. So I was able to 
find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: 
Could not find any subvol with space accomodating the file"

But Volume Detail and df -h show xTB of free Disk Space and also Free Inodes. 


Options Reconfigured:
performance.client-io-threads: on
storage.reserve: 0
performance.parallel-readdir: off
performance.readdir-ahead: off
auth.allow: 192.168.0.*
nfs.disable: off
transport.address-family: inet

OK, since there is enough disk space on the other bricks and I actually didn't
complete the brick removal, can I rerun remove-brick to rebalance the last files
and folders?

Thanks
Taste


On 12.03.2019 10:49:13, Susant Palai wrote:

> Would it be possible for you to pass the rebalance log file on the node from 
> which you want to remove the brick? (location : 
> /var/log/glusterfs/)
> 
> + the following information:
>  1 - gluster volume info
>  2 - gluster volume status
>  3 - df -h output on all 3 nodes
> 

> Susant
> 
> On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT <kont...@taste-of-it.de>
> wrote:
> > Hi,
> > I have a 3-node distributed Gluster setup, with one volume over all 3 nodes /
> > bricks. I want to remove one brick and run gluster volume remove-brick
> > <volname> <brick> start. The job completes and shows 11960 failures and only
> > transfers 5TB out of 15TB of data. I still have files and folders on this volume
> > on the brick to remove. I actually didn't run the final command with "commit".
> > Both other nodes each have over 6TB of free space, so they can theoretically
> > hold the remaining data from brick3.
> > 
> > Need help.
> > thx
> > Taste
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > https://lists.gluster.org/mailman/listinfo/gluster-users

> > 



___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Susant Palai
Would it be possible for you to pass the rebalance log file on the node
from which you want to remove the brick? (location :
/var/log/glusterfs/)

+ the following information:
 1 - gluster volume info
 2 - gluster volume status
 3 - df -h output on all 3 nodes (example commands below)
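
For example (illustrative commands; substitute the actual volume name):

  gluster volume info vol4 > vol4_info.txt
  gluster volume status vol4 > vol4_status.txt
  df -h   # run on each of the 3 nodes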


Susant

On Tue, Mar 12, 2019 at 3:08 PM Taste-Of-IT  wrote:

> Hi,
> I have a 3-node distributed Gluster setup, with one volume over all 3 nodes /
> bricks. I want to remove one brick and run gluster volume remove-brick
> <volname> <brick> start. The job completes and shows 11960 failures and only
> transfers 5TB out of 15TB of data. I still have files and folders on this volume
> on the brick to remove. I actually didn't run the final command with "commit".
> Both other nodes each have over 6TB of free space, so they can theoretically
> hold the remaining data from brick3.
>
> Need help.
> thx
> Taste
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT
Hi,
I have a 3-node distributed Gluster setup, with one volume over all 3 nodes /
bricks. I want to remove one brick and run gluster volume remove-brick
<volname> <brick> start. The job completes and shows 11960 failures and only
transfers 5TB out of 15TB of data. I still have files and folders on this volume
on the brick to remove. I actually didn't run the final command with "commit".
Both other nodes each have over 6TB of free space, so they can theoretically
hold the remaining data from brick3.
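
For context, the remove-brick sequence being used here is roughly the following
(a sketch; <volname> and <brick> stand for the actual names):

  gluster volume remove-brick <volname> <brick> start
  gluster volume remove-brick <volname> <brick> status
  gluster volume remove-brick <volname> <brick> commit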

Need help.
thx
Taste
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users