Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar, this problem seems to be a configuration issue related to librpc. Could you please let me know what configuration I need to use? Regards, Abhishek On Wed, Mar 13, 2019 at 10:42 AM ABHISHEK PALIWAL wrote: > logs for libgfrpc.so > > pabhishe@arn-build3$ldd >

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
logs for libgfrpc.so pabhishe@arn-build3$ldd ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.* ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0: not a dynamic executable ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1: not a dynamic executable On Wed, Mar 13, 2019
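A minimal sketch (not from the thread) of inspecting the same libraries without ldd: ldd on a build host prints "not a dynamic executable" for objects built for a different target, which the packages-split/sysroot paths above suggest is the case here. The paths are the ones quoted in the message, and standard binutils are assumed:

  # Check what kind of ELF the library actually is (architecture, type).
  file ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1
  # List its dynamic dependencies directly from the ELF dynamic section.
  readelf -d ./5.4-r0/packages-split/glusterfs/usr/lib64/libgfrpc.so.0.0.1 | grep NEEDED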

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread Amar Tumballi Suryanarayan
Hi Abhishek, Few more questions, > On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL > wrote: > >> Hi Amar, >> >> Below are the requested logs >> >> pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so >> not a dynamic executable >> >> pabhishe@arn-build3$ldd

Re: [Gluster-users] Glusterfsd crashed with SIGSEGV

2019-03-12 Thread ABHISHEK PALIWAL
Hi Amar, did you get time to check the logs? Regards, Abhishek On Tue, Mar 12, 2019 at 10:58 AM ABHISHEK PALIWAL wrote: > Hi Amar, > > Below are the requested logs > > pabhishe@arn-build3$ldd ./sysroot-destdir/usr/lib64/libglusterfs.so > not a dynamic executable > > pabhishe@arn-build3$ldd

Re: [Gluster-users] [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Sankarshan Mukhopadhyay
On Wed, Mar 13, 2019 at 7:55 AM Shyam Ranganathan wrote: > > On 3/5/19 1:17 PM, Shyam Ranganathan wrote: > > Hi, > > > > Release-6 was to be an early March release, and due to finding bugs > > while performing upgrade testing, is now expected in the week of 18th > > March, 2019. > > > > RC1

Re: [Gluster-users] [Gluster-Maintainers] Release 6: Release date update

2019-03-12 Thread Shyam Ranganathan
On 3/5/19 1:17 PM, Shyam Ranganathan wrote: > Hi, > > Release-6 was to be an early March release, and due to finding bugs > while performing upgrade testing, is now expected in the week of 18th > March, 2019. > > RC1 builds are expected this week, to contain the required fixes, next > week would

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-03-12 Thread Artem Russakovskii
Hi Amar, Any updates on this? I'm still not seeing it in OpenSUSE build repos. Maybe later today? Thanks. Sincerely, Artem -- Founder, Android Police, APK Mirror, Illogical Robot LLC beerpla.net | +ArtemRussakovskii

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT
Hi, I found a bug about this in version 3.10; I run 3.13.2, for your information. As far as I can see, the default 1% reserve rule is active and storage.reserve is not set to 0 to disable it. So what can I do? Finish the remove-brick? Upgrade to a newer version and rerun the rebalance? thx Taste Am
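For reference, a minimal sketch of checking and disabling the reserve discussed above; VOLNAME is a placeholder for the actual volume name:

  # storage.reserve keeps a fraction of each brick free (1% by default);
  # setting it to 0 disables the reservation.
  gluster volume get VOLNAME storage.reserve
  gluster volume set VOLNAME storage.reserve 0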

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT
Hi Susant, thanks for your fast reply and for pointing me to that log. I was able to find the problem: "dht-rebalance.c:1052:__dht_check_free_space] 0-vol4-dht: Could not find any subvol with space accomodating the file" But volume detail and df -h show xTB of free disk space and also Free
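As a hedged example, a minimal sketch of cross-checking the per-brick free space the rebalance is seeing; "vol4" is the volume name from the log line above, and the brick mount point is hypothetical:

  # Per-brick capacity as reported by gluster itself.
  gluster volume status vol4 detail | grep -E 'Brick|Disk Space'
  # Filesystem view on each node; /bricks/brick1 is a placeholder path.
  df -h /bricks/brick1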

Re: [Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Susant Palai
Would it be possible for you to pass the rebalance log file from the node from which you want to remove the brick? (location: /var/log/glusterfs/) plus the following information: 1 - gluster volume info 2 - gluster volume status 3 - df -h output on all 3 nodes Susant On Tue, Mar 12, 2019 at
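A minimal sketch of collecting what is requested above; VOLNAME is a placeholder, and the rebalance log is assumed to follow the usual VOLNAME-rebalance.log naming under /var/log/glusterfs/:

  gluster volume info VOLNAME    > gluster-diag.txt
  gluster volume status VOLNAME >> gluster-diag.txt
  df -h                         >> gluster-diag.txt    # repeat on all 3 nodes
  # Rebalance log from the node the brick is being removed from.
  cp /var/log/glusterfs/VOLNAME-rebalance.log .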

[Gluster-users] Removing Brick in Distributed GlusterFS

2019-03-12 Thread Taste-Of-IT
Hi, I have a 3-node distributed Gluster setup, with one volume spanning all 3 nodes/bricks. I want to remove one brick, so I ran gluster volume remove-brick start. The job completes but shows 11960 failures and transfers only 5TB out of 15TB of data. There are still files and folders on this volume on
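For reference, a minimal sketch of the remove-brick sequence being described; VOLNAME and node3:/data/brick are placeholders for the actual volume and the brick being removed:

  gluster volume remove-brick VOLNAME node3:/data/brick start
  gluster volume remove-brick VOLNAME node3:/data/brick status   # rebalanced files, failures, skipped
  # Commit only after status reports completed with no failures; files still
  # sitting on the removed brick are otherwise no longer served by the volume.
  gluster volume remove-brick VOLNAME node3:/data/brick commit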