Hello,
Our setup: 3 CentOS 7.2 nodes, with Gluster 3.7.6 in replica-3, used as
storage+compute for an oVirt 3.5.6 DC.
Two days ago, we added some Nagios/Centreon monitoring that checks the
state of the heal queue every 5 minutes:
(something like "gluster volume heal some_vol info" with the
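A minimal Nagios-style check along those lines might look like the sketch below. This is an assumption-laden illustration, not the poster's actual plugin: the volume name, the warning threshold, and the exact "heal info" output format (which varies across Gluster versions; the sample matches 3.7-era output) are all assumed.

```shell
#!/bin/sh
# Hedged sketch of a heal-queue check: sum the per-brick
# "Number of entries:" lines from "gluster volume heal <vol> info".

count_heal_entries() {
    # sum every "Number of entries: N" line on stdin
    awk '/^Number of entries:/ { n += $4 } END { print n + 0 }'
}

# In production you would pipe the real command:
#   gluster volume heal some_vol info | count_heal_entries
# Sample 3.7-style output, for illustration only:
heal_count=$(count_heal_entries <<'EOF'
Brick node1:/gluster/brick1
Number of entries: 2
Brick node2:/gluster/brick1
Number of entries: 0
Brick node3:/gluster/brick1
Number of entries: 1
EOF
)

# A real Nagios plugin would exit 2 (CRITICAL) / 1 (WARNING) / 0 (OK);
# the threshold of 10 here is arbitrary.
if [ "$heal_count" -gt 10 ]; then
    echo "CRITICAL: $heal_count entries in heal queue"
else
    echo "OK: $heal_count entries in heal queue"
fi
```

Polling this every 5 minutes, as described above, also has a side effect worth knowing: each "heal info" run crawls the indices on every brick, which is not free on loaded volumes.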
Hi Susant,
Thank you for your instructions. I'll do that.
My volume contains more than 2 million leaf subdirectories. Most of the leaf
subdirectories contain 10~30 small files. The current total size is about 900G.
There are two bricks, each 1T. Current RAM size is 8G.
Previously I saw 3
On 17/12/2015 10:10, Nicolas Ecarnot wrote:
On 12/09/2015 04:56 PM, Amye Scavarda wrote:
> In the interest of making our documentation usable again, we've gone
> through MediaWiki (the old community pages and documentation) and found
> out what was left behind and what needed to be moved over to our
> Github-based wiki pages. We'll be
On Wed, Dec 09, 2015 at 02:56:15PM -0800, Amye Scavarda wrote:
Dear Gluster users,
we are currently running a 12-node Gluster cluster, with CTDB on top
and Samba 3.6 (the installation was performed with RHGS 3.0).
We are running into issues when trying to perform a rolling upgrade
to Samba 4.1 on one of the nodes, while the others are still
running.
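For context, one per-node pass of such a rolling upgrade is usually sketched as below, using standard CTDB tooling. This is a hedged illustration, not the poster's procedure: the package name, the 'service' init commands (RHGS 3.0 nodes are typically EL6), and the dry-run mechanism are all assumptions. DRYRUN=echo (the default here) prints the plan instead of executing it.

```shell
#!/bin/sh
# Hedged sketch of a single node's rolling-upgrade pass in a CTDB/Samba
# cluster. Leave DRYRUN=echo to review the plan; set DRYRUN= to execute.
DRYRUN=${DRYRUN:-echo}

$DRYRUN ctdb disable          # migrate public IPs off this node first
$DRYRUN service ctdb stop     # CTDB also stops the smbd/nmbd it manages
$DRYRUN yum update samba      # assumed package name for the 4.1 build
$DRYRUN service ctdb start
$DRYRUN ctdb enable           # let the node rejoin and take IPs back
```

Note that the trouble described above may well come from the mixed-version state itself rather than from the per-node steps; the sketch only shows the mechanical order of operations.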
OK, from your reply the rebalance itself seems to be fine.
So what you can do is check whether the memory usage of the brick process
keeps increasing constantly. If that is the case, take multiple statedumps
at intervals.
Regards,
Susant
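The check-and-dump loop suggested above can be sketched as follows. This is an illustration under stated assumptions: the volume name, the sampling interval, and the sample count are placeholders, and the fallback to the current shell's PID exists only so the sketch runs where no brick daemon is present. The "gluster volume statedump" command writes dumps under /var/run/gluster by default.

```shell
#!/bin/sh
# Hedged sketch: sample the brick process's resident memory (RSS) at
# intervals, taking a statedump each round so growth can be compared.

VOL=${VOL:-some_vol}
# PID of the brick daemon for this volume; falls back to the current
# shell purely so the sketch is runnable for illustration.
BRICK_PID=$(pgrep -f "glusterfsd.*${VOL}" | head -n 1)
BRICK_PID=${BRICK_PID:-$$}

rss_kb() {
    # read VmRSS (in kB) for a pid from /proc
    awk '/^VmRSS:/ { print $2 }' "/proc/$1/status"
}

for i in 1 2 3; do
    echo "sample $i: $(rss_kb "$BRICK_PID") kB resident"
    # only attempt the dump where the gluster CLI is actually present
    command -v gluster >/dev/null 2>&1 && gluster volume statedump "$VOL"
    sleep 1   # use something like 300s between real samples
done
```

Comparing the mem-pool and allocation sections across successive dumps is what makes a steady leak visible; a single dump on its own shows little.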
----- Original Message -----
From: "PuYun"
To:
Please use the following links to join the session remotely.
Event page: https://plus.google.com/events/c1u9mm8s59772sfpstbbb96odts
Video link: http://www.youtube.com/watch?v=CpHRtsWiCSg
Best Regards,
Vishwanath
On 12 December 2015 at 00:53, M S Vishwanath Bhat wrote:
Hi Susant,
You are right, the rebalance process itself is normal now. But the memory
usage of the brick being written to keeps increasing during the rebalance.
The current task has been running for 16 hours; here is the top info.
=== top ===
top - 08:58:27 up 3 days, 12:08, 1 user,