On 29/07/2016 20:27, Pranith Kumar Karampuri wrote:
> On Fri, Jul 29, 2016 at 10:09 PM, Pranith Kumar Karampuri <
> pkara...@redhat.com> wrote:
>> On Fri, Jul 29, 2016 at 2:26 PM, Yannick Perret <
>> yannick.per...@liris.cnrs.fr> wrote:
>>> Ok, last try:
>>> after investigating more versions I found that FUSE client leaks memory
>>> on all of them.
Thank you very much for the information. I have read the documentation link
you listed below and just wanted confirmation about the remove-brick
process. I did not see any documentation about the ability to use the
remove-brick stop command, so that is good to know.
I have a couple of
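For anyone following along, this is roughly what that looks like on the CLI
(a sketch; the volume and brick names are placeholders, not taken from this
thread):

  # Start removing a brick; data begins migrating off it:
  gluster volume remove-brick testvol server1:/bricks/b1 start

  # Check migration progress at any time:
  gluster volume remove-brick testvol server1:/bricks/b1 status

  # Changed your mind? Abort the shrink; the brick stays in the volume:
  gluster volume remove-brick testvol server1:/bricks/b1 stop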
Hi,
Why is CTDB doing this when one node shuts down or goes down randomly
(sometimes it just works like a charm)?
[root@gnode1 ~]# ctdb status
Number of nodes:2
pnn:0 10.0.72.5 BANNED|INACTIVE (THIS NODE)
pnn:1 10.0.72.6 DISCONNECTED|UNHEALTHY|INACTIVE
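For whoever hits this next, a few checks that usually narrow a BANNED state
down (a sketch assuming the stock CTDB file layout; the log path differs
between builds, so adjust for your distribution):

  # Every node must carry an identical nodes file:
  cat /etc/ctdb/nodes

  # Look for the reason the node was banned:
  grep -i ban /var/log/log.ctdb

  # Per-node health detail beyond plain "ctdb status":
  ctdb nodestatus

  # After fixing the underlying fault, lift the ban by hand:
  ctdb unban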
Ok, last try:
after investigating more versions I found that FUSE client leaks memory
on all of them.
I tested:
- 3.6.7 client on debian 7 32bit and on debian 8 64bit (with 3.6.7
servers on debian 8 64bit)
- 3.6.9 client on debian 7 32bit and on debian 8 64bit (with 3.6.7
servers on debian
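In case anyone wants to reproduce this, a minimal sampling loop works well
(a sketch; the mount point /mnt/testvol and the copy workload are
illustrative only):

  # Find the glusterfs FUSE client process serving the mount:
  PID=$(pgrep -f 'glusterfs.*/mnt/testvol' | head -n 1)

  # Generate traffic and record resident memory every 10 seconds:
  while true; do
      cp -r /usr/share/doc /mnt/testvol/load && rm -rf /mnt/testvol/load
      grep VmRSS /proc/$PID/status
      sleep 10
  done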
Hi,
After a long time I am posting one more issue here.
We have two boards with glusterfs in sync on both of them, and our test
case restarts one board continuously, but in this test case we are getting
duplicate entries of UUID in the "gluster peer status" output, and it is
very rarely seen.
So, I
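When this shows up, it can help to compare glusterd's persisted peer state
on both boards (a sketch using glusterd's standard state files under
/var/lib/glusterd):

  # The UUID this node holds for itself:
  cat /var/lib/glusterd/glusterd.info

  # Peers this node has recorded, one file per peer UUID:
  ls /var/lib/glusterd/peers/
  grep -r hostname /var/lib/glusterd/peers/

  # And what the CLI currently reports:
  gluster peer status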
You can also see the documentation here:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#shrinking-volumes
Rafi KC
I will summarize the procedure for removing a brick, with a description
of each step.
1) Start a remove-brick operation using the gluster volume remove-brick
command. This command will mark the mentioned brick as a decommissioned
brick. It also kicks off a process that starts migrating data
from the
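Put together, the flow looks roughly like this (the volume and brick names
are placeholders):

  # 1) Mark the brick decommissioned and start the data migration:
  gluster volume remove-brick testvol server1:/bricks/b1 start

  # 2) Poll until the status reports "completed":
  gluster volume remove-brick testvol server1:/bricks/b1 status

  # 3) Then, and only then, detach the brick permanently:
  gluster volume remove-brick testvol server1:/bricks/b1 commit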