On 12/30/2015 06:42 AM, Ravishankar N wrote:
On 12/30/2015 04:20 AM, Kyle Harris wrote:
Hello All,
Forgive the duplicate, but I forgot to give the first post a title, so
this corrects that. Anyway, I recently discovered the new arbiter
functionality of the 3.7 branch so I decided to give it a
I think I have a workaround here:
Remove the /var/lib/glusterd/vols/<volname>/node_state.info file from all
the nodes and restart GlusterD.
After that you should be able to execute gluster volume status successfully.
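Concretely, on a CentOS 7 / systemd node that boils down to roughly the
following on every node (a sketch: the service name and the <volname>
placeholder are assumptions, adjust to your setup):

    systemctl stop glusterd                                # stop only the management daemon
    rm /var/lib/glusterd/vols/<volname>/node_state.info    # stale per-node state kept by glusterd
    systemctl start glusterd
    gluster volume status                                  # should now return cleanly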
Thanks,
Atin
On 12/30/2015 11:36 AM, Atin Mukherjee wrote:
> I have some updates:
>
> Actual
I have some updates:
Actually, the opCode that we check here for aggregating status is that of the
rebalance process (volinfo->rebal.op), not the incoming operation,
so I was wrong in my first analysis. However, having said that, the
rebalance op could either be GD_OP_REMOVE_BRICK or GD_OP_REBALANCE. If
it
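That rebalance state is also what glusterd persists on disk in the
node_state.info file mentioned in the workaround above, so you can peek at
what glusterd thinks the current rebalance op is (a sketch: the exact keys
inside the file vary by release):

    cat /var/lib/glusterd/vols/<volname>/node_state.info
    # holds the persisted rebalance/remove-brick state for this node; a stale
    # entry here after an interrupted rebalance or remove-brick appears to be
    # what the status aggregation trips over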
Hi all,
I am working with 4 physical servers, and on each of them we set up virtual
servers for regular use. The GlusterFS server is set up on the physical
servers and the GlusterFS clients are running on the virtual machines.
When I run some IO-intensive jobs in a virtual machine for som
Hello All,
I recently discovered the new arbiter functionality of the 3.7 branch so I
decided to give it a try. First off, I too am looking forward to the
ability to add an arbiter to an already existing volume as discussed in the
following thread:
https://www.gluster.org/pipermail/gluster-users/
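For context, what the 3.7 branch does support today is creating a new
replica volume with an arbiter brick at creation time; a minimal sketch
(host names and brick paths below are made up):

    gluster volume create testvol replica 3 arbiter 1 \
        server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb1
    # the third brick acts as the arbiter: it stores only metadata, so it can
    # break split-brain ties without keeping a full third copy of the data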
-Atin
Sent from one plus one
On Dec 17, 2015 3:21 PM, "Nicolas Ecarnot" wrote:
>
> On 17/12/2015 10:10, Nicolas Ecarnot wrote:
>>
>> Hello,
>>
>> Our setup: 3 CentOS 7.2 nodes, with gluster 3.7.6 in replica-3, used as
>> storage+compute for an oVirt 3.5.6 DC.
>>
>> Two days ago, we added some
I'll dive into it in detail in some time. Could you provide the
cli/cmd_history/glusterd log files for further debugging?
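On a stock install those logs usually live under /var/log/glusterfs/ on
each node (the file names below are typical for 3.7 packaging and are an
assumption, not a guarantee):

    ls /var/log/glusterfs/
    # cli.log                          - gluster CLI log
    # cmd_history.log                  - history of executed gluster commands
    # etc-glusterfs-glusterd.vol.log   - glusterd (management daemon) log
    tar czf gluster-logs.tgz /var/log/glusterfs/*.log    # bundle them to attach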
-Atin
Sent from one plus one
On Dec 29, 2015 9:53 PM, "Christophe TREFOIS"
wrote:
> Hi Atin,
>
> Same issue. I restarted glusterd and glusterfsd everywhere and it seems
> the thi
Hi Atin,
Same issue. I restarted glusterd and glusterfsd everywhere and it seems the
thing is still in STATEDUMP.
Any other pointers?
Kind regards,
—
Christophe
> On 29 Dec 2015, at 16:19, Atin Mukherjee wrote:
>
>
>
> On 12/29/2015 07:09 PM, Christophe TREFOIS wrote:
>> Hi,
>>
>>
>>
On 12/29/2015 07:09 PM, Christophe TREFOIS wrote:
> Hi,
>
>
>> On 29 Dec 2015, at 14:27, Atin Mukherjee wrote:
>>
>> It seems like your opCode is STATEDUMP instead of STATUS, which is weird.
>> Are you running a heterogeneous cluster?
>
> What does that mean? In principle no.
This means when a
On 17/12/2015 10:51, Nicolas Ecarnot wrote:
On 17/12/2015 10:10, Nicolas Ecarnot wrote:
Hello,
Our setup: 3 CentOS 7.2 nodes, with gluster 3.7.6 in replica-3, used as
storage+compute for an oVirt 3.5.6 DC.
Two days ago, we added some Nagios/Centreon monitoring watching every 5
minutes t
Hi,
> On 29 Dec 2015, at 14:27, Atin Mukherjee wrote:
>
> It seems like your opCode is STATEDUMP instead of STATUS, which is weird.
> Are you running a heterogeneous cluster?
What does that mean? In principle no.
> What is the last version you
> were running with?
I think it was 3.7.3 or 3.7.
It seems like your opCode is STATEDUMP instead of STATUS, which is weird.
Are you running a heterogeneous cluster? What is the last version you
were running with? What's the current cluster op-version?
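For reference, both can be checked on each node roughly like this (the
glusterd.info location is the usual default and is assumed here):

    glusterd --version                                        # installed glusterfs version
    grep operating-version /var/lib/glusterd/glusterd.info    # cluster op-version on this node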
Thanks,
Atin
On 12/29/2015 06:31 PM, Christophe TREFOIS wrote:
> Dear all,
>
> I have a 3-node
Dear all,
I have a 3-node distribute setup with a controller of GlusterFS and upgraded to
3.7.6 today and to CentOS 7.2.
After the upgrade (reboot), I can start the volume fine and see the mounted volume
as well on the controller.
However, a gluster volume info results in an
[root@stor104 gluste
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
(https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC