Hi Atin,
These are the exact steps I performed that caused the failure. In addition,
node3's OS drive was running out of space when the service failed, so I
cleared some space on the OS drive, but the service still failed to start.
Trying to simulate a situation where the volume stopped abnormally and the
entire…
I'm not very sure how you ended up in a state where one of the nodes lost
the information of one peer from the cluster. I suspect that while doing a
replace-node operation you somehow landed in this situation via an
incorrect step. Until and unless you elaborate on all the steps you have
performed…
Hi Atin,
Yes, it worked out, thank you.
What would be the cause of this issue?
On Fri, Jan 25, 2019 at 1:56 PM Atin Mukherjee wrote:
> Amudhan,
>
> So here's the issue:
>
> In node3, 'cat /var/lib/glusterd/peers/*' doesn't show up node2's details,
> and that's why glusterd wasn't able to resolve the brick(s) hosted on
> node2. …
Amudhan,
So here's the issue:
In node3, 'cat /var/lib/glusterd/peers/*' doesn't show up node2's details,
and that's why glusterd wasn't able to resolve the brick(s) hosted on node2.
Can you please pick up the 0083ec0c-40bf-472a-a128-458924e56c96 file from
/var/lib/glusterd/peers/ on node4 and place…
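The fix Atin describes above can be sketched as a small shell simulation. The directories `peers_node3` and `peers_node4` stand in for `/var/lib/glusterd/peers/` on the two nodes (those names are illustrative); on real nodes the copy would be an `scp` from node4 followed by restarting glusterd on node3.

```shell
# Simulated peer directories standing in for /var/lib/glusterd/peers/
# on node3 (missing one peer file) and node4 (healthy); each file is
# named after the peer's UUID.
mkdir -p peers_node3 peers_node4
touch peers_node4/0083ec0c-40bf-472a-a128-458924e56c96

# Copy over any peer file node4 has that node3 lacks; on real nodes this
# would be an scp from node4 followed by restarting glusterd on node3.
for f in peers_node4/*; do
  b=$(basename "$f")
  [ -e "peers_node3/$b" ] || cp "$f" "peers_node3/$b"
done
ls peers_node3
```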
Amudhan,
I see that you have provided the content of the configuration of the volume
gfs-tst, whereas the request was to share a dump of /var/lib/glusterd/*. I
cannot debug this further until you share the correct dump.
On Thu, Jan 17, 2019 at 3:43 PM Atin Mukherjee wrote:
> Can you please run 'glusterd -LDEBUG' and share back the glusterd.log? …
> Can you please run
Ok, no problem.
On Sat 19 Jan, 2019, 7:55 AM Atin Mukherjee wrote:
> I have received them but haven't got a chance to look at them. I can only
> come back on this sometime early next week based on my schedule.
>
> On Fri, 18 Jan 2019 at 16:52, Amudhan P wrote:
>
>> Hi Atin,
>>
>> I have sent the files directly to your email…
I have received them but haven't got a chance to look at them. I can only
come back on this sometime early next week based on my schedule.
On Fri, 18 Jan 2019 at 16:52, Amudhan P wrote:
> Hi Atin,
>
> I have sent the files directly to your email in another mail; hope you
> have received them.
>
> Regards,
> Amudhan
Hi Atin,
I have sent the files directly to your email in another mail; hope you have
received them.
Regards,
Amudhan
On Thu, Jan 17, 2019 at 3:43 PM Atin Mukherjee wrote:
> Can you please run 'glusterd -LDEBUG' and share back the glusterd.log?
> Instead of going back and forth too many times, I suggest you share the
> content of /var/lib/glusterd from all the nodes. …
Can you please run 'glusterd -LDEBUG' and share back the glusterd.log?
Instead of going back and forth too many times, I suggest you share the
content of /var/lib/glusterd from all the nodes. Also, please mention on
which particular node the glusterd service is unable to come up.
On Thu, Jan 17, 2019 at 11:34…
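The collection step Atin asks for could be sketched as below. The node names and the `glusterd_demo` directory are illustrative; on a real cluster each archive would be produced over `ssh` from the actual `/var/lib/glusterd`.

```shell
# Stand-in for /var/lib/glusterd: archive one local config tree per node.
# On a real cluster the loop body would be something like:
#   ssh "$n" tar czf - /var/lib/glusterd > "glusterd-$n.tar.gz"
mkdir -p glusterd_demo/vols/gfs-tst glusterd_demo/peers
echo "demo volume config" > glusterd_demo/vols/gfs-tst/info

for n in node1 node2 node3 node4; do
  tar czf "glusterd-$n.tar.gz" glusterd_demo
done
ls glusterd-node*.tar.gz
```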
I have created the folder in the path as instructed, but the service still
failed to start; below is the error msg in glusterd.log:
[2019-01-16 14:50:14.555742] I [MSGID: 100030] [glusterfsd.c:2741:main]
0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd
version 4.1.6 (args: /usr/local/sbin/…
If gluster volume info/status shows the brick to be /media/disk4/brick4,
then you'd need to mount the same path and hence create the brick4
directory explicitly. I fail to understand the rationale for how only
/media/disk4 can be used as the mount path for the brick.
On Wed, Jan 16, 2019…
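The layout Atin describes can be sketched as follows. The path `/media/disk4/brick4` comes from the thread, but `disk4_demo` here simulates the mounted filesystem, since an actual `mount` needs real hardware and root; the device name in the comment is hypothetical.

```shell
# Simulate the filesystem mounted at /media/disk4 with a local directory.
mkdir -p disk4_demo            # stands in for the mount point /media/disk4
# (on a real node: mount /dev/sdX1 /media/disk4 -- device name hypothetical)

# The brick directory itself must exist below the mount point before
# gluster can use /media/disk4/brick4 as the brick path.
mkdir -p disk4_demo/brick4
ls disk4_demo
```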
Yes, I did mount the bricks, but the folder 'brick4' was still not created
inside the brick. Do I need to create this folder myself? When I run
replace-brick it creates the folder inside the brick; I have seen this
behavior before when running replace-brick or when heal begins.
On Wed, Jan 16, 2019 at 5:05…
On Wed, Jan 16, 2019 at 5:02 PM Amudhan P wrote:
> Atin,
> I have copied the content of 'gfs-tst' from the vol folder on another
> node. When starting the service, it again fails with this error msg in
> the glusterd.log file:
>
> [2019-01-15 20:16:59.513023] I [MSGID: 100030] [glusterfsd.c:2741:main]
> 0-/usr/local/sb…
Atin,
I have copied the content of 'gfs-tst' from the vol folder on another node.
When starting the service, it again fails with this error msg in the
glusterd.log file:
[2019-01-15 20:16:59.513023] I [MSGID: 100030] [glusterfsd.c:2741:main]
0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd
version…
This is a case of a partial write of a transaction: the host ran out of
space on the root partition, where all the glusterd-related configuration
is persisted, so the transaction couldn't be written and hence the new
(replaced) brick's information wasn't persisted in the configuration. The
workaround…
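Since a full root partition is what truncated the transaction, a sensible first step before retrying is to confirm free space on the filesystem that holds glusterd's state. This check is a generic sketch, not a command from the thread; the fallback to `/` covers machines where `/var/lib/glusterd` does not exist.

```shell
# Report free space on the filesystem holding glusterd's persisted state;
# a full root partition here is what caused the partial config write.
df -h /var/lib/glusterd 2>/dev/null || df -h /
```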
Hi,
In short, when I start the glusterd service I get the following error msg
in the glusterd.log file on one server.
What needs to be done?
Error logged in glusterd.log:
[2019-01-15 17:50:13.956053] I [MSGID: 100030] [glusterfsd.c:2741:main]
0-/usr/local/sbin/glusterd: Started running /usr/local/sbin/glusterd…