Hi,
I'm not sure which log file you are referring to, but as log files are stored
on a RAM disk in my case, I'm afraid it is gone.
I use a workaround for this problem: to avoid having to start the volume
(again) using the 'force' option, I wait until the local brick specified in
the volume file is available.
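(For illustration, a minimal sketch of such a wait loop, assuming the brick
lives at /opt/lvmdir/c2/brick as the brick log name later in this thread
suggests, and using <VOLNAME> as a placeholder:)

  #!/bin/sh
  # Wait until the local brick directory exists, i.e. the filesystem
  # backing it has been mounted, before starting the volume.
  BRICK=/opt/lvmdir/c2/brick
  while [ ! -d "$BRICK" ]; do
      sleep 1
  done
  # With the local brick present, a plain start should not need 'force'.
  gluster volume start <VOLNAME>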
On 06/22/2015 09:55 PM, Atin Mukherjee wrote:
> Sent from one plus one
> On Jun 22, 2015 7:31 PM, "Andreas Hollaus" wrote:
>>
>> Hi,
>>
>> Well, I don't really know what to expect, but there actually are some
>> errors:
>> Could it be due to that missing extended attribute? I don't understand
>> why it's missing (yet).
Sent from one plus one
On Jun 22, 2015 7:31 PM, "Andreas Hollaus" wrote:
>
> Hi,
>
> Well, I don't really know what to expect, but there actually are some
> errors:
> Could it be due to that missing extended attribute? I don't understand
why it's missing (yet).
You are correct. Missing volume-id could ...
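(Background: glusterd refuses to start a brick whose directory has lost its
trusted.glusterfs.volume-id extended attribute. A way to inspect it, and to
restore it from glusterd's own records if it is really gone; <BRICK-PATH> and
<VOLNAME> are placeholders:)

  # getfattr -n trusted.glusterfs.volume-id -e hex <BRICK-PATH>
  # grep volume-id /var/lib/glusterd/vols/<VOLNAME>/info
  # setfattr -n trusted.glusterfs.volume-id -v 0x<UUID-without-dashes> <BRICK-PATH>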
The local brick is available, but there's no guarantee that the remote
brick (the replica) is available at the time glusterd is started. Would
that put GlusterFS in a bad state which isn't resolved automatically
when that brick is available again? Would that be the reason why I have
to start the volume using the 'force' option?
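(To see how glusterd views the peers and bricks at that point, the standard
status commands should help; <VOLNAME> is a placeholder:)

  $ gluster peer status
  $ gluster volume status <VOLNAME>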
On Mon, Jun 22, 2015 at 03:35:56PM +0200, Andreas Hollaus wrote:
> Hi,
>
> I keep having this situation where I have to start the volume using the force
> option. Why isn't the volume started without this?
> It seems to have these problems after a node restart, but I expected the
> volume to be restarted properly whenever the service is restarted.
>
Hi,
Well, I don't really know what to expect, but there actually are some
errors:
Could it be due to that missing extended attribute? I don't understand
why it's missing (yet).
Regards
Andreas
# more /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log
[2015-06-22 13:23:47.924071] I [MSGID: 100
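(The letter after the timestamp is the severity, I for info and E for error,
so error lines can be isolated with a plain grep, for example:)

  # grep ' E ' /var/log/glusterfs/bricks/opt-lvmdir-c2-brick.log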
Sent from one plus one
On Jun 22, 2015 7:06 PM, "Andreas Hollaus" wrote:
>
> Hi,
>
> I keep having this situation where I have to start the volume using the
> force option. Why isn't the volume started without this?
> It seems to have these problems after a node restart, but I expected the
> volume to be restarted properly whenever the service is restarted.
Hi,
I keep having this situation where I have to start the volume using the force
option. Why isn't the volume started without this?
It seems to have these problems after a node restart, but I expected the volume
to be restarted properly whenever the service is restarted.
Regards
Andreas
Hi,
Well that did the trick. Thanks!
Regards
Andreas
On 06/22/15 10:07, Sakshi Bansal wrote:
> Both the bricks are down. Can you run -
> $ gluster volume start <VOLNAME> force
Both the bricks are down. Can you run -
$ gluster volume start <VOLNAME> force
Hi,
I already supplied the output from that command, and as you can see, the
volume is started (according to the output).
Regards
Andreas
On 06/22/15 09:44, Sakshi Bansal wrote:
> Hello,
>
> I suppose the volume is not started. You can check the status of the volume
> by -
> $ gluster volume info
Hello,
I suppose the volume is not started. You can check the status of the volume
by -
$ gluster volume info
To start the volume -
$ gluster volume start <VOLNAME>
Once you start the volume successfully, try to mount it again.
Thanks and Regards
Sakshi Bansal
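(For reference, the sequence might look like this; <VOLNAME>, <SERVER> and the
mount point are placeholders:)

  $ gluster volume info <VOLNAME>     # 'Created' or 'Stopped' means not started
  $ gluster volume start <VOLNAME>
  $ gluster volume info <VOLNAME>     # should now report 'Status: Started'
  $ mount -t glusterfs <SERVER>:/<VOLNAME> /mnt/gluster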
Hi,
I can't mount the GlusterFS volume anymore. Any ideas what could be wrong? I
guess the volume should mount even though the remote brick might not be
available? In this case it is available ('pingable'), but in some cases it may
not be, which seems pretty normal for a replicated volume.
Regards
Andreas
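(A typical FUSE mount for a replicated volume; naming a backup volfile server
means the initial mount does not depend on a single node being reachable.
<SERVER1>, <SERVER2>, <VOLNAME> and the mount point are placeholders:)

  # mount -t glusterfs -o backupvolfile-server=<SERVER2> <SERVER1>:/<VOLNAME> /mnt/gluster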