On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote:
> Have you checked for any file system errors on the brick mount point?

I hadn't. fsck reports no errors.

> What about the heal? Does it report any pending heals?

There are now. It looks like taking the brick offline to fsck it was
enough.
Hi,

Have you checked for any file system errors on the brick mount point?
I was once facing weird I/O errors and xfs_repair fixed the issue.

What about the heal? Does it report any pending heals?
On Feb 15, 2018 14:20, "Dave Sherohman" wrote:
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However...
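The filename-level check above can be extended to file contents; a minimal sketch, using the brick path from this thread and assuming (based on the usual brick layout) that Gluster's internal `.glusterfs` directory should be skipped:

```shell
#!/bin/sh
# Checksum-of-checksums over file *contents* in a brick, so replicas with
# identical filenames but diverged data are also caught.
# The .glusterfs directory (gfid hardlinks and internal state) is pruned.
brick_sum() {
    ( cd "$1" && find . -path ./.glusterfs -prune -o -type f -print0 |
        sort -z | xargs -0 md5sum | md5sum )
}

# Run the same function on each replica and compare the single output line.
# (The brick path below is the one from this thread; skip if absent.)
[ -d /var/local/brick0/data ] && brick_sum /var/local/brick0/data || true
```

Identical output on both servers means the file contents match; differing output narrows the problem to diverged data rather than missing files.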
I'm using gluster for a virt-store with 3x2 distributed/replicated
servers for 16 qemu/kvm/libvirt virtual machines using image files
stored in gluster and accessed via libgfapi. Eight of these disk images
are standalone, while the other eight are qcow2 images which all share a
single backing file.
On 10 November 2015 at 10:03, Thing wrote:
> Does the arbiter node have to be high spec? I.e., I have a Raspberry Pi as my
> bastion host / system controller; if it's a simple "write to node 2 as 1 is
> off" sort of thing, the Pi might cope. If it's like that at all, of course;
> the Pi can't do any ban
...quorum enabled. So long as any *two* nodes stay in contact, they can be
written to. Alternatively, two nodes plus one arbiter node will work as well.
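An arbiter setup like the one described could be created along these lines; the volume name, hostnames, and brick paths below are hypothetical, and the commands need a live Gluster cluster to run:

```shell
# Replica-3 volume where the third brick is a metadata-only arbiter
# (hypothetical hosts node1/node2/arb and brick paths).
gluster volume create vmstore replica 3 arbiter 1 \
    node1:/bricks/b0 node2:/bricks/b0 arb:/bricks/b0

# Enforce quorum so a partitioned minority cannot accept writes.
gluster volume set vmstore cluster.server-quorum-type server
gluster volume set vmstore cluster.quorum-type auto
gluster volume start vmstore
```

The arbiter stores only filenames and metadata, not file data, which is why a low-spec machine can usually handle the role.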
From: Thing
Sent: Tuesday, 10 November 2015 9:16 AM
To: Gluster Users
Subject: [Gluster-users] failover
Hi,

I am just testing a KVM frontend with VMs, using a backend 2-node
gluster setup mounted with glusterfs. This runs fine, but my understanding
was that if the gluster node the front end is attached to goes down, the
client swaps to the other node and carries on?
So I switched gluster node 1 of
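For what it's worth, a FUSE client can be told about fallback volfile servers at mount time, so the node named in the mount command is not a single point of failure; a sketch with hypothetical hostnames and volume name:

```shell
# node2/node3 serve the volume layout if node1 is down at mount time;
# once mounted, the client talks to all brick servers directly anyway.
mount -t glusterfs \
    -o backup-volfile-servers=node2:node3 \
    node1:/vmstore /mnt/vmstore
```

The named server only matters for fetching the volfile; after that, replication and failover happen client-side across all bricks.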
Hi Vijay,

Thanks. Is "replace-brick commit force" also possible through the Ovirt
Admin Portal, or do I need to use the command line to run this command?

Also, I am looking for HA for glusterfs... let me explain
my architecture more:

1. I have a 4-node glusterfs cluster and the same 4 nodes I a
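On the CLI, the command being asked about takes this general shape (volume name, hosts, and brick paths here are hypothetical):

```shell
# Replace a failed brick and force the commit without a prior migration;
# self-heal then repopulates the new brick from its replica peers.
gluster volume replace-brick vmstore \
    oldnode:/bricks/b0 newnode:/bricks/b0 \
    commit force
```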
On 10/23/2014 01:35 PM, Punit Dambiwal wrote:

On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal wrote:
> Hi,
> I have one question regarding the gluster failover... let me
> explain my current architecture; I am using Ovirt with gluster...

Hi,

Does anybody have some reference or update?

Thanks,
Punit
Hi All,

Is there anybody who can help me?

On Mon, Oct 13, 2014 at 11:54 AM, Punit Dambiwal wrote:
Hi,

I have one question regarding the gluster failover... let me explain my
current architecture; I am using Ovirt with gluster...

1. One Ovirt Engine (Ovirt 3.4)
2. 4 * Ovirt Node as well as Gluster storage node... with 12 bricks in one
node... (Gluster Version 3.5)
3. All 4 nodes in distributed repli
Hi all,

I'm new to gluster and still trying to figure out whether it's suitable
for my project. Gluster has built-in failover support, which is great,
but after reading the user manual, I found that split-brain, one of the
most difficult problems of HA design, wasn't mentioned at all.

Does
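For reference, later Gluster releases do expose split-brain handling on the CLI; a sketch (the volume name and file path are hypothetical, and the commands need a live cluster):

```shell
# List files the self-heal daemon considers split-brained.
gluster volume heal vmstore info split-brain

# Resolve one file by keeping the copy with the latest mtime
# (other policies include bigger-file and source-brick).
gluster volume heal vmstore split-brain latest-mtime /images/vm1.img
```

Enabling client quorum on replicated volumes is the usual way to prevent split-brain arising in the first place.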
Hi all,

Please can you explain to me what happens if the master node in a mirrored
2-node cluster scenario fails? There must be a failover mechanism, but how
does that work?

To mount the volume, the IP address of the management node is used, not both
node IP addresses.

Maybe you can point me to the a