Hi,
I am planning to use snapshots for backups of virtual machines which are
stored in a gluster volume. I managed to get snapshots working, but I
have a small dilemma about how to back up the files.
My motivation is to do a full backup onto a different hard drive in case
of brick HDD failure.
The
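For what it's worth, a rough sketch of one way to pull a full copy out
of a snapshot; the volume name vm-vol, the snapshot name, the server
name and the /backup mount point are only placeholders, not from this
mail:

    gluster snapshot create nightly vm-vol    # depending on version the name may get a timestamp suffix
    gluster snapshot list vm-vol              # note the exact snapshot name
    gluster snapshot activate <snapname>      # snapshots must be activated before they can be mounted
    mount -t glusterfs server1:/snaps/<snapname>/vm-vol /mnt/snap
    rsync -a /mnt/snap/ /backup/vm-vol/       # full copy onto the separate hard drive
    umount /mnt/snap
    gluster snapshot deactivate <snapname>

Copying from the read-only snapshot mount avoids reading the VM images
while the guests are still writing to them.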
Hello,
We are using glusterfs version 3.7.2.
A few I/O errors are reported during our testing, and these I/O errors
are due to split-brain files.
We tried to find the split-brain files with "gluster volume heal vol info
split-brain".
But strangely, the command shows that the number of split-brain files
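In case it helps, once "info split-brain" lists entries, the usual next
steps look roughly like the sketch below; the brick path, volume name
"vol" and file path are placeholders:

    # on each brick, inspect the AFR changelog xattrs of a suspect file
    getfattr -d -m . -e hex /bricks/brick1/vm/disk1.img
    # CLI-based resolution, available from 3.7 onwards
    gluster volume heal vol split-brain bigger-file /vm/disk1.img
    gluster volume heal vol split-brain source-brick server1:/bricks/brick1 /vm/disk1.img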
On Tue, 2015-08-11 at 11:14 +0530, Atin Mukherjee wrote:
On 08/11/2015 10:44 AM, Kingsley wrote:
On Tue, 2015-08-11 at 07:48 +0530, Atin Mukherjee wrote:
-Atin
Sent from one plus one
On Aug 10, 2015 11:58 PM, Kingsley glus...@gluster.dogwind.com
wrote:
On Mon, 2015-08-10 at
Hello!
I'm cross-posting to gluster-users to get more feedback. Preferably
reply on gluster-infra, but feedback on the wrong list is better than
no feedback at all, so whatever you like :-)
There has been some discussion (and a long-standing item in the
community meeting agenda) about a calendar
Hi all,
This meeting is scheduled for anyone who is interested in learning more
about, or assisting with, the Bug Triage.
Meeting details:
- location: #gluster-meeting on Freenode IRC
( https://webchat.freenode.net/?channels=gluster-meeting )
- date: every Tuesday
- time: 12:00 UTC
Hi all
Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:
[2015-08-11 11:40:33.807940] E [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management: tune2fs exited with non-zero exit status
[2015-08-11 11:40:33.807962] E
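That message comes from glusterd probing each brick's filesystem for its
inode size. One way to see why tune2fs is unhappy is to run the same
probe by hand; the device and brick paths below are placeholders:

    df /bricks/brick1                # find the device backing the brick
    tune2fs -l /dev/sdb1; echo $?    # a non-zero status usually means the brick is not ext2/3/4
    xfs_info /bricks/brick1          # the equivalent probe for an XFS brick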
Hi,
If you need to reboot all of the brick servers in a volume, what's the
best way to do this seamlessly?
I did this a few days ago by rebooting one, then waiting for "gluster
volume info" on another node to show the rebooted brick back online
before doing the next, and so on. However, it went a bit wrong and I
ended up with
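For what it's worth, the checks people usually run between reboots look
something like this; the volume name "myvol" is a placeholder:

    gluster peer status             # all peers back in "Peer in Cluster (Connected)"
    gluster volume status myvol     # every brick shows Online "Y" again
    gluster volume heal myvol info  # wait for "Number of entries: 0" on every brick before the next reboot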
-Atin
Sent from one plus one
On Aug 11, 2015 7:54 PM, Davy Croonen davy.croo...@smartbit.be wrote:
Hi all
Our etc-glusterfs-glusterd.vol.log is filling up with entries as shown:
[2015-08-11 11:40:33.807940] E [glusterd-utils.c:7410:glusterd_add_inode_size_to_dict] 0-management: tune2fs
Well, as you mentioned, you might have rebooted the other node of the
replica pair while the self-heal was still in progress. The AFR team can
help you with the details of whether there is a way to detect whether a
heal is in progress or not.
-Atin
Sent from one plus one
On Aug 11, 2015 10:06 PM, Kingsley
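A rough sketch of how one might check whether a heal is still in
progress before touching the next node; the volume name "myvol" is a
placeholder:

    gluster volume heal myvol info                   # files still pending heal are listed per brick
    gluster volume heal myvol statistics heal-count  # a plain count of entries left to heal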
This looks like an SSH configuration issue. Please clean up all the lines
in /root/.ssh/authorized_keys that are used for connections from the
Master nodes and do not start with command=.
Please let us know which SSH key is used to set up passwordless SSH
from the Master node to the Slave node.
To resolve the
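For context, the entries that geo-replication's push-pem step puts into
the slave's authorized_keys carry a command= prefix that restricts the
key to gsyncd, roughly like the line below (key material shortened, and
the gsyncd path may differ by distribution); unrestricted keys from the
master are the ones to look at:

    command="/usr/libexec/glusterfs/gsyncd" ssh-rsa AAAAB3Nza... root@master-node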