To answer my own question: I now realize the importance of creating
directories specifically, and not files, because directories are created
on every replica pair regardless of the hash. So, if a host is down,
the changes will be marked as pending for that host. With files it's different
The guide here:
https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Managing%20Volumes/#replace-faulty-brick
suggests running the following while the partner host is down:
mkdir /mnt/r2/
rmdir /mnt/r2/
setfattr -n trusted.non-existent-key -v abc /mnt/r2
setfattr -x
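(For completeness, that section of the guide performs the whole create/remove cycle with a dummy directory and a dummy xattr on the volume's mount point; a sketch of what it looks like, where /mnt/r2 and the dummy names are the guide's illustrative examples, not this cluster's paths:)
# create and remove a dummy directory, then set and remove a dummy xattr,
# so pending changelog entries are recorded towards the brick being replaced
mkdir /mnt/r2/some-nonexistent-dir
rmdir /mnt/r2/some-nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/r2
setfattr -x trusted.non-existent-key /mnt/r2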
Actually, I was planning to slowly grow the individual brick to a bigger size
without doing data transfer or volume shutdown.
Alex Leung
On 10/10/16, 11:15 AM, "gluster-users-boun...@gluster.org on behalf of Joe Julian"
On 10/10/2016 11:07 AM, Serkan Çoban wrote:
Is it like
Gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs
raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
Yes the command is like that.
Besides, can I have bricks of different sizes? Such as
I've written an example of how gluster's dht works on my blog at
https://joejulian.name/blog/dht-misses-are-expensive/ which might make
it clear why the end result is not what you expected.
By setting cluster.min-free-disk (defaults to 10%) you can, at least,
ensure that your new bricks are
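(The option is set per volume; a minimal sketch, with the threshold value chosen only for illustration:)
# once a brick's free space drops below this, dht places new files on other bricks
gluster volume set pdsclust cluster.min-free-disk 15%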
>Is it like
>Gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs
>raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
Yes the command is like that.
> Besides, Can I have different size of the brick? Such as raid1,2,3 is 20 TB
> and raid5,6,7 is 40TB?
Thanks, but what is the exact command to add-brick?
volume add-brick <VOLNAME> [<stripe|replica> <COUNT>] <NEW-BRICK> ...
Is it like
Gluster volume add-brick pdsclust raid1-gb:/data/gfs raid2-gb:/data/gfs
raid3-gb:/data/gfs raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
What is the value of >
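(For reference, adding a full second (4 + 2) disperse set to pdsclust would look roughly like the command quoted above; a sketch using the host names from this thread, not a verified command for this cluster:)
# adds a second (4+2) disperse subvolume, making the volume 2 x (4 + 2) = 12
gluster volume add-brick pdsclust \
    raid1-gb:/data/gfs raid2-gb:/data/gfs raid3-gb:/data/gfs \
    raid5-gb:/data/gfs raid6-gb:/data/gfs raid7-gb:/data/gfs
# then spread existing data across the new set
gluster volume rebalance pdsclust start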
Hi,
We have a 2-node, distributed-replicated setup (11 bricks on each node). Each
of these bricks is 6 TB in size.
node_A:/brick1 replicates node_B:/brick1
node_A:/brick2 replicates node_B:/brick2
node_A:/brick3 replicates node_B:/brick3
…
…
node_A:/brick11 replicates node_B:/brick11
We
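(A layout like this is normally created with replica 2 and the bricks listed in pairs; a minimal sketch, with a made-up volume name since the actual create command isn't shown in the thread:)
# pairs node_A:/brickN with node_B:/brickN, replica 2, 11 replica pairs
bricks=""
for i in $(seq 1 11); do
    bricks="$bricks node_A:/brick$i node_B:/brick$i"
done
gluster volume create datavol replica 2 $bricks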
On Thu, Oct 6, 2016 at 11:34 AM, Leung, Alex (398C) wrote:
> Here is my configuration:
>
> [root@raid4 ~]# gluster volume info
>
> Volume Name: pdsclust
> Type: Disperse
> Volume ID: 02629f52-cfe1-4542-8581-21d25e254d39
> Status: Started
> Number of Bricks: 1 x (4 + 2) = 6
On 10/06/2016 04:25 PM, Michael Ciccarelli wrote:
this is the info file's contents... is there another file you would want to
see for the config?
type=2
count=2
status=1
sub_count=2
stripe_count=1
replica_count=2
disperse_count=0
redundancy_count=0
version=3
transport-type=0
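(That key=value dump is glusterd's on-disk record of the volume; for reference, the usual location and the readable equivalents, where the path is the common default and <VOLNAME> is a placeholder:)
# glusterd's on-disk volume definition (default location on most installs)
cat /var/lib/glusterd/vols/<VOLNAME>/info
# the same information in readable form
gluster volume info <VOLNAME>
gluster volume get <VOLNAME> all    # defaults plus reconfigured options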
Hi,
We have a few Proxmox clusters using GlusterFS as storage.
The nodes run both the gluster bricks and Proxmox,
and one of the problems we often have is that when a server
crashes for some reason, the InnoDB databases of the VMs it
hosted are dead. Most of the time
On Wed, Oct 05, 2016 at 04:49:48PM +0300, deZillium wrote:
> Hello,
>
> I can't, for the life of me, get NFS mounts working from clients. Any help
> is greatly appreciated.
>
> OS: Debian 8.6
> GlusterFS 3.8.4-1 installed using the GlusterFS Debian repos.
>
> Mounting using the native
Here is my configuration:
[root@raid4 ~]# gluster volume info
Volume Name: pdsclust
Type: Disperse
Volume ID: 02629f52-cfe1-4542-8581-21d25e254d39
Status: Started
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: raid4-gb:/data/gfs
Brick2: raid8-gb:/data/gfs
Brick3:
Hello,
I can't, for the life of me, get NFS mounts working from clients. Any
help is greatly appreciated.
OS: Debian 8.6
GlusterFS 3.8.4-1 installed using the GlusterFS Debian repos.
Mounting using the native (glusterfs) client works but not as fast as I
would like it to, so I have to use
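(For context, the two mount styles being compared look roughly like this; server and volume names are placeholders, and Gluster's built-in NFS server speaks NFSv3 only and must not be disabled on the volume:)
# native FUSE client (works for the poster, but slower than desired)
mount -t glusterfs server1:/myvol /mnt/gluster
# built-in Gluster NFS: NFSv3 only, nfs.disable must be off on the volume
mount -t nfs -o vers=3,mountproto=tcp server1:/myvol /mnt/gluster-nfs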
Just to clarify, my patch only takes care of introducing the required
changes in 3.9, not in 3.8. A separate patch needs to be sent out to
ensure the nfs service comes up for volumes which were created before the
upgrade to 3.8 and have the default configuration.
On Monday 10 October 2016, Jiffin Tony
Hi all,
I am trying to list out glusterd issues with the 3.8 feature "Gluster
NFS being off by default".
As per the current implementation,
1.) On a freshly installed 3.8/3.9 setup, if you create a volume,
then Gluster NFS won't
come up by default, and in the vol info we can see "
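(For anyone who simply wants the old behaviour back, the built-in NFS server is re-enabled per volume; a minimal sketch with an illustrative volume name:)
# gnfs is off by default on volumes created with 3.8+; turn it back on per volume
gluster volume set myvol nfs.disable off
gluster volume status myvol    # the "NFS Server on ..." entries should now show online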