Re: [Gluster-users] Volume to store vm

2019-09-09 Thread Dave Sherohman
a of the data. As you can see, using an arbiter gives you (nearly) as much data security as an additional replica, while consuming a tiny, tiny fraction of the space that would be "lost" to an additional full replica. If you're trying to maximize usable capacity in your volu
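As a rough illustration of that space trade-off, borrowing the 10T data / 100G arbiter brick sizes mentioned later in this archive purely as example numbers:

    replica 3:             3 x 10T          = 30T of raw disk per 10T of usable data
    replica 2 + arbiter:   2 x 10T + 100G  ~= 20.1T of raw disk per 10T of usable data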

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-07-10 Thread Dave Sherohman
e "-T" permissions are internal files and can be > ignored. Ravi and Krutika, please take a look at the other files. > > Regards, > Nithya > > > On Fri, 28 Jun 2019 at 19:56, Dave Sherohman wrote: > > > On Thu, Jun 27, 2019 at 12:17:10PM +05

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-28 Thread Dave Sherohman
fa7f6e5.10724 -rw-r--r-- 2 root libvirt-qemu 4194304 Apr 11 2018 c953c676-152d-4826-80ff-bd307fa7f6e5.3101 --- cut here ---

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-28 Thread Dave Sherohman
OK, I'm just careless. Forgot to include "start" after the list of bricks... On Fri, Jun 28, 2019 at 04:03:40AM -0500, Dave Sherohman wrote: > On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote:
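For reference, a minimal sketch of the remove-brick sequence with the "start" keyword in place (volume and brick names here are placeholders, not the ones from this thread):

    # gluster volume remove-brick myvol hostB:/path/to/brick hostC:/path/to/brick start
    # gluster volume remove-brick myvol hostB:/path/to/brick hostC:/path/to/brick status
    # gluster volume remove-brick myvol hostB:/path/to/brick hostC:/path/to/brick commit    (once status reports completed)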

Re: [Gluster-users] Removing subvolume from dist/rep volume

2019-06-28 Thread Dave Sherohman
On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote: > On Tue, 25 Jun 2019 at 15:26, Dave Sherohman wrote: > > My objective is to remove nodes B and C entirely. > > > > First up is to pull their bricks from the volume: > > > > # gluster volume rem

[Gluster-users] Removing subvolume from dist/rep volume

2019-06-25 Thread Dave Sherohman
t stretch main

Re: [Gluster-users] Add Arbiter Brick to Existing Distributed Replicated Volume

2018-12-07 Thread Dave Sherohman
y the complete list of bricks in any add-brick command. So if you have bricks D1-1, D1-2, D2-1, D2-2, D3-1, and D3-2, adding arbiters (A-1 through A-3) would be gluster volume add-brick MyVolume replica 3 arbiter 1 D1-1 D1-2 A-1 D2-1 D2-2 A-2 D3-1 D3-2 A
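Written out as a full command, with hypothetical host:/path bricks standing in for the D*/A* shorthand above (these names are placeholders, not from the thread):

    # gluster volume add-brick MyVolume replica 3 arbiter 1 \
        host1:/bricks/d1-1 host2:/bricks/d1-2 arb:/bricks/a-1 \
        host3:/bricks/d2-1 host4:/bricks/d2-2 arb:/bricks/a-2 \
        host5:/bricks/d3-1 host6:/bricks/d3-2 arb:/bricks/a-3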

Re: [Gluster-users] Kicking a stuck heal

2018-10-22 Thread Dave Sherohman
ve a fully-consistent cluster again? On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote: > Last Friday, I rebooted one of my gluster nodes and it didn't properly > mount the filesystem holding its brick (I had forgotten to add it to > fstab...), so, when I got back to

Re: [Gluster-users] Gluster client

2018-10-16 Thread Dave Sherohman
what other nodes it might attempt to connect to. I primarily use gluster for VM disk images, so, in my case, I list all the gluster nodes in the VM definition and, if the first one isn't reachable, then it tries the second and so on until it finds one that's available to connect to. Wha
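The same idea for a plain FUSE client mount, as a rough sketch using the backup-volfile-servers mount option (hostnames and volume name are placeholders):

    # mount -t glusterfs -o backup-volfile-servers=node2:node3 node1:/myvol /mnt/myvol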

Re: [Gluster-users] Kicking a stuck heal

2018-09-20 Thread Dave Sherohman
e done live? About how long should we expect it to take to upgrade a 23T (4.5T used) replica 2+A volume with three subvolumes?

Re: [Gluster-users] Kicking a stuck heal

2018-09-10 Thread Dave Sherohman
to mean making changes to it. > May be that may convince you to re-consider your stance about the > upgrade to one of the active stable releases on gluster and then we > can see if you still face the problem and we could help fix it in > further releases. Sounds

Re: [Gluster-users] Kicking a stuck heal

2018-09-07 Thread Dave Sherohman
On Fri, Sep 07, 2018 at 10:46:01AM +0530, Pranith Kumar Karampuri wrote: > On Tue, Sep 4, 2018 at 6:06 PM Dave Sherohman wrote: > > > On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote: > > > Is there anything I can do to kick the self-heal back into action and

Re: [Gluster-users] Kicking a stuck heal

2018-09-04 Thread Dave Sherohman
On Tue, Sep 04, 2018 at 05:32:53AM -0500, Dave Sherohman wrote: > Is there anything I can do to kick the self-heal back into action and > get those final 59 entries cleaned up? In response to the request about what version of gluster I'm running (...which I deleted prematurely...

[Gluster-users] Kicking a stuck heal

2018-09-04 Thread Dave Sherohman
ver a full day later, it's still at 59. Is there anything I can do to kick the self-heal back into action and get those final 59 entries cleaned up?
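As a hedged sketch of the standard commands for checking and re-triggering self-heal (the volume name is a placeholder):

    # gluster volume heal myvol info      (list entries still pending heal)
    # gluster volume heal myvol           (kick off a normal index heal)
    # gluster volume heal myvol full      (force a full sweep of the bricks)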

Re: [Gluster-users] design of gluster cluster

2018-06-13 Thread Dave Sherohman
path/to/brick (arb-)host6:/path/to/brick2 host3:/path/to/brick > host6:/path/to/brick (arb-)host1:/path/to/brick2 > > is this a sane command? Yep, looks reasonable to me aside from the "replica 2" needing to be "replica 3".

Re: [Gluster-users] design of gluster cluster

2018-06-12 Thread Dave Sherohman
what is the command used to build this? # gluster volume create my-volume replica 3 arbiter 1 host1:/path/to/brick host2:/path/to/brick arb-host1:/path/to/brick host4:/path/to/brick host5:/path/to/brick arb-host2:/path/to/brick host3:/path/to/brick host6:/path/to/brick arb-host3:/path/to
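Once created, the resulting layout can be sanity-checked with volume info; for a three-subvolume arbiter setup like this, the brick count line should read something like "Number of Bricks: 3 x (2 + 1) = 9":

    # gluster volume info my-volume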

Re: [Gluster-users] glustefs as vmware datastore in production

2018-06-06 Thread Dave Sherohman
datastore if you use replication since all writes are > multiplied. Yep, that's the price you pay for HA. Also, although the writes are multiplied, they're also (at least partially) concurrent, so performance isn't as bad as "divide by the number of replicas".

Re: [Gluster-users] glustefs as vmware datastore in production

2018-05-29 Thread Dave Sherohman
r how to access the volume.)

Re: [Gluster-users] arbiter node on client?

2018-05-07 Thread Dave Sherohman
be. In my case, I have three subvolumes (three replica pairs), which means I need three arbiters and those could be spread across multiple nodes, of course, but I don't think saying "I want 12 arbiters instead of 3!" would be supported.

Re: [Gluster-users] How to set up a 4 way gluster file system

2018-04-27 Thread Dave Sherohman
hey store only file metadata, not file contents, so you can just scrape up a little spare disk space on two of your boxes, call that space an arbiter, and run with it. In my case, I have 10T data bricks and 100G arbiter bricks; I'm using a total of under 1G across all arbiter bricks for 3T of d

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
re successfully added, self heal should start automatically and > you can check the status of heal using the command, > gluster volume heal info OK, done and the heal is in progress. Thanks again for your help!

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter 2] [cthulhu brick] [mordiggian brick] [arbiter 3] Or is there more to it than that?

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
ocated for arbiter bricks if it would be significantly simpler and safer than repurposing the existing bricks (and I'm getting the impression that it probably would be). Does it particularly matter whether the arbiters are all on the same node or on three separate nodes?

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-27 Thread Dave Sherohman
esystem Size Used Avail Use% Mounted on /dev/mapper/gandalf-gluster 885G 55G 786G 7% /var/local/brick0 and the other four have $ df -h /var/local/brick0 Filesystem Size Used Avail Use% Mounted on /dev/sdb1 11T 254G 11T 3% /var/local/brick0

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Dave Sherohman
months already with the current configuration and there are several virtual machines running off the existing volume, so I'll need to reconfigure it online if possible.

[Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Dave Sherohman
-3-4-5-6 still together, then brick 1 will recognize that it doesn't have volume-wide quorum and reject writes, thus allowing brick 2 to remain authoritative and able to accept writes.

Re: [Gluster-users] Failover problems with gluster 3.8.8-1 (latest Debian stable)

2018-02-20 Thread Dave Sherohman
On Fri, Feb 16, 2018 at 05:44:43AM -0600, Dave Sherohman wrote: > On Thu, Feb 15, 2018 at 09:34:02PM +0200, Alex K wrote: > > Have you checked for any file system errors on the brick mount point? > > I hadn't. fsck reports no errors. > > > What about the heal? Doe

Re: [Gluster-users] Failover problems with gluster 3.8.8-1 (latest Debian stable)

2018-02-16 Thread Dave Sherohman
fsck it was enough to trigger gluster to recheck everything. I'll check after it finishes to see whether this ultimately resolves the issue.

Re: [Gluster-users] Failover problems with gluster 3.8.8-1 (latest Debian stable)

2018-02-15 Thread Dave Sherohman
verything from /var/local/brick0, and then re-add it to the cluster as if I were replacing a physically failed disk? Seems like that should work in principle, but it feels dangerous to wipe the partition and rebuild, regardless. On Tue, Feb 13, 2018 at 07:33:44AM -0600, Dave Sherohman wrote: >

[Gluster-users] Failover problems with gluster 3.8.8-1 (latest Debian stable)

2018-02-13 Thread Dave Sherohman
Status of Volume palantir -- Task : Rebalance ID : c38e11fe-fe1b-464d-b9f5-1398441cc229 Status : completed