Re: [Gluster-users] 3.8 Release

2016-06-23 Thread WK
the upcoming 3.7.12 release or go straight to the upcoming 3.8.x branch (and avoid a rip-and-replace upgrade later on). -wk On 6/22/2016 4:18 PM, Lindsay Mathieson wrote: Hey all, have been away for two weeks and I see there has been a 3.8 release with some fascinating new features and a slew

Re: [Gluster-users] add/replace brick corrupting data

2016-05-16 Thread WK
upgraded to 3.7.x yet (still on 3.4 cuz it ain't broke) and are hoping that sharding solves that problem. But it seems every time it looks like things are 'safe' for 3.7.x, something comes up. Fortunately, we like the fuse mount so maybe we are still ok. -wk On 5/16/2016 4:42 AM, Lindsay Mathieson

Re: [Gluster-users] Need urgent help to downgrade 3.7.12 to 3.7.11

2016-06-29 Thread WK
Glad you are back up, but what was the issue? Was it a function of the upgrade, or something wrong with 3.7.12? -wk On 6/29/2016 7:35 AM, Lindsay Mathieson wrote: On 29/06/2016 11:27 PM, Kaushal M wrote: You need to edit the `/var/lib/glusterd/glusterd.info` manually on all the nodes to reduce
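
A minimal sketch of the edit Kaushal describes, assuming systemd hosts; the op-version value shown is illustrative, so confirm the right one for the release you are downgrading to:

    # run on ALL nodes: stop glusterd, lower the operating version, restart
    systemctl stop glusterd
    sed -i 's/^operating-version=.*/operating-version=30711/' \
        /var/lib/glusterd/glusterd.info
    systemctl start glusterd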

Re: [Gluster-users] Vm migration between diff clusters

2017-01-18 Thread WK
for a storage migration, and avoid the NFS setup. You just make sure the mount points are identical as well as defined as a POOL, and of course since you don't get any locking, you also make sure only one thing occurs at a time. But then you use "libvirt blockcopy"
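
A minimal sketch of the blockcopy approach described above; domain, disk, and pool paths are hypothetical. Older libvirt versions required the domain to be transient, and with no locking you must ensure nothing else touches the image:

    virsh dumpxml vm1 > /tmp/vm1.xml      # keep a copy of the definition
    virsh undefine vm1                    # older libvirt needs a transient domain
    virsh blockcopy vm1 vda /pool/vm1.qcow2 --wait --verbose --pivot
    virsh dumpxml vm1 > /tmp/vm1-new.xml  # XML now points at the new image
    virsh define /tmp/vm1-new.xml         # make it persistent again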

Re: [Gluster-users] Vm migration between diff clusters

2017-01-19 Thread WK
older Gluster boxes, still on 3.4. NFS is turned off, perhaps deliberately by the tech who installed it. On the newer versions, I was under the impression that NFS was deprecated in favor of Ganesha? Is that turned on by default? -wk On 1/19/2017 1:02 AM, Kevin Lemonnier wrote: In a pinch you

Re: [Gluster-users] Vm migration between diff clusters

2017-01-19 Thread WK
Hah, we have the opposite problem. We have a resident Firewall Nazi, who is involved when boxes go online. You practically have to have a court order to be able to open up a port on any box/instance, no matter how restricted. "No port for you!" -wk On 1/19/2017 2:30 PM, Kevin Lemonnier

[Gluster-users] So what are people using for 10G nics

2016-08-26 Thread WK
Prices seem to be dropping online at NewEgg etc and going from 2 nodes to 3 nodes for a quorum implies a lot more traffic than would be comfortable with 1G. Any NIC/Switch recommendations for RH/Cent 7.x and Ubuntu 16? -wk

Re: [Gluster-users] gluster volume 3.10.4 hangs

2017-07-31 Thread WK
it figures things out and then continues on. When the missing node returns, the self-heal will kick in and you will be back to 100%. Your other alternative is to turn off quorum. But that risks split-brain. Depending upon your data, that may or may not be a serious issue. -wk

Re: [Gluster-users] Volume hacked

2017-08-06 Thread wk
I'm not sure what you mean by saying "NFS is available by anyone"? Are your gluster nodes physically isolated on their own network/switch? In other words, can an outsider access them directly without having to compromise an NFS client machine first? -bill On 8/6/2017 7:57 AM,

Re: [Gluster-users] Volume hacked

2017-08-06 Thread wk
On 8/6/2017 1:09 PM, lemonni...@ulrar.net wrote: Are your gluster nodes physically isolated on their own network/switch? Nope, impossible to do for us OK, yes, that makes it much harder to secure. You should add VLANs, and/or overlay networks, and/or MAC address
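
A sketch of locking gluster down when physical isolation isn't possible; the subnet is hypothetical, and the brick port range (one port per brick from 49152) should be checked with gluster volume status:

    # allow the trusted storage subnet only; 24007 is glusterd, 49152+ are bricks
    iptables -A INPUT -p tcp -s 10.10.10.0/24 \
        -m multiport --dports 24007:24008,49152:49251 -j ACCEPT
    iptables -A INPUT -p tcp \
        -m multiport --dports 24007:24008,49152:49251 -j DROP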

Re: [Gluster-users] Teaming vs Bond?

2017-06-19 Thread WK
I finally did find some stats on teaming http://rhelblog.redhat.com/2014/06/23/team-driver/ On 6/19/2017 10:42 AM, WK wrote: OK, at least it's not an *issue* with Gluster. I didn't expect any, but you never know. I have been amused at the 'lack' of discussion on Teaming performance found

Re: [Gluster-users] Teaming vs Bond?

2017-06-19 Thread WK
On Sat, Jun 17, 2017 at 2:59 PM, wk <wkm...@bneit.com> wrote: I'm looking at tuning up a new site and the bonding issue came up. A Google search reveals that the gluster docs (and Lindsay)

Re: [Gluster-users] URGENT - Cheat on quorum

2017-05-21 Thread WK
On 5/21/2017 7:00 PM, Ravishankar N wrote: On 05/22/2017 03:11 AM, W Kern wrote: gluster volume set VOL cluster.quorum-type none from the remaining 'working' node1 and it simply responds with "volume set: failed: Quorum not met. Volume operation not allowed" how do you FORCE gluster to

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread WK
On 5/30/2017 3:24 PM, Ric Wheeler wrote: As a community, each member needs to make sure that their specific use case has the resources it needs to flourish. If some team cares about Gluster in openstack, they should step forward and provide the engineering and hardware resources needed to

Re: [Gluster-users] Recovering from Arb/Quorum Write Locks

2017-05-29 Thread wk
On 5/28/2017 9:24 PM, Ravishankar N wrote: I think you should try to find if there were self-heals pending to gluster1 before you brought gluster2 down or the VMs should not have paused. yes, if I watch for and then force outstanding heals (if the self-heal hasn't kicked in) prior to

Re: [Gluster-users] [Gluster-devel] Fwd: Re: GlusterFS removal from Openstack Cinder

2017-05-30 Thread WK
On 5/30/2017 4:19 PM, Ric Wheeler wrote: On 05/30/2017 06:54 PM, WK wrote: Why is Red Hat not interested in Gluster in OpenStack? It's obvious from my years lurking on the OpenStack mailing lists that the OpenStack community is most comfortable with Ceph. When asked about Gluster, I've

[Gluster-users] Teaming vs Bond?

2017-06-17 Thread wk
I'm looking at tuning up a new site and the bonding issue came up. A Google search reveals that the gluster docs (and Lindsay) recommend balance-alb bonding. However, "team"ing came up, which I wasn't familiar with. It's already in RH6/7 and Ubuntu, and their GitHub page implies it's stable.
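
A minimal sketch of a team on an RH7-style box via nmcli, with hypothetical interface names; the runner name is teamd's rough equivalent of a bonding mode:

    nmcli con add type team ifname team0 con-name team0 \
        config '{"runner": {"name": "loadbalance"}}'
    nmcli con add type team-slave ifname eth0 master team0
    nmcli con add type team-slave ifname eth1 master team0
    nmcli con up team0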

Re: [Gluster-users] How to remove dead peer, sorry urgent again :(

2017-06-10 Thread WK
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote: Three good nodes - vnb, vng, vnh and one dead - vna. From node vng:

    root@vng:~# gluster peer status
    Number of Peers: 3

    Hostname: vna.proxmox.softlog
    Uuid: de673495-8cb2-4328-ba00-0419357c03d7
    State: Peer in Cluster (Disconnected)

    Hostname:
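
A sketch of the usual way out of this state, with hypothetical volume name and brick path; the dead peer's bricks have to be removed first (lowering the replica count to what remains, shown here as 3), then the peer can be detached:

    gluster volume remove-brick VOLNAME replica 3 \
        vna.proxmox.softlog:/tank/vmdata force
    gluster peer detach vna.proxmox.softlog force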

Re: [Gluster-users] Performance drop from 3.8 to 3.10

2017-09-22 Thread WK
improvements. I therefore don't have any baseline stats to compare any performance diffs. I'm curious as to what changed in 3.10 that would account for any change in performance from 3.8 and in a similar vein what changes to expect in 3.12.x as we are thinking about making that jump soon. -wk

Re: [Gluster-users] Peer isolation while healing

2017-10-09 Thread WK
You have replica2 so you can't really take 50% of your cluster down without turning off quorum (and risking split brain). So detaching the rebuilding peer is really not an option. If you had replica3 or an arbiter, you CAN detach or isolate the problem peer.  I've done things like change the

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-08 Thread WK
oVirt/RHEV. Is it possible that your platform is triggering a protective response on the VMs (by suspending)? -wk On 9/8/2017 5:13 AM, Gandalf Corvotempesta wrote: 2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szal...@gmail.com>: Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O a

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-08 Thread WK
I've always wondered what the scenario for these situations are (aside from the doc description of nodes coming up and down). Aren't Gluster writes atomic for all nodes?  I seem to recall Jeff Darcy stating that years ago. So a clean shutdown for maintenance shouldn't be a problem at all. If

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-09 Thread WK
On 9/8/2017 11:05 PM, Pavel Szalbot wrote: When we return the c1g node, we do see a "pause" in the VMs as the shards heal. By pause meaning a terminal session gets spongy, but that passes pretty quickly. Hmm, do you see any errors in VM's dmesg? Or any other reasons for "sponginess"? No,

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-09 Thread WK
to be a potential explanation -wk On 9/9/2017 6:49 AM, Pavel Szalbot wrote: Yes, this is my observation so far. On Sep 9, 2017 13:32, "Gionatan Danti" <g.da...@assyoma.it> wrote: So, to recap: - with gfapi, your VMs crashes/mount read-o

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-09 Thread WK
fine. And trust me, there had been A LOT of various crashes, reboots and kills of nodes. Maybe it's a version thing? A new bug in the new gluster releases that doesn't affect our 3.7.15. On Sat, Sep 09, 2017 at 10:19:24AM -0700, WK wrote: Well, that makes me feel better. I've seen all

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-09-10 Thread WK
On 9/10/2017 2:02 AM, Pavel Szalbot wrote: WK: I use bonded 2x10Gbps and I do get crashes only in heavy I/O situations (fio). Upgrading system (apt-get dist-upgrade) was ok, so this might be even related to amount of IOPS. -ps Well, 20Gbps of writes could overwhelm a lot of DFS clusters

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-24 Thread WK
On 8/23/2017 10:44 PM, Pavel Szalbot wrote: Hi, On Thu, Aug 24, 2017 at 2:13 AM, WK <wkm...@bneit.com> wrote: The default timeout for most OS versions is 30 seconds and the Gluster timeout is 42, so yes you can trigger an RO event. I get read-only mount within approximately 2 seconds
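
The two timeouts discussed above map to these knobs; a minimal sketch for reference, with the guest device name hypothetical:

    # gluster side: 42s is the default ping timeout
    gluster volume set VOLNAME network.ping-timeout 42
    # guest side: raise the disk timeout above the gluster timeout so the
    # filesystem is not remounted read-only during a brief brick outage
    echo 90 > /sys/block/sda/device/timeout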

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread WK
On 8/25/2017 12:56 AM, Gionatan Danti wrote: WK wrote: 2 node plus Arbiter. You NEED the arbiter or a third node. Do NOT try 2 node with a VM This is true even if I manage locking at application level (via virlock or sanlock)? We ran Rep2 for years on 3.4.  It does work if you

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread WK
On 8/25/2017 2:21 PM, lemonni...@ulrar.net wrote: This concern me, and it is the reason I would like to avoid sharding. How can I recover from such a situation? How can I "decide" which (reconstructed) file is the one to keep rather than to delete? No need, on a replica 3 that just doesn't

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-25 Thread WK
On 8/25/2017 12:43 PM, lemonni...@ulrar.net wrote: I think you are talking about DRBD 8, which is indeed very easy. DRBD 9 on the other hand, which is the one that compares to gluster (more or less), is a whole other story. Never managed to make it work correctly either Yes, and I noticed

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-23 Thread WK
That really isn't an arbiter issue or for that matter a Gluster issue. We have seen that with vanilla NAS servers that had some issue or another. Arbiter simply makes it less likely to be an issue than replica 2, but in turn arbiter is less 'safe' than replica 3. However, in regards to Gluster

Re: [Gluster-users] GlusterFS as virtual machine storage

2017-08-23 Thread WK
The arbiter+sharding go a long way in solving that issue. -wk

Re: [Gluster-users] data corruption - any update?

2017-10-04 Thread WK
Just so I know: is it correct to assume that this corruption issue is ONLY involved if you are doing rebalancing with sharding enabled? So if I am not doing rebalancing I should be fine? -bill On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: On Wed, Oct 4, 2017 at 10:51 AM, Nithya

[Gluster-users] Hybrid drives SSHD on Gluster peers

2017-10-10 Thread WK
Most of those reviews were when there was a significant price difference. -wk

Re: [Gluster-users] Ubuntu Xenial 3.12.2 Gluster DEB packages are missing VIRT group settings

2017-10-20 Thread WK
Will do, will repost as an issue. I may repost on Monday about whether the virt settings I am using are current for 3.12; I've also seen that this list is dead on the weekend. -wk On 10/20/2017 11:25 AM, Kaleb S. KEITHLEY wrote: On 10/20/2017 01:06 PM, WK wrote: 2. Can someone get the correct
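
For reference, applying the virt group once the file is in place; the settings normally ship as a plain-text group file the Ubuntu packages omitted, so this assumes you have restored it (e.g. from the gluster source tree) to the path below:

    # group file expected at /var/lib/glusterd/groups/virt
    gluster volume set VOLNAME group virt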

[Gluster-users] Ubuntu Xenial 3.12.2 Gluster DEB packages are missing VIRT group settings

2017-10-20 Thread WK
for a newbie to be able to get a hold of easily. -wk

[Gluster-users] So how badly will Gluster be affected by the Intel 'fix'

2018-01-04 Thread WK
I'm reading that the new kernel will slow down context switches. That is of course a big deal with FUSE mounts. Has anybody installed the new kernels yet and observed any performance degradation? -wk

Re: [Gluster-users] Syntax for creating arbiter volumes in gluster 4.0

2017-12-20 Thread WK
I definitely prefer replica 2 arbiter 1. It makes more sense and is more accurate, since that scenario has only two copies of the actual data. -wk On 12/20/2017 2:14 AM, Ravishankar N wrote: Hi, The existing syntax in the gluster CLI for creating arbiter volumes is `gluster volume
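
A sketch of the two syntaxes under discussion, with hypothetical hosts and brick paths; in both cases the last brick listed becomes the arbiter:

    # existing syntax (three bricks, one of them metadata-only):
    gluster volume create VOLNAME replica 3 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb
    # syntax proposed in this thread, counting only the data copies:
    gluster volume create VOLNAME replica 2 arbiter 1 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/arb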

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-26 Thread WK
+1 I also would like to see those instructions. I've been interested in NFS-Ganesha with Gluster, but there weren't obvious references that were up to date. (Let alone introduction of StorHaug) -wk On 2/25/2018 10:28 PM, Serkan Çoban wrote: I would like to see the steps for reference, can

Re: [Gluster-users] Reconstructing files from shards

2018-04-23 Thread WK
up.  In the happy case, you can test it by comparing the md5sum of the file from the mount to that of your stitched file." We tested it with some VM files and it indeed worked fine. That was probably on 3.10.1 at the time. -wk On 4/20/2018 12:44 PM, Jamie Lawrence wrote: Hel
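
A sketch of the stitching test described above, run against one brick; paths are hypothetical, and it assumes every shard exists (a sparse file whose holes never produced a shard would need zero-filling at the right offsets instead):

    base=/bricks/b1/images/vm1.qcow2
    gfid=$(getfattr -e hex -n trusted.gfid "$base" \
           | awk -F'0x' '/gfid/{print $2}')
    uuid=$(echo "$gfid" \
           | sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')
    cp "$base" /tmp/stitched.qcow2
    for s in $(ls /bricks/b1/.shard/ | grep "^$uuid\." | sort -t. -k2 -n); do
        cat "/bricks/b1/.shard/$s" >> /tmp/stitched.qcow2
    done
    md5sum /tmp/stitched.qcow2 /mnt/gluster/images/vm1.qcow2  # should match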

Re: [Gluster-users] Reconstructing files from shards

2018-04-23 Thread WK
On 4/23/2018 11:46 AM, Jamie Lawrence wrote: Thanks for that, WK. Do you know if those images were sparse files? My understanding is that this will not work with files with holes. We typically use qcow2 images (compat 1.1) with metadata preallocated (so yes, sparse). So we may

Re: [Gluster-users] Fwd: VM freeze issue on simple gluster setup.

2019-12-12 Thread WK
On 12/12/2019 4:34 AM, Ravishankar N wrote: On 12/12/19 4:01 am, WK wrote: so I can get some sort of resolution on the issue (i.e. is it hardware, Gluster, etc.) I guess what I really need to know is 1) Node 2 complains that it can't reach node 1 and node 3. If this was an OS/Hardware

[Gluster-users] Fwd: VM freeze issue on simple gluster setup.

2019-12-11 Thread WK
have error message complaining about not reaching node2 2) how significant is it that the node was running 6.5 while node 1 and node 2 were running 6.4 -wk Forwarded Message Subject:VM freeze issue on simple gluster setup. Date: Thu, 5 Dec 2019 16:23:35 -0800 From

[Gluster-users] VM freeze issue on simple gluster setup.

2019-12-05 Thread WK
on from 5.x up and none of the others have had this issue. Any advice would be appreciated. Sincerely, Wk

Re: [Gluster-users] determine filename from shard?

2019-10-15 Thread WK
great, I'll dig into it when I get a chance and report back. -wk On 10/14/2019 10:05 AM, Amar Tumballi wrote: Awesome, thanks! Then, I hope this https://github.com/gluster/glusterfs/commit/ab2558a3e7a1b2de2d63a3812ab4ed58d10d8619 is included in the build. What it means is, if you just list
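
For reference, a manual version of the lookup, independent of that commit: a shard is named <gfid>.<block-number>, and on each brick .glusterfs/aa/bb/<gfid> is a hard link to the base file, so find -samefile recovers the path. Brick path and shard name below are hypothetical:

    shard="f4f1b9be-2ae4-4a34-9a64-0f56e6f48a2e.42"
    g="${shard%.*}"                 # gfid of the file the shard belongs to
    find /bricks/b1 -samefile "/bricks/b1/.glusterfs/${g:0:2}/${g:2:2}/${g}" \
        ! -path '*/.glusterfs/*'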

Re: [Gluster-users] determine filename from shard?

2019-10-14 Thread WK
Eventually figured out the problem VM using other methods and resolved the issue, but we would still like to know if there is a script or recipe to determine what file a shard may belong to, as that would have sped up the resolution. -wk

[Gluster-users] set: failed: Quorum not met. Volume operation not allowed.

2020-08-26 Thread WK
set commands due to the quorum and spits out the set failed error. So in modern Gluster, what is the preferred method for starting and mounting a single node/volume that was once part of an actual 3-node cluster? Thanks. -wk

Re: [Gluster-users] set: failed: Quorum not met. Volume operation not allowed.

2020-08-27 Thread WK
No luck. Same problem. I stopped the volume. I ran the remove-brick command. It warned about not being able to migrate files from removed bricks and asked if I want to continue. When I say 'yes', Gluster responds with 'failed: Quorum not met Volume operation not allowed' -wk On 8/26/2020

Re: [Gluster-users] set: failed: Quorum not met. Volume operation not allowed. SUCCESS

2020-08-27 Thread WK
-type none. Finally I used Karthik's remove-brick command and it worked this time, and I am now copying off the needed image. So I guess order counts. Thanks. -wk On 8/27/2020 12:47 PM, WK wrote: No luck. Same problem. I stopped the volume. I ran the remove-brick command. It warned about not
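
A sketch of the order that finally worked, reconstructed from this thread with hypothetical names; both quorum settings are relaxed first, then the volume is shrunk to the one surviving brick:

    gluster volume set VOLNAME cluster.server-quorum-type none
    gluster volume set VOLNAME cluster.quorum-type none
    gluster volume remove-brick VOLNAME replica 1 \
        dead1:/bricks/b1 dead2:/bricks/b1 force
    gluster volume start VOLNAME    # then mount and copy the image off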

Re: [Gluster-users] Gluster monitoring

2020-10-27 Thread WK
https://github.com/gluster/gstatus we run this from an Ansible-driven cron job and check for the healthy signal in the status, as well as looking for healing files that seem to persist. We have a number of gluster clusters and we have found its warnings both useful and timely. -wk On 10/26
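
A minimal sketch of such a cron-driven check; gstatus's output flags and field names vary by version, so check gstatus --help before relying on these:

    #!/bin/bash
    # alert if gstatus fails or does not report a healthy cluster
    out=$(gstatus -o json 2>&1) \
        || { logger -t gluster-check "gstatus failed: $out"; exit 1; }
    echo "$out" | grep -qi '"healthy"' \
        || logger -t gluster-check "cluster unhealthy"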

Re: [Gluster-users] Latest NFS-Ganesha Gluster Integration docs

2020-06-29 Thread WK
much of a difference. You get a smaller number of heals, but they are bigger and take longer to sync. Does anyone know why the difference and the reasoning involved? -WK

[Gluster-users] Gluster 7 or 8?

2020-12-17 Thread WK
? Is it considered safe? How much of an improvement have people seen with it? For us, classic arbiter works great, so we would need a reason to go with thin, but speed would be one of them if we are giving up safety. -wk

[Gluster-users] Thin Arbiter (was Gluster 7 or 8?)

2020-12-18 Thread WK
OK, so it seems 8.x is the way to go. So what about Thin Arbiter? Is anyone using it in production? -wk On 12/18/2020 12:59 AM, Olaf Buitelaar wrote: It is in their release notes; https://docs.gluster.org/en/latest/release-notes/7.9/

Re: [Gluster-users] Very poor GlusterFS Volume performance (glusterfs 8.2)

2020-11-09 Thread WK
More Gbit/s ports. Round-robin teamd is really easy to set up, or use the traditional bonding in its various flavors. You probably have some spare NIC cards lying around, so it's usually a 'freebie'. Of course best case would be to make the jump to 10Gb/s kit. -wk

Re: [Gluster-users] Gluster monitoring

2020-10-27 Thread WK
sorry, I didn't notice you had already looked at gstatus. Nonetheless, with its JSON output you can certainly cover the issues you described, i.e. "When Brick went down (crash, failure, shutdown), node failure, peering issue, on-going healing", which is how we use it. -wk On 10/27/20

Re: [Gluster-users] [EXT] Re: [Glusterusers] State of the gluster project

2023-10-28 Thread wk
at the time, I found that it required more moving parts (compared to Gluster, which is of course silly simple to get going) for not much improvement, but I'd assume they have improved that since then. -wk Kind regards, Alex. /Z On Sat, 28 Oct 2023 at 11:21, Strahil Nikolov

Re: [Gluster-users] [EXT] Re: [Glusterusers] Arbiter node in slow network

2023-01-04 Thread WK
normal ops and even when healing after host maintenance it's less than 100Mb/s -wk On 12/31/2022 12:43 AM, Alan Orth wrote: Hi Filipe, I think it would probably be fine. The Red Hat Storage docs list the important thing being /5ms latency/, not link speed: https://access.redhat.com

Re: [Gluster-users] [EXT] [Glusterusers] RESOLVED log file spewing on one node but not the other

2023-07-23 Thread wk
to be refreshed, it had been up for hundreds of days. -wk On 7/21/23 12:02 PM, W Kern wrote: we have an older 2+1 arbiter gluster cluster running 6.10 on Ubuntu 18 LTS. It has run beautifully for years, only occasionally needing attention as drives have died, etc. Each peer has two volumes, G1 and G2

Re: [Gluster-users] thin arbiter vs standard arbiter

2018-08-02 Thread WK Lists
Hi WK, There are a few patches [1] that are still undergoing review. It would be good to wait for some more time until trying it out. If you are interested in testing, I'll be happy to inform you once they get merged. [1] https://review.gluster.org/#/c/20095/, https