Re: [Gluster-users] Client disconnections, memory use

2019-11-19 Thread Jamie Lawrence
> Hi,
> For the memory increase, please capture statedumps of the process at
> intervals of an hour and send it across.
> https://docs.gluster.org/en/latest/Troubleshooting/statedump/ describes how
> to generate a statedump for the client process.

First, apologies - I missed this email
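
A minimal sketch of the hourly capture being requested, assuming the behavior described in the linked statedump docs (the fuse client writes a timestamped dump under /var/run/gluster, or the configured statedump path, on SIGUSR1); the pgrep pattern and volume name are hypothetical, so adjust them to the actual mount:

    # Find the glusterfs client process for the volume in question,
    # then signal it once an hour; each SIGUSR1 produces a statedump
    # in /var/run/gluster.
    pid=$(pgrep -f 'glusterfs.*<volname>' | head -n1)
    while true; do
        kill -USR1 "$pid"
        sleep 3600
    done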

[Gluster-users] Mysterious volume unmounts

2019-11-19 Thread Jamie Lawrence
Hello, I have a bizarre situation, and cannot figure out what is going on. One Gluster volume will spontaneously unmount. This happens across multiple clients, but only with this volume - other gluster volumes from the same cluster mounted on the same guests do not have this problem. This is

[Gluster-users] Client disconnections, memory use

2019-11-12 Thread Jamie Lawrence
Glusternauts, I have a 3x3 cluster running 5.9 under Ubuntu 16.04. We migrated clients from a different, much older, cluster. Those clients now run the 5.9 client, and spontaneously disconnect. The client died with signal 15, but no user killed it, and I can't imagine why another daemon would have.
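
One way to find out who delivered that signal 15 (my suggestion, not something from the thread) is an audit rule on the kill syscall; the key name here is arbitrary:

    # Log every kill(2) that delivers SIGTERM (a1=15 is the signal
    # argument), then search the audit log for the sender's pid/comm.
    auditctl -a always,exit -F arch=b64 -S kill -F a1=15 -k gluster-sigterm
    ausearch -k gluster-sigterm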

Re: [Gluster-users] Self-heals gone wild

2019-10-09 Thread Jamie Lawrence
> On Oct 9, 2019, at 1:37 AM, Ravishankar N wrote:
>
> It looks like your clients are running glusterfs-3.5 or older?
> afr_log_self_heal_completion_status() is a function that existed in the
> really old replication code before it was refactored. Please use a newer
> client, preferably
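
A quick way to check both ends, sketched with a placeholder volume name: list the clients connected to the bricks from a server, and confirm what is actually installed on each client host.

    # On a server: list clients currently connected to the bricks.
    gluster volume status <VOLNAME> clients

    # On each client host: confirm the installed fuse client version.
    glusterfs --version | head -n1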

[Gluster-users] Self-heals gone wild

2019-10-08 Thread Jamie Lawrence
Hello, I recently stood up a 3x2 (soon to be 3x3) distribute-replicate volume on 5.9, running on CentOS 7.7.

Volume Name: test_stage1_shared
Type: Distributed-Replicate
Volume ID: 99674d15-7dce-480e-b642-eaf7da72c1a1
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type:

[Gluster-users] Cannot write more than 512 bytes to gluster vol

2019-03-07 Thread Jamie Lawrence
I just stood up a new cluster running 4.1.7, my first experience with version 4. It is a simple replica 3 volume:

gluster v create la1_db_1 replica 3 \
    gluster-10g-1:/gluster-bricks/la1_db_1/la1_db_1 \
    gluster-10g-2:/gluster-bricks/la1_db_1/la1_db_1 \
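
A hypothetical reproduction sketch for the symptom in the subject line, using dd against a fuse mount of the new volume; the mount point is an assumption, not taken from the thread:

    # Writes of 512 bytes succeed; anything larger reportedly fails.
    dd if=/dev/zero of=/mnt/la1_db_1/probe bs=512 count=1
    dd if=/dev/zero of=/mnt/la1_db_1/probe bs=1024 count=1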

[Gluster-users] Expanding, rebalance and sharding on 3.12.13

2018-09-11 Thread Jamie Lawrence
Hello, I have a 3 node cluster running 2 three-way dist/replicate volumes for oVirt, plus three new nodes that I'd like to add. I've unfortunately not had time to closely follow this list the last few months, and am having trouble finding any status on the corruption issue with
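
For context, the expansion itself would look roughly like the sketch below (host and brick names hypothetical); the open question in the post is whether the shard-corruption issue tied to rebalancing was fixed by 3.12.13, so that should be confirmed before the rebalance step:

    # Add one brick per new node, keeping each replica set spread
    # across the three new hosts.
    gluster volume add-brick <VOLNAME> replica 3 \
        new1:/bricks/b1 new2:/bricks/b1 new3:/bricks/b1

    # Then move existing data onto the new bricks and watch progress.
    gluster volume rebalance <VOLNAME> start
    gluster volume rebalance <VOLNAME> status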

Re: [Gluster-users] Reconstructing files from shards

2018-04-27 Thread Jamie Lawrence
> On Apr 26, 2018, at 9:00 PM, Krutika Dhananjay wrote:
> But anyway, why is copying data into new unsharded volume disruptive for you?

The copy itself isn't; blowing away the existing volume and recreating it is. That is for the usual reasons - storage on the cluster

Re: [Gluster-users] Reconstructing files from shards

2018-04-23 Thread Jamie Lawrence
nt, blowing away the volume and reconstructing. Which is a problem.

-j

> -wk
>
> On 4/20/2018 12:44 PM, Jamie Lawrence wrote:
>> Hello,
>>
>> So I have a volume on a gluster install (3.12.5) on which sharding was
>> enabled at some point recently. (D

[Gluster-users] Reconstructing files from shards

2018-04-20 Thread Jamie Lawrence
Hello, So I have a volume on a gluster install (3.12.5) on which sharding was enabled at some point recently. (I don't know how it happened; it may have been an accidental run of an old script.) So it has been happily sharding behind our backs, and it shouldn't have. I'd like to turn sharding
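
For anyone finding this thread later, a rough sketch of manual reassembly under the standard shard layout: the first block is the base file at its normal brick path, and the remaining blocks live under the brick's .shard directory as <GFID>.1, <GFID>.2, and so on. Paths and the <GFID> placeholder are hypothetical, and this assumes a non-sparse file; missing shard numbers represent holes, and naive concatenation would misplace every later offset.

    # Read the base file's GFID from its trusted.gfid xattr on the brick.
    getfattr -n trusted.gfid -e hex /bricks/b1/vol/path/to/file

    # Concatenate the base file and its shards in numeric order
    # (sort on the number after the dot, since plain ls puts .10 before .2).
    cp /bricks/b1/vol/path/to/file /tmp/reassembled
    for s in $(ls /bricks/b1/vol/.shard/<GFID>.* | sort -t. -k2 -n); do
        cat "$s" >> /tmp/reassembled
    done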

Re: [Gluster-users] Fixing a rejected peer

2018-03-07 Thread Jamie Lawrence
> On Mar 7, 2018, at 4:39 AM, Atin Mukherjee wrote:
>
> Please run 'gluster v get all cluster.max-op-version' and whatever value it
> throws up should be used to bump up the cluster.op-version (gluster v set all
> cluster.op-version ). With that if you restart the
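
Spelled out, the sequence being described (the placeholder is whatever value the first command prints):

    # Ask the cluster for the highest op-version every peer supports...
    gluster volume get all cluster.max-op-version
    # ...and bump the cluster to it.
    gluster volume set all cluster.op-version <value-from-above>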

Re: [Gluster-users] Fixing a rejected peer

2018-03-06 Thread Jamie Lawrence
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee wrote:

I'm tempted to repeat - down things, copy the checksum the "good" ones agree on, start things; but given that this has turned into a balloon-squeezing exercise, I want to make sure I'm not doing this the wrong way.

Re: [Gluster-users] Fixing a rejected peer

2018-03-06 Thread Jamie Lawrence
Just following up on the below after having some time to track down the differences. On the bad peer, the `tier-enabled=0` line in .../vols//info was removed after I copied it over, and, as mentioned, the cksum file changed to a value that doesn't match the others. The logs only complain about
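
A small sketch for watching that drift, assuming ssh access and a placeholder volume name; each peer stores its view of the volume checksum in the same file:

    # Compare the volume cksum file across all peers at a glance.
    for h in peer-1 peer-2 peer-3; do
        printf '%s: ' "$h"
        ssh "$h" cat /var/lib/glusterd/vols/<VOLNAME>/cksum
    done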

Re: [Gluster-users] Fixing a rejected peer

2018-03-06 Thread Jamie Lawrence
> On Mar 5, 2018, at 6:41 PM, Atin Mukherjee <amukh...@redhat.com> wrote:
>
> On Tue, Mar 6, 2018 at 6:00 AM, Jamie Lawrence <jlawre...@squaretrade.com> wrote:
> Hello,
>
> So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 vol

[Gluster-users] Fixing a rejected peer

2018-03-05 Thread Jamie Lawrence
Hello, So I'm seeing a rejected peer with 3.12.6. This is with a replica 3 volume. It actually began as the same problem with a different peer. I noticed it with (call it) gluster-2, when I couldn't create a new volume. I compared /var/lib/glusterd between them, and found that somehow the options
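
For readers landing here, a rough sketch of the documented peer-rejected recovery (not necessarily what resolved this particular thread): on the rejected peer only, clear everything under /var/lib/glusterd except glusterd.info, then re-probe a good peer.

    systemctl stop glusterd
    # Keep the peer's identity file; remove the out-of-sync metadata.
    find /var/lib/glusterd -mindepth 1 ! -name glusterd.info -delete
    systemctl start glusterd
    gluster peer probe <good-peer>
    systemctl restart glusterd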

[Gluster-users] Volume mounts read-only

2017-07-21 Thread Jamie Lawrence
Hello Glusterites, I have a volume that will not mount read/write. V3.10.3 on CentOS 7; this is a replica-3 volume, mounted with the fuse client. This is in support of an oVirt installation, but I've isolated the problem to Gluster. `gluster peer status` looks normal, as does a `gluster v
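
Two things worth ruling out in a situation like this (my suggestion, not from the thread): the volume-level read-only option, and what the fuse mount log says at mount time. The volume name is a placeholder; the client log under /var/log/glusterfs/ is named after the mount point, with slashes turned into dashes.

    # Is the volume itself flagged read-only?
    gluster volume get <VOLNAME> features.read-only

    # Check the client-side mount log, e.g. for a mount at /mnt/vol:
    less /var/log/glusterfs/mnt-vol.log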

Re: [Gluster-users] 120k context switches on GlusterFS nodes

2017-05-17 Thread Jamie Lawrence
> On May 17, 2017, at 10:20 AM, mabi wrote:
>
> I don't know exactly what kind of context-switches it was but what I know is
> that it is the "cs" number under "system" when you run vmstat.
>
> Also I use the percona linux monitoring template for cacti
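
For concreteness, the counter in question is the cs column that vmstat prints under the system heading:

    # Five one-second samples; context switches per second appear
    # in the "cs" column under the "system" group.
    vmstat 1 5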

[Gluster-users] "Fake" distributed-replicated volume

2017-04-07 Thread Jamie Lawrence
Greetings, Glusterites - I have a suboptimal situation, and am wondering if there is any way to create a replica-3 distributed/replicated volume with three machines. I saw in the docs that the create command will fail with multiple bricks on the same peer; is there a way around that/some other
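
One arrangement that I believe satisfies the create command with only three machines: two bricks per host, ordered so that each replica set takes one brick from each host (bricks are grouped into replica sets in the order given). Host and brick paths below are hypothetical.

    # Replica set 1 = first three bricks, replica set 2 = next three;
    # each set spans all three hosts, so no set has two bricks on one peer.
    gluster volume create fakevol replica 3 \
        host1:/bricks/b1 host2:/bricks/b1 host3:/bricks/b1 \
        host1:/bricks/b2 host2:/bricks/b2 host3:/bricks/b2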

Re: [Gluster-users] one brick vs multiple brick on the same ZFS zpool.

2017-03-08 Thread Jamie Lawrence
Not necessarily. ZFS does things quite differently from other filesystems, and can be faster than hardware RAID. I’d recommend spending a bit of time reading up - the Linux ZFS-discuss list archives are a great place to start: http://list.zfsonlinux.org/pipermail/zfs-discuss/ . That said, if

Re: [Gluster-users] A question of GlusterFS dentries!

2016-11-04 Thread Jamie Lawrence
> On Nov 2, 2016, at 4:54 AM, Keiviw wrote:
>
> What is rewinddir() used for? In other words, what are the situations in
> which we use rewinddir?

‘man rewinddir’

http://lmgtfy.com/?q=rewinddir

Re: [Gluster-users] Production cluster planning

2016-09-26 Thread Jamie Lawrence
> On Sep 26, 2016, at 4:26 AM, Lindsay Mathieson wrote:
>
> On 26/09/2016 8:18 PM, Gandalf Corvotempesta wrote:
>> No one?
>> And what about gluster on ZFS? Is that fully supported?
>
> I certainly hope so because I'm running a Replica 3 production cluster on

Re: [Gluster-users] Centos 7.2 packages/repo

2016-08-18 Thread Jamie Lawrence
> On Aug 13, 2016, at 2:31 AM, Niels de Vos wrote:
> All dependencies get resolved, including glusterfs-fuse, glusterfs-libs
> and glusterfs-api that are missing for your try. The main mirror lists
> all these packages too, I am not sure how they can be missing when you
> use

[Gluster-users] Centos 7.2 packages/repo

2016-08-12 Thread Jamie Lawrence
Glusterites, I’m getting back to a project which involves upgrading a cluster, and am confused by the current state of the packages. The machine I’m dealing with was freshly upgraded to CentOS 7.2.1511, and is one of four that will be identical Gluster/oVirt servers. I had a source defined
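
For what it's worth, the Storage SIG route is the usual answer on CentOS 7; the package names below are as I recall them from the SIG, not from this thread:

    # Pull in the CentOS Storage SIG repo definition, then the packages.
    yum install centos-release-gluster
    yum install glusterfs-server glusterfs-fuse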

Re: [Gluster-users] Gluster not saturating 10gb network

2016-08-04 Thread Jamie Lawrence
There is no way you’ll see 6 Gb/s out of a single disk. I think you’re referring to the rated SATA link speed, which has nothing to do with the actual data rates you’ll see from the spinning rust. You might see ~130-150 MB/s from a single platter in really nice, artificial workloads, more in RAID
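
A quick way to measure what a single disk actually sustains, bypassing the page cache (the device name is a placeholder; this only reads, but be careful on production hosts anyway):

    # Sequential read throughput straight off the device, cache bypassed.
    dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct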

Re: [Gluster-users] Gluster upgrade planning

2016-05-11 Thread Jamie Lawrence
> On May 10, 2016, at 3:27 PM, Lindsay Mathieson <lindsay.mathie...@gmail.com> wrote:
>
> On 11/05/2016 4:40 AM, Jamie Lawrence wrote:
>> - Is anyone currently running Gluster on Debian or Ubuntu in production? We
>> would prefer to get off RHEL-flavored hosts

[Gluster-users] Gluster upgrade planning

2016-05-10 Thread Jamie Lawrence
Hello all, We are working on an upgrade plan that touches a number of things, one of them being our Gluster setup. I wanted to throw some of this out and see if anyone sees any glaring problems with it. One of our constraints is that our current Gluster installation is production and cannot