[Gluster-users] lists.gluster.org issues this weekend
Hello folks,

We have discovered that for the last few weeks our mailman server was used for a spam attack. The attacker made use of the "+" feature offered by Gmail and Hotmail: mail sent to exam...@hotmail.com or to example+...@hotmail.com goes to the same inbox. We were constantly hit with requests to subscribe a few inboxes, and these requests overloaded our mail server so much that it gave up. We detected this failure because a postmortem email to gluster-in...@gluster.org bounced.

Any emails sent to our mailman server may have been on hold for the last 24 hours or so. They should be processed now as your email provider re-attempts delivery.

For the moment, we've banned subscribing with an email address containing a "+" in the name. If you are already subscribed to the lists with a "+" in your email address, you will continue to be able to use the lists. We're looking at banning the spam IP addresses from being able to hit the web interface at all. When we have a working alternative, we will look at removing the current ban on "+" in addresses.

Apologies for the outage, and a big shout out to Michael for taking time out of his weekend to debug and fix the issue.

--
nigelb

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
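For context, the "+" sub-addressing trick can be neutralised on the list-server side by normalising addresses before comparing or rate-limiting them. A minimal sketch of that idea, assuming a plain POSIX shell; the `normalize_addr` helper is illustrative, not part of Mailman:

```shell
# Hypothetical helper: strip a "+tag" from the local part so that
# example+anything@hotmail.com and example@hotmail.com compare equal.
normalize_addr() {
  printf '%s\n' "$1" | sed -E 's/\+[^@]*@/@/'
}

normalize_addr "example+listbomb42@hotmail.com"   # prints example@hotmail.com
```

With this, repeated subscription requests to many "+tag" variants of one inbox collapse to a single address that can be rate-limited or deduplicated.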
Re: [Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
----- Original Message -----
> From: "Sam McLeod"
> To: gluster-users@gluster.org
> Sent: Friday, September 15, 2017 6:42:13 AM
> Subject: [Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)
>
> Howdy,
>
> I'm setting up several Gluster 3.12 clusters running on CentOS 7 and am
> having issues with glusterd.log and glustershd.log both being filled with
> errors relating to null client errors and client-callback functions.
>
> They seem to be related to high CPU usage across the nodes, although I don't
> have a way of confirming that (suggestions welcomed!).
>
> In /var/log/glusterfs/glusterd.log:
>
> csvc_request_init+0x7f) [0x7f382007b93f]
> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] )
> 0-client_t: null client [Invalid argument]
> [2017-09-15 00:54:14.454022] E [client_t.c:324:gf_client_ref]
> (-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7f382007e7e8]
> -->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7f382007b93f]
> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] )
> 0-client_t: null client [Invalid argument]

This issue of spurious logging is fixed in v3.12.1. Thanks to Nithya for bringing this issue to my notice. The issue of high CPU usage seems to be a different issue, provided the logging itself is not driving the CPU usage.

> This is repeated _thousands_ of times and is especially noisy when any node
> is running gluster volume set .
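On the "provided logging itself is not driving cpu usage" caveat: one quick way to check is to measure how fast the error is being written. A rough sketch, assuming the log path from the report:

```shell
# count_null_client <logfile>: how many spurious "null client" errors
# the file contains; run it twice, a minute apart, to estimate the
# write rate and judge whether logging alone could explain the load.
count_null_client() {
  grep -c '0-client_t: null client' "$1"
}

# Typical use on one of the affected nodes (path from the report):
# count_null_client /var/log/glusterfs/glusterd.log
```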
> and I'm unsure if it's related, but in /var/log/glusterfs/glustershd.log:
>
> [2017-09-15 00:36:21.654242] W [MSGID: 114010]
> [client-callback.c:28:client_cbk_fetchspec] 0-my_volume_name-client-0: this
> function should not be called
>
> ---
>
> Cluster configuration:
>
> Gluster 3.12
> CentOS 7.4
> Replica 3, Arbiter 1
> NFS disabled (using Kubernetes with the FUSE client)
> Each node is 8 Xeon E5-2660 with 16GB RAM virtualised on XenServer 7.2
>
> root@int-gluster-03:~ # gluster get-state
> glusterd state dumped to /var/run/gluster/glusterd_state_20170915_110532
>
> [Global]
> MYUUID: 0b42ffb2-217a-4db6-96bf-cf304a0fa1ae
> op-version: 31200
>
> [Global options]
> cluster.brick-multiplex: enable
>
> [Peers]
> Peer1.primary_hostname: int-gluster-02.fqdn.here
> Peer1.uuid: e614686d-0654-43c9-90ca-42bcbeda3255
> Peer1.state: Peer in Cluster
> Peer1.connected: Connected
> Peer1.othernames:
> Peer2.primary_hostname: int-gluster-01.fqdn.here
> Peer2.uuid: 9b0c82ef-329d-4bd5-92fc-95e2e90204a6
> Peer2.state: Peer in Cluster
> Peer2.connected: Connected
> Peer2.othernames:
>
> (Then volume options are listed)
>
> ---
>
> Volume configuration:
>
> root@int-gluster-03:~ # gluster volume info my_volume_name
>
> Volume Name: my_volume_name
> Type: Replicate
> Volume ID: 6574a963-3210-404b-97e2-bcff0fa9f4c9
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: int-gluster-01.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick2: int-gluster-02.fqdn.here:/mnt/gluster-storage/my_volume_name
> Brick3: int-gluster-03.fqdn.here:/mnt/gluster-storage/my_volume_name
> Options Reconfigured:
> performance.stat-prefetch: true
> performance.parallel-readdir: true
> performance.client-io-threads: true
> network.ping-timeout: 5
> diagnostics.client-log-level: WARNING
> diagnostics.brick-log-level: WARNING
> cluster.readdir-optimize: true
> cluster.lookup-optimize: true
> transport.address-family: inet
> nfs.disable: on
> cluster.brick-multiplex: enable
>
> --
> Sam McLeod
> @s_mcleod
> https://smcleod.net
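On the "suggestions welcomed" point about confirming what is driving the CPU: Gluster's built-in io-stats profiling can show whether one FOP dominates while CPU is high. A sketch using the volume name from the report; this is an ops fragment that requires a live cluster, so treat it as illustrative:

```shell
# Start per-brick FOP profiling, let it collect through a CPU spike,
# then dump the cumulative/interval statistics and stop profiling.
gluster volume profile my_volume_name start
sleep 60
gluster volume profile my_volume_name info
gluster volume profile my_volume_name stop
```

If the profile shows low FOP activity while CPU stays high, that points back toward the logging (or another glusterd-internal loop) rather than client workload.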
Re: [Gluster-users] Confusing lstat() performance
On 15/09/17 02:45, Sam McLeod wrote:
> Out of interest have you tried testing performance
> with performance.stat-prefetch enabled?

Not yet, because I'm still struggling to understand the current, more basic setup's performance behaviour (with it being off), but it's definitely on my list and I'll report the outcome.
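For the promised before/after comparison, a crude client-side timing harness is enough to spot large lstat() differences. A minimal sketch, assuming a Linux client with GNU coreutils; `time_lstat` and the mount path are illustrative, not from the thread:

```shell
# time_lstat <dir>: wall-clock milliseconds to stat() every direct
# entry of <dir> once; compare with stat-prefetch off vs. on.
time_lstat() {
  local start end
  start=$(date +%s%N)                        # GNU date: nanoseconds
  find "$1" -maxdepth 1 -exec stat -c '%n' {} + > /dev/null
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))
}

# Example run against a FUSE mount (mount point illustrative):
# time_lstat /mnt/my_volume_name/some_dir
```

Dropping caches between runs (or using a fresh directory) keeps the comparison honest, since a second pass over the same entries will mostly hit the client-side cache.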