Re: [Gluster-users] EC 1+2

2017-09-25 Thread Xavi Hernandez

Hi,

On 23/09/17 16:00, Gandalf Corvotempesta wrote:
Is it possible to create a dispersed volume 1+2? (Almost the same as
replica 3, the same as RAID-6)


This cannot be done with the current version of EC. It requires that the
number of data bricks be greater than the number of redundancy bricks.
So if you want redundancy 2, you need at least 3 data bricks (4 is
better for performance). This means that the minimum recommended
configuration with redundancy 2 is 4+2.
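
As a quick sketch (hostnames and brick paths below are only placeholders),
a 4+2 dispersed volume would be created with something like:

gluster volume create testvol disperse-data 4 redundancy 2 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
    server4:/bricks/b1 server5:/bricks/b1 server6:/bricks/b1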




If yes, how many servers do I have to add in the future to expand the
storage? 1 or 3?


You always need to grow using multiples of the sum of data and 
redundancy bricks. In your case this would be 3. However, as explained 
before, this configuration is not supported. You should use 4+2 to have 
2 bricks of redundancy. In this case you would need to grow in blocks of 
6 bricks.
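
Continuing the same sketch (again with placeholder names), growing such a
volume means adding another complete 4+2 set and then rebalancing:

gluster volume add-brick testvol \
    server7:/bricks/b1 server8:/bricks/b1 server9:/bricks/b1 \
    server10:/bricks/b1 server11:/bricks/b1 server12:/bricks/b1
gluster volume rebalance testvol start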


Xavi




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




Re: [Gluster-users] 0-client_t: null client [Invalid argument] & high CPU usage (Gluster 3.12)

2017-09-25 Thread Sam McLeod
The 3.12.1 packages from the CentOS Storage SIG have now been released:
http://mirror.centos.org/centos/7/storage/x86_64/gluster-3.12/ 
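
If it helps anyone, the upgrade on CentOS 7 should be roughly the following
(the release package name is my assumption based on the SIG's usual naming):

yum install centos-release-gluster312   # enables the Storage SIG 3.12 repo
yum update glusterfs\*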


Big thank you to Niels de Vos from Red Hat!

--
Sam McLeod 
@s_mcleod
https://smcleod.net

> On 25 Sep 2017, at 10:57 am, Sam McLeod  wrote:
> 
> FYI - I've been testing the Gluster 3.12.1 packages with the help of the SIG 
> maintainer and I can confirm that the logs are no longer being filled with 
> NFS or null client errors after the upgrade.
> 
> --
> Sam McLeod 
> @s_mcleod
> https://smcleod.net 
> 
>> On 18 Sep 2017, at 10:14 pm, Sam McLeod wrote:
>> 
>> Thanks Milind,
>> 
>> Yes, I’m hanging out for CentOS’s Storage / Gluster SIG to release the
>> packages for 3.12.1; I can see the packages were built a week ago, but
>> they’re still not on the repo :(
>> 
>> --
>> Sam
>> 
>> On 18 Sep 2017, at 9:57 pm, Milind Changire wrote:
>> 
>>> Sam,
>>> You might want to give glusterfs-3.12.1 a try instead.
>>> 
>>> 
>>> 
>>> On Fri, Sep 15, 2017 at 6:42 AM, Sam McLeod wrote:
>>> Howdy,
>>> 
>>> I'm setting up several gluster 3.12 clusters running on CentOS 7 and have
>>> been having issues with glusterd.log and glustershd.log both being filled
>>> with errors relating to null clients and client-callback functions.
>>> 
>>> They seem to be related to high CPU usage across the nodes, although I
>>> don't have a way of confirming that (suggestions welcomed!).
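>>>
>>> (I assume something along these lines is the way to correlate it, but I'd
>>> love better ideas; the volume name below is just a placeholder.)
>>>
>>> top -H -p "$(pidof glusterd)"                # per-thread CPU of glusterd
>>> gluster volume profile my_volume_name start
>>> gluster volume profile my_volume_name info   # per-brick FOP counts/latency
>>> gluster volume profile my_volume_name stop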
>>> 
>>> 
>>> in /var/log/glusterfs/glusterd.log:
>>> 
>>> csvc_request_init+0x7f) [0x7f382007b93f] 
>>> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] ) 
>>> 0-client_t: null client [Invalid argument]
>>> [2017-09-15 00:54:14.454022] E [client_t.c:324:gf_client_ref] 
>>> (-->/lib64/libgfrpc.so.0(rpcsvc_request_create+0xf8) [0x7f382007e7e8] 
>>> -->/lib64/libgfrpc.so.0(rpcsvc_request_init+0x7f) [0x7f382007b93f] 
>>> -->/lib64/libglusterfs.so.0(gf_client_ref+0x179) [0x7f3820315e59] ) 
>>> 0-client_t: null client [Invalid argument]
>>> 
>>> 
>>> This is repeated _thousands_ of times and is especially noisy when any
>>> node is running gluster volume set   .
>>> 
>>> and I'm unsure if it's related but in /var/log/glusterfs/glustershd.log:
>>> 
>>> [2017-09-15 00:36:21.654242] W [MSGID: 114010] 
>>> [client-callback.c:28:client_cbk_fetchspec] 0-my_volume_name-client-0: this 
>>> function should not be called
>>> 
>>> 
>>> ---
>>> 
>>> 
>>> Cluster configuration:
>>> 
>>> Gluster 3.12
>>> CentOS 7.4
>>> Replica 3, Arbiter 1
>>> NFS disabled (using Kubernetes with the FUSE client)
>>> Each node has 8 Xeon E5-2660 vCPUs and 16 GB RAM, virtualised on XenServer 7.2
>>> 
>>> 
>>> root@int-gluster-03:~  # gluster get-state
>>> glusterd state dumped to /var/run/gluster/glusterd_state_20170915_110532
>>> 
>>> [Global]
>>> MYUUID: 0b42ffb2-217a-4db6-96bf-cf304a0fa1ae
>>> op-version: 31200
>>> 
>>> [Global options]
>>> cluster.brick-multiplex: enable
>>> 
>>> [Peers]
>>> Peer1.primary_hostname: int-gluster-02.fqdn.here
>>> Peer1.uuid: e614686d-0654-43c9-90ca-42bcbeda3255
>>> Peer1.state: Peer in Cluster
>>> Peer1.connected: Connected
>>> Peer1.othernames:
>>> Peer2.primary_hostname: int-gluster-01.fqdn.here
>>> Peer2.uuid: 9b0c82ef-329d-4bd5-92fc-95e2e90204a6
>>> Peer2.state: Peer in Cluster
>>> Peer2.connected: Connected
>>> Peer2.othernames:
>>> 
>>> (Then volume options are listed)
>>> 
>>> 
>>> ---
>>> 
>>> 
>>> Volume configuration:
>>> 
>>> root@int-gluster-03:~ # gluster volume info my_volume_name
>>> 
>>> Volume Name: my_volume_name
>>> Type: Replicate
>>> Volume ID: 6574a963-3210-404b-97e2-bcff0fa9f4c9
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: int-gluster-01.fqdn.here:/mnt/gluster-storage/my_volume_name
>>> Brick2: int-gluster-02.fqdn.here:/mnt/gluster-storage/my_volume_name
>>> Brick3: int-gluster-03.fqdn.here:/mnt/gluster-storage/my_volume_name
>>> Options Reconfigured:
>>> performance.stat-prefetch: true
>>> performance.parallel-readdir: true
>>> performance.client-io-threads: true
>>> network.ping-timeout: 5
>>> diagnostics.client-log-level: WARNING
>>> diagnostics.brick-log-level: WARNING
>>> cluster.readdir-optimize: true
>>> cluster.lookup-optimize: true
>>> transport.address-family: inet
>>> nfs.disable: on
>>> cluster.brick-multiplex: enable
>>> 
>>> 
>>> --
>>> Sam McLeod
>>> @s_mcleod
>>> https://smcleod.net 
>>> 
>>> ___
>>> Gluster-users mailing list
>>> Gluster-users@gluster.org 
>>> http://lists.gluster.org/mailman/listinfo/gluster-users 
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Milind
>>> 
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org 

Re: [Gluster-users] Adding bricks to an existing installation.

2017-09-25 Thread Ludwig Gamache
Sharding is not enabled.

Ludwig

On Mon, Sep 25, 2017 at 2:34 PM,  wrote:

> Do you have sharding enabled? If yes, don't do it.
> If not, I'll let someone who knows better answer you :)
>
> On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> > All,
> >
> > We currently have a Gluster installation which is made of 2 servers. Each
> > server has 10 drives on ZFS. And I have a gluster mirror between these 2.
> >
> > The current config looks like:
> > SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
> >
> > I now need to add more space and a third server. Before I do the
> changes, I
> > want to know if this is a supported config. By adding a third server, I
> > simply want to distribute the load. I don't want to add extra redundancy.
> >
> > In the end, I want to have the following done:
> > Add a peer to the cluster
> > Add 2 bricks to the cluster (one on server A and one on SERVER C) to the
> > existing volume
> > Add 2 bricks to the cluster (one on server B and one on SERVER C) to the
> > existing volume
> > After that, I need to rebalance all the data between the bricks...
> >
> > Is this config supported? Is there something I should be careful about
> > before I do this? Should I do a rebalancing before I add the 3rd set of
> > disks?
> >
> > Regards,
> >
> >
> > Ludwig
>
> > ___
> > Gluster-users mailing list
> > Gluster-users@gluster.org
> > http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Ludwig Gamache
IT Director - Element AI
4200 St-Laurent, suite 1200
514-704-0564
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Adding bricks to an existing installation.

2017-09-25 Thread lemonnierk
Do you have sharding enabled? If yes, don't do it.
If not, I'll let someone who knows better answer you :)

On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote:
> All,
> 
> We currently have a Gluster installation which is made of 2 servers. Each
> server has 10 drives on ZFS. And I have a gluster mirror between these 2.
> 
> The current config looks like:
> SERVER A-BRICK 1 replicated to SERVER B-BRICK 1
> 
> I now need to add more space and a third server. Before I do the changes, I
> want to know if this is a supported config. By adding a third server, I
> simply want to distribute the load. I don't want to add extra redundancy.
> 
> In the end, I want to have the following done:
> Add a peer to the cluster
> Add 2 bricks to the cluster (one on server A and one on SERVER C) to the
> existing volume
> Add 2 bricks to the cluster (one on server B and one on SERVER C) to the
> existing volume
> After that, I need to rebalance all the data between the bricks...
> 
> Is this config supported? Is there something I should be careful about
> before I do this? Should I do a rebalancing before I add the 3rd set of
> disks?
> 
> Regards,
> 
> 
> Ludwig

> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users



signature.asc
Description: Digital signature
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Adding bricks to an existing installation.

2017-09-25 Thread Ludwig Gamache
All,

We currently have a Gluster installation which is made of 2 servers. Each
server has 10 drives on ZFS. And I have a gluster mirror between these 2.

The current config looks like:
SERVER A-BRICK 1 replicated to SERVER B-BRICK 1

I now need to add more space and a third server. Before I do the changes, I
want to know if this is a supported config. By adding a third server, I
simply want to distribute the load. I don't want to add extra redundancy.

In the end, I want to have the following done:
Add a peer to the cluster
Add 2 bricks to the cluster (one on server A and one on SERVER C) to the
existing volume
Add 2 bricks to the cluster (one on server B and one on SERVER C) to the
existing volume
After that, I need to rebalance all the data between the bricks...
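
Roughly, with placeholder volume name and brick paths, I expect that to
translate to:

gluster peer probe serverC
gluster volume add-brick myvol serverA:/data/brick2 serverC:/data/brick1
gluster volume add-brick myvol serverB:/data/brick2 serverC:/data/brick2
gluster volume rebalance myvol start
gluster volume rebalance myvol status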

Is this config supported? Is there something I should be careful about before
I do this? Should I do a rebalancing before I add the 3rd set of disks?

Regards,


Ludwig
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Bandwidth and latency requirements

2017-09-25 Thread Colin Coe
Hi all

I've googled but can't find an answer to my question.

I have two data centers.  Currently, I have a replica (count of 2 plus
arbiter) in one data center, but it is used by both.

I want to change this to be a distributed replica across the two data
centers.

There is a 20Mbps pipe and approx 22 ms latency. Is this sufficient?

I really don't want to do the geo-replication in its current form.

Thanks

CC
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] df command shows transport endpoint mount error on gluster client v.3.10.5 + core dump

2017-09-25 Thread Mauro Tridici
Dear Gluster Users,

I implemented a distributed disperse 6x(4+2) gluster (v.3.10.5) volume with the 
following options:

[root@s01 tier2]# gluster volume info
 
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (4 + 2) = 36
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Options Reconfigured:
features.scrub: Active
features.bitrot: on
features.inode-quota: on
features.quota: on
performance.client-io-threads: on
cluster.min-free-disk: 10
cluster.quorum-type: auto
transport.address-family: inet
nfs.disable: on
server.event-threads: 4
client.event-threads: 4
cluster.lookup-optimize: on
performance.readdir-ahead: on
performance.parallel-readdir: off
cluster.readdir-optimize: on
features.cache-invalidation: on
features.cache-invalidation-timeout: 600
performance.stat-prefetch: on
performance.cache-invalidation: on
performance.md-cache-timeout: 600
network.inode-lru-limit: 5
performance.io-cache: off
disperse.cpu-extensions: auto
performance.io-thread-count: 16
features.quota-deem-statfs: on
features.default-soft-limit: 90
cluster.server-quorum-type: server
cluster.brick-multiplex: on
cluster.server-quorum-ratio: 51%

I also started a long write test (about 69 TB to be written) from different
gluster clients.
One of these clients returned the error "Transport endpoint is not connected"
during the rsync copy process.
A core dump file was generated when the issue occurred; this is the core dump content:

GNU gdb (GDB) Red Hat Enterprise Linux (7.2-50.el6)
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Missing separate debuginfo for the main executable file
Try: yum --disablerepo='*' --enablerepo='*-debuginfo' install 
/usr/lib/debug/.build-id/fb/4d988b681faa09bb74becacd7a24f4186e8185
[New Thread 15802]
[New Thread 15804]
[New Thread 15805]
[New Thread 6856]
[New Thread 30432]
[New Thread 30486]
[New Thread 1619]
[New Thread 15806]
[New Thread 15810]
[New Thread 30412]
[New Thread 15809]
[New Thread 15799]
[New Thread 30487]
[New Thread 15795]
[New Thread 15797]
[New Thread 15798]
[New Thread 15800]
[New Thread 15801]
Core was generated by `/usr/sbin/glusterfs --volfile-server=s01-stg 
--volfile-id=/tier2 /tier2'.
Program terminated with signal 6, Aborted.
#0  0x0032a74328a5 in ?? ()
"/core.15795" is a core file.
Please specify an executable to debug.
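
(I assume that to get a usable backtrace the core has to be loaded together
with the glusterfs binary and its debuginfo, along these lines:)

debuginfo-install glusterfs glusterfs-fuse    # from yum-utils
gdb /usr/sbin/glusterfs /core.15795
(gdb) thread apply all bt full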

No problems were detected on the gluster servers: no bricks down, and so on...

Has anyone experienced the same problem?
If yes, how did you resolve it?

Below you can find the content of the client's "/var/log/messages" and
"/var/log/glusterfs" log files.
Thank you in advance.
Mauro Tridici

--

In /var/log/syslog-ng/messages on the client (OS: CentOS 6.2, 16 cores, 64 GB
RAM, gluster client v.3.10.5):

Sep 23 10:42:43 login2 tier2[15795]: pending frames:
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(1) op(WRITE)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0)
Sep 23 10:42:43 login2 tier2[15795]: frame : type(0) op(0)
Sep 23

Re: [Gluster-users] how to verify bitrot signed file manually?

2017-09-25 Thread Amudhan P
resending mail.

On Fri, Sep 22, 2017 at 5:30 PM, Amudhan P  wrote:

> OK, from the bitrot code I figured out that gluster uses the sha256 hashing algorithm.
>
>
> Now coming to the problem: during a scrub run in my cluster, some of my files
> were marked as bad on a few of the nodes.
> I just wanted to confirm the bad files, so I used the "sha256sum" tool in
> Linux to manually compute the file hashes.
>
> here is the result.
>
> file-1, file-2 marked as bad by scrub and file-3 is healthy.
>
> file-1: the sha256 and bitrot signature values match, but it has still been
> marked as bad.
>
> file-2: the sha256 and bitrot signature values don't match; it could be a
> victim of bitrot or a bit flip. The file is still readable without any issue
> and no errors were found on the drive.
>
> file-3: the sha256 and bitrot signature match, and the file is healthy.
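>
> (For reference, this is roughly how I compared them; the file name is a
> placeholder, and I'm assuming from the values below that the signature xattr
> is just a short 0x010200 header followed by the sha256 digest.)
>
> f=file-1    # placeholder file name
> sig=$(getfattr -n trusted.bit-rot.signature -e hex "$f" | awk -F= '/signature/ {print $2}')
> echo "xattr digest: ${sig#0x010200}"
> echo "sha256sum   : $(sha256sum "$f" | awk '{print $1}')"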
>
>
> file-1 output from
>
> "sha256sum" = "71eada9352b1352aaef0f806d3d561
> 768ce2df905ded1668f665e06eca2d0bd4"
>
>
> "getfattr -m. -e hex -d "
> # file: file-1
> trusted.bit-rot.bad-file=0x3100
> trusted.bit-rot.signature=0x01020071eada9352b1352aaef0f806d3d561768ce2df905ded1668f665e06eca2d0bd4
> trusted.bit-rot.version=0x020058e4f3b40006793d
> trusted.ec.config=0x080a02000200
> trusted.ec.dirty=0x
> trusted.ec.size=0x000718996701
> trusted.ec.version=0x00038c4c00038c4d
> trusted.gfid=0xf078a24134fe4f9bb953eca8c28dea9a
>
> output scrub log:
> [2017-09-02 13:02:20.311160] A [MSGID: 118023] 
> [bit-rot-scrub.c:244:bitd_compare_ckum]
> 0-qubevaultdr-bit-rot-0: CORRUPTION DETECTED: Object /file-1 {Brick:
> /media/disk16/brick16 | GFID: f078a241-34fe-4f9b-b953-eca8c28dea9a}
> [2017-09-02 13:02:20.311579] A [MSGID: 118024] 
> [bit-rot-scrub.c:264:bitd_compare_ckum]
> 0-qubevaultdr-bit-rot-0: Marking /file-1 [GFID: 
> f078a241-34fe-4f9b-b953-eca8c28dea9a
> | Brick: /media/disk16/brick16] as corrupted..
>
> file-2 output from
>
> "sha256sum" = "c41ef9c81faed4f3e6010ea67984c3
> cfefd842f98ee342939151f9250972dcda"
>
>
> "getfattr -m. -e hex -d "
> # file: file-2
> trusted.bit-rot.bad-file=0x3100
> trusted.bit-rot.signature=0x0102009162cb17d4f0bee676fcb7830c5286d05b8e8940d14f3d117cb90b7b1defc129
> trusted.bit-rot.version=0x020058e4f3b400019bb2
> trusted.ec.config=0x080a02000200
> trusted.ec.dirty=0x
> trusted.ec.size=0x403433f6
> trusted.ec.version=0x201a201b
> trusted.gfid=0xa50012b0a632477c99232313928d239a
>
> output scrub log:
> [2017-09-02 05:18:14.003156] A [MSGID: 118023] 
> [bit-rot-scrub.c:244:bitd_compare_ckum]
> 0-qubevaultdr-bit-rot-0: CORRUPTION DETECTED: Object /file-2 {Brick:
> /media/disk13/brick13 | GFID: a50012b0-a632-477c-9923-2313928d239a}
> [2017-09-02 05:18:14.006629] A [MSGID: 118024] 
> [bit-rot-scrub.c:264:bitd_compare_ckum]
> 0-qubevaultdr-bit-rot-0: Marking /file-2 [GFID: 
> a50012b0-a632-477c-9923-2313928d239a
> | Brick: /media/disk13/brick13] as corrupted..
>
>
> file-3 output from
>
> "sha256sum" = "a590735b3c8936cc7ca9835128a19c
> 38a3f79c8fd53fddc031a9349b7e273f27"
>
>
> "getfattr -m. -e hex -d "
> # file: file-3
> trusted.bit-rot.signature=0x010200a590735b3c8936cc7ca9835128a19c38a3f79c8fd53fddc031a9349b7e273f27
> trusted.bit-rot.version=0x020058e4f3b400019bb2
> trusted.ec.config=0x080a02000200
> trusted.ec.dirty=0x
> trusted.ec.size=0x3530fc96
> trusted.ec.version=0x1a981a99
> trusted.gfid=0x10d8920e42cd42cf9448b8bf3941c192
>
>
>
> Most of the bitrot bad files are on the set of new nodes, and the data was
> uploaded using gluster 3.10.1. There are no drive issues or any kind of
> error messages in the logs.
>
> What could have gone wrong?
>
> regards
> Amudhan
>
> On Thu, Sep 21, 2017 at 1:23 PM, Amudhan P  wrote:
>
>> Hi,
>>
>> I have a file in my brick which was signed by bitrot, and later, when
>> running a scrub, it was marked as bad.
>>
>> Now, I want to verify the file again manually, just to clarify my doubt:
>> 
>> How can I do this?
>>
>>
>> regards
>> Amudhan
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] [Gluster-devel] [Gluster-infra] lists.gluster.org issues this weekend

2017-09-25 Thread Michael Scherer
On Tuesday 19 September 2017 at 17:33 +0100, Michael Scherer wrote:
> On Saturday 16 September 2017 at 20:48 +0530, Nigel Babu wrote:
> > Hello folks,
> > 
> > We have discovered that for the last few weeks our mailman server was used
> > for a spam attack. The attacker would make use of the + feature offered by
> > gmail and hotmail. If you send an email to exam...@hotmail.com,
> > example+...@hotmail.com, or example+...@hotmail.com, it goes to the same
> > inbox. We were constantly hit with requests to subscribe to a few inboxes.
> > These requests overloaded our mail server so much that it gave up. We
> > detected this failure because a postmortem email to
> > gluster-in...@gluster.org bounced. Any emails sent to our mailman server
> > may have been on hold for the last 24 hours or so. They should be processed
> > now as your email provider re-attempts.
> > 
> > For the moment, we've banned subscribing with an email address with a + in
> > the name. If you are already subscribed to the lists with a + in your email
> > address, you will continue to be able to use the lists.
> > 
> > We're looking at banning the spam IP addresses from being able to hit the
> > web interface at all. When we have a working alternative, we will look at
> > removing the current ban on using + in addresses.
> 
> So we have an alternative in place: I pushed a blacklist using mod_security
> and a few DNS blacklists:
> https://github.com/gluster/gluster.org_ansible_configuration/commit/2f4c1b8feeae16e1d0b7d6073822a6786ed21dde
> 
> 
> 
> 
> > Apologies for the outage and a big shout out to Michael for taking time
> > out of his weekend to debug and fix the issue.
> 
> Well, you can thank the airport in Prague for being less interesting
> than a spammer attacking us.

So, it turned out there was a second problem on the lists server.

Since the 2017 security incident, we have had a remote syslog server
installed to store all logs. However, this log server's disk became full
(I still need to investigate why, but I suspect "lack of proper log
rotation" somewhere down the line).

In turn, the full disk caused slowdowns on some syslog clients, although for
now we have only seen an issue with postfix on supercolony.gluster.org.

As an emergency measure, I removed the log export for supercolony, and I
will likely move the log server to the community cage and fix the setup
once I am back from PTO next week.
-- 
Michael Scherer
Sysadmin, Community Infrastructure and Platform, OSAS



signature.asc
Description: This is a digitally signed message part
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users