Re: [Gluster-users] Confusion supreme

2024-06-26 Thread Zenon Panoussis
I should add that in /var/lib/glusterd/vols/gv0/gv0-shd.vol, and in all other configs in /var/lib/glusterd/ on all three machines, the nodes are consistently named client-2: zephyrosaurus, client-3: alvarezsaurus, client-4: nanosaurus. This is normal. It was the second time that a brick was
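For reference, a client entry in such a volfile looks roughly like this (illustrative sketch: the hostname is from the post, the brick path and remaining options are placeholders):

    volume gv0-client-2
        type protocol/client
        option remote-host zephyrosaurus
        option remote-subvolume /path/to/brick
        option transport-type socket
    end-volume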

[Gluster-users] Confusion supreme

2024-06-26 Thread Zenon Panoussis
Hello all I have a mail store on a volume replica 3 with no arbiter. A while ago the disk of one of the bricks failed and I was several days late to notice it. When I did, I removed that brick from the volume, replaced the failed disk, updated the OS on that machine from el8 to el9 and gluster

Re: [Gluster-users] No healing, errno 22

2022-03-04 Thread Zenon Panoussis
I am continuing on a thread from March last year; please see the background in those previous postings. I am having the same problem again, but now I found the cause and the way to fix it. It looks to me like a bug, though I can't be sure. I have a live mail spool on a replica 3 volume. It has

Re: [Gluster-users] Replica bricks fungible?

2021-06-14 Thread Zenon Panoussis
> So you copy from the brick to the FUSE via rsync , > but what is the idea behind that move ? No, not from the brick. I copy from my external non-gluster datasource to the gluster volume via the fuse mount of its one and only brick. This is how I added the new data to the volume. I can list
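A minimal sketch of that workflow, assuming a hypothetical mount point /mnt/gv0 and data source path; the point is that the writes go through the glusterfs FUSE mount, not into the brick directory:

    mount -t glusterfs node01:/gv0 /mnt/gv0
    rsync -aH --progress /external/datasource/ /mnt/gv0/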

Re: [Gluster-users] Replica bricks fungible?

2021-06-13 Thread Zenon Panoussis
> Based on your output it seems that add-brick (with force) > did not destroy the already existing data, right ? Correct, it did not. > Have you checked the data integrity after the add-brick ? I only checked cursorily with 'ls thisdir' and 'ls thatdir' to see if things looked OK and they

Re: [Gluster-users] Replica bricks fungible?

2021-06-13 Thread Zenon Panoussis
> Have you documented the procedure you followed? There was a serious error in my previous reply to you: rsync -vvaz --progress node01:/gfsroot/gv0 /gfsroot/ That should have been 'rsync -vvazH' and the "H" is very important. Gluster uses hard links to map file UUIDs to file names, but
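So the corrected form of the command from the post would be (the -H flag preserves hard links, which Gluster relies on under .glusterfs):

    rsync -vvazH --progress node01:/gfsroot/gv0 /gfsroot/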

Re: [Gluster-users] Replica bricks fungible?

2021-06-09 Thread Zenon Panoussis
> it will require quite a lot of time to *rebalance*... (my emphasis on "rebalance"). Just to avoid any misunderstandings, I am talking about pure replica. No distributed replica and no arbitrated replica. I guess that moving bricks would also work on a distributed replica within, but not

Re: [Gluster-users] Freenode takeover and GlusterFS IRC channels

2021-06-07 Thread Zenon Panoussis
I don't use IRC and this has no impact on me one way or another, but having now read a number of resignation letters, I think that this is an issue of principle: the FOSS community should stand in solidarity with the FOSS community and not support those who destroy it by appropriating the free

Re: [Gluster-users] Replica bricks fungible?

2021-06-05 Thread Zenon Panoussis
> Are all replica (non-arbiter) bricks identical to each > other? If not, what do they differ in? > What I'm really asking is: can I physically move a brick > from one server to another such as … I can now answer my own question: yes, replica bricks are identical and can be physically moved or

[Gluster-users] Messy rpm upgrade

2021-05-18 Thread Zenon Panoussis
This morning I found my gluster volume broken. Whatever 'gluster volume x gv0' commands I tried, they timed out. The logs were not very helpful. Restarting gluster on all nodes did not help. Actually nothing helped, and I didn't even know what to look for where. The volume spans over three

Re: [Gluster-users] Replica bricks fungible?

2021-04-23 Thread Zenon Panoussis
>> Are all replica (non-arbiter) bricks identical to each >> other? If not, what do they differ in? > No. At least meta-metadata is different, IIUC. Hmm, but at first sight this shouldn't be a problem as long as (a) the "before" and the "after" configuration contain the exact same bricks,

[Gluster-users] Replica bricks fungible?

2021-04-23 Thread Zenon Panoussis
Are all replica (non-arbiter) bricks identical to each other? If not, what do they differ in? What I'm really asking is: can I physically move a brick from one server to another, such as (before -> after): node1:brick1 -> node1:brick1, node2:brick2 -> node2:…, node3:brick3 -> …

[Gluster-users] Append to file

2021-04-14 Thread Zenon Panoussis
Does anyone have experience of constant and rapid appending to glusterfs files? Like, say, having active logfiles on glusterfs? Or worse, more than one process (on more than one client) appending to the same file, e.g. to a syslog? Does it work at all? Does it slow down the source processes? Does
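A crude way to probe this, assuming a hypothetical FUSE mount at /mnt/gv0, is to run the same append loop on two clients at once and watch for interleaving, lost lines or slowdown:

    # run on each client against the same file on the volume
    for i in $(seq 1 10000); do
        echo "$(hostname) $i $(date +%s.%N)" >> /mnt/gv0/append-test.log
    done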

[Gluster-users] RBL mafia used by gluster-users

2021-04-09 Thread Zenon Panoussis
I tried to post here and received this bounce: Your message to the following recipients cannot be delivered: : mx2.gluster.org [8.43.85.176]: >>> RCPT TO: <<< 554 5.7.1 Service unavailable; Client host [172.104.248.218] blocked using dnsbl-3.uceprotect.net; Your ISP LINODE-AP

Re: [Gluster-users] Volume not healing

2021-03-20 Thread Zenon Panoussis
> Is it possible to speed it up? Nodes are nearly idle... When you have 0 files that need healing, enable granular entry heal with 'gluster volume heal BigVol granular-entry-heal enable'. I have tested with and without granular and, empirically, without any hard statistics, I find granular considerably faster.
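As a sketch of the full sequence (volume name taken from the post; the pre-check uses 'info summary', which recent releases accept):

    gluster volume heal BigVol info summary                # confirm 0 entries pending
    gluster volume heal BigVol granular-entry-heal enable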

Re: [Gluster-users] No healing, errno 22

2021-03-17 Thread Zenon Panoussis
>> replicate-0: performing entry selfheal on >> 94aefa13-9828-49e5-9bac-6f70453c100f > Does this gfid correspond to the same directory path as last time? No, it's one of the "two unrelated directories" that I mentioned in some previous post. Both directories exist on the volume mount and

Re: [Gluster-users] No healing, errno 22

2021-03-16 Thread Zenon Panoussis
> Yes if the dataset is small, you can try rm -rf of the dir > from the mount (assuming no other application is accessing > them on the volume) launch heal once so that the heal info > becomes zero and then copy it over again . I did approximately so; the rm -rf took its sweet time and the
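The procedure described amounts to something like this, assuming a hypothetical FUSE mount /mnt/gv0 and a copy of the data elsewhere:

    rm -rf /mnt/gv0/problem-dir      # on the volume mount, never directly on a brick
    gluster volume heal gv0          # launch a heal so heal info drops to zero
    rsync -aH /backup/problem-dir /mnt/gv0/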

Re: [Gluster-users] No healing, errno 22

2021-03-15 Thread Zenon Panoussis
> Hmm, then the client4_0_mkdir_cbk  failures in the glustershd.log > must be for a parallel heal of a directory which contains subdirs. Running volume heal info gives the following results: node01: 3 gfids and one named directory, namely Maildir/.Sent/cur. Running gfid2dirname.sh on the 3
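gfid2dirname.sh is a separate helper script; without it, a gfid can be resolved on a brick by following its entry under .glusterfs (sketch; the brick path is hypothetical and the gfid is the one quoted earlier in the thread):

    BRICK=/gfsroot/gv0
    GFID=94aefa13-9828-49e5-9bac-6f70453c100f
    G="$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID"
    if [ -L "$G" ]; then
        readlink "$G"     # directories: a symlink to <parent-gfid>/<dirname>
    else
        find "$BRICK" -samefile "$G" ! -path "*/.glusterfs/*"   # files: a hard link
    fi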

Re: [Gluster-users] No healing, errno 22

2021-03-15 Thread Zenon Panoussis
> -Was this an upgraded setup or a fresh v9.0 install? It was a fresh install of the 8.3 CentOS rpms, later upgraded to 9.0. I enabled granular after the upgrade. > - When there are entries yet to be healed, the CLI should > have prevented you toggling this option - was that not the > case?

[Gluster-users] No healing, errno 22

2021-03-15 Thread Zenon Panoussis
Does anyone know what healing error 22 "invalid argument" is and how to fix it, or at least how to troubleshoot it?
    while true; do date; gluster volume heal gv0 statistics heal-count; echo -e "--\n"; sleep 297; done
Fri Mar 12 14:58:36 CET 2021
Gathering count of entries to be
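One generic first step when heals keep failing (not specific to errno 22; the path below stands in for an affected entry on a brick) is to compare its extended attributes across the bricks:

    getfattr -d -m . -e hex /gfsroot/gv0/path/to/entry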

[Gluster-users] Global threading

2021-03-05 Thread Zenon Panoussis
Some time ago I created a replica 3 volume using gluster 8.3 with the following topology for the time being: server1/brick1 and server2/brick2 on one side, linked to server3/brick3 across an asymmetric ADSL connection (10 Mbit down / 1 Mbit up)

[Gluster-users] cluster.readdir-optimize

2021-02-17 Thread Zenon Panoussis
I am trying to understand how cluster.readdir-optimize works on a full replica N volume. https://lists.gluster.org/pipermail/gluster-devel/2016-November/051417.html suggests that this setting can be useful on distributed volumes. On full replica volumes, every brick has 100% of the information
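For reference, the option is toggled per volume; a sketch using the gv0 volume name that appears elsewhere in these threads:

    gluster volume set gv0 cluster.readdir-optimize on
    gluster volume get gv0 cluster.readdir-optimize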

Re: [Gluster-users] Docs (was: Replication logic)

2021-01-06 Thread Zenon Panoussis
> About the docs... it's in github You mean https://github.com/gluster/glusterdocs/tree/main/docs/Administrator-Guide ? > and if you got some time to update it - all PRs are welcome. I can make some time, it's only fair. But since I'm new to gluster and I barely understand its internals yet,

Re: [Gluster-users] Replication logic

2021-01-05 Thread Zenon Panoussis
>> https://gluster.readthedocs.io/en/latest/Administrator-Guide/Managing-Volumes/#triggering-self-heal-on-replicate >> says "NUFA should be enabled before creating any data in the >> volume". What happens if one tries to enable it with data in >> the volume? Will it refuse, or will it corrupt

[Gluster-users] Documentation volume heal info

2021-01-04 Thread Zenon Panoussis
I'm not sure whether here is the right place to report this, but it won't hurt. https://gluster.readthedocs.io/en/latest/Administrator-Guide/Managing-Volumes/ references gluster volume heal info healed and gluster volume heal info failed At least in version 8.3 both these arguments to
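The documented invocations in question, plus the summary form that newer releases do accept (whether that is an adequate replacement is another matter):

    gluster volume heal gv0 info healed     # as documented
    gluster volume heal gv0 info failed     # as documented
    gluster volume heal gv0 info summary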

Re: [Gluster-users] Replication logic

2021-01-02 Thread Zenon Panoussis
>> Just take the slow brick offline during the initial sync >> and then bring it online. The heal will go in background, >> while the volume stays operational. > Yes, but the heal will then take three weeks. I meant this as an obvious exaggeration, but it seems it was not. I removed the

Re: [Gluster-users] Replication logic

2020-12-28 Thread Zenon Panoussis
> And you always got the option to reduce the quorum statically to "1" This is a very interesting tidbit of information. I was wondering if there was some way to preload data on a brick, and I think you might have just given me one. I have a volume of three peers, one brick each. Two peers
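A sketch of what statically reducing the client quorum would look like, using the standard AFR quorum options (to be reverted once the preloaded brick has been brought back in sync):

    gluster volume set gv0 cluster.quorum-type fixed
    gluster volume set gv0 cluster.quorum-count 1
    # ... preload / let the other bricks catch up ...
    gluster volume reset gv0 cluster.quorum-count
    gluster volume reset gv0 cluster.quorum-type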

Re: [Gluster-users] Replication logic

2020-12-27 Thread Zenon Panoussis
> For such a project, I would simply configure the SMTP server to do > protocol-specific replication and use a low-TTL DNS name to publish > the IMAP/Web frontends. Either you know something about mail servers that I would love to know myself, or else this idea won't work. That's because even

[Gluster-users] Replication logic

2020-12-26 Thread Zenon Panoussis
Hello all I'm new to gluster and to this list too and, as is to be expected, I come with questions. I have set up a replica 3 arbiter 1 volume. Is there a way to turn the arbiter into a full replica without breaking the volume and losing the metadata that is already on the arbiter? Given a
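For context, the commonly suggested route is to drop the arbiter brick and add a full data brick in its place (a sketch with hypothetical brick paths); it replaces the arbiter rather than converting it, so its existing metadata is not reused:

    gluster volume remove-brick gv0 replica 2 node3:/bricks/arbiter force
    gluster volume add-brick gv0 replica 3 node3:/bricks/data3
    gluster volume heal gv0 full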