On an rpm-based system, the following can give you an overview of which
directories are being used by gluster:

rpm -ql $(rpm -qa | grep gluster)
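
To reduce that to a unique list of directories, something along these lines
should work (a rough sketch, not tested here):

  rpm -qa | grep -i gluster | xargs -r rpm -ql | xargs -r -n1 dirname | sort -u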

Sadly, I have no cluster nearby to check this myself.

Best Regards,
Strahil Nikolov

On Saturday, 15 May 2021 at 10:57:40 GMT+3, Stefan Solbrig
<stefan.solb...@ur.de> wrote:

Hi Strahil,

It's a distributed-only volume (no replication), so healing or restoring from
backup is not an option (it's a really big FS).  The question is rather whether
/etc/glusterfs and /var/lib/glusterd are the only directories that are relevant,
or whether there are other directories.
(I know that other [commercial] distributions have longer support, but still,
eventually one will have to upgrade the server OS or replace a server.)

best wishes,
Stefan


-- 
Dr. Stefan Solbrig
Universität Regensburg, Fakultät für Physik,
93040 Regensburg, Germany
Tel +49-941-943-2097



> On 13.05.2021 at 19:27, Strahil Nikolov <hunter86...@yahoo.com> wrote:
> 
> Hi Stefan,
> 
> add-brick requires a filesystem without any gluster extended attributes, which
> means that you start "fresh". You first have to remove the brick (to shrink the
> volume), detach the peer, and after the reinstall add the peer back and expand
> the volume via add-brick (don't forget to recreate the FS).
> 
> If you have a lot of data, the restore from backup approach should save you a 
> lot of healing, but you need to test it on a test setup (just to be on the 
> safe side).
> 
> You didn't mention the volume type.
> In distributed volumes, I would go with remove-brick (start/commit), detach,
> peer probe, add-brick.
> For replica volumes, you can try the backup/restore procedure.
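> 
> For the distributed case, the sequence could look roughly like this (the volume
> name, hostname and brick paths below are just placeholders for your setup):
> 
>   gluster volume remove-brick myvol server2:/bricks/brick1 start
>   gluster volume remove-brick myvol server2:/bricks/brick1 status   # repeat until migration completes
>   gluster volume remove-brick myvol server2:/bricks/brick1 commit
>   gluster peer detach server2
>   # reinstall the OS, then recreate and mount a fresh brick filesystem, e.g.
>   mkfs.xfs -f -i size=512 /dev/vg0/brick1
>   gluster peer probe server2
>   gluster volume add-brick myvol server2:/bricks/brick1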
> 
> Best Regards,
> Strahil Nikolov
>  
> 
>>  
>>  
>> On Wed, May 12, 2021 at 13:03, Stefan Solbrig
>> <stefan.solb...@ur.de> wrote:
>> 
>> 
>>  
>> Hi Strahil,
>> 
>> Thank you for the quick answer!  Sorry, I have to ask again: as far as I can
>> see, Gluster keeps all information about peers and bricks in
>> /var/lib/glusterd.  So if I migrate to a new OS, it seems that I have to
>> restore that data. Or would you rather suggest re-generating it by repeating
>> all the "peer probe ..." and "volume add-brick ..." commands?
>> 
>> best wishes,
>> Stefan
>> 
>>  
>> -- 
>> Dr. Stefan Solbrig
>> Universität Regensburg, Fakultät für Physik,
>> 93040 Regensburg, Germany
>> Tel +49-941-943-2097
>> 
>> 
>> 
>>> On 11.05.2021 at 13:46, Strahil Nikolov <hunter86...@yahoo.com> wrote:
>>> 
>>> Hi Stefan,
>>> 
>>> 
>>> I would back up Gluster's dir in /etc.
>>> You don't need to restore any configuration files after the update, but
>>> it's good to have them backed up.
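>>> 
>>> On most installs that directory is /etc/glusterfs, so a minimal backup sketch
>>> (paths and file name here are just examples) could be:
>>> 
>>>   tar czf /root/gluster-etc-backup.tar.gz /etc/glusterfs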
>>> 
>>> Best Regards,
>>> Strahil Nikolov
>>> 
>>> 
>>> 
>>>>  
>>>>  
>>>> On Tue, May 11, 2021 at 10:11, Stefan Solbrig
>>>> <stefan.solb...@ur.de> wrote:
>>>> 
>>>> 
>>>>  
>>>> Dear all,
>>>> 
>>>> I was wondering what the preferred way is to upgrade the server OS (not
>>>> glusterd) for a GlusterFS installation.
>>>> I'm running a distributed-only system (no replication) on CentOS 7,
>>>> planning an upgrade to CentOS 8 Stream.
>>>> 
>>>> I suppose a possible way is like this (a rough command sketch follows below):
>>>> 
>>>> * unmount the file system on all clients
>>>> * stop the cluster
>>>> * back up the data in /var/lib/glusterd
>>>> * upgrade all servers
>>>> * restore the data in /var/lib/glusterd
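>>>> 
>>>> A command-level sketch of those backup/restore steps (service name and paths
>>>> assumed, not tested):
>>>> 
>>>>   systemctl stop glusterd        # on every server, after clients unmount
>>>>   tar czf /root/glusterd-backup.tar.gz /var/lib/glusterd
>>>>   # ... upgrade the OS and reinstall the gluster packages ...
>>>>   tar xzf /root/glusterd-backup.tar.gz -C /
>>>>   systemctl start glusterd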
>>>> 
>>>> Could you please advise me:
>>>> - are the files in /var/lib/glusterd all that is needed?
>>>> - or can I regenerate them from other data?
>>>> 
>>>> best wishes,
>>>> Stefan
>>>> 
>>>> 
>> 
>> 
>> 

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
