Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-03-01 Thread Strahil Nikolov
Basically, the LVM operations on the iSCSI target are done online. Then, on the 
client, rescan the LUN: `iscsiadm -m node --targetname target_name -R`, and 
just extend the FS. Of course, test the procedure before doing it in production.
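A rough sketch of that online procedure (the size, VG/LV name, and filesystem type below are placeholders, not taken from the thread; every step is only echoed so it can be reviewed before running for real):

```shell
#!/bin/sh
# Dry-run sketch of the online brick extend. Replace the placeholder
# names and swap run() for direct execution to actually apply it.
run() { DONE="$DONE $1"; echo "+ $*"; }

# 1. On the SAN side: grow the LV backing the iSCSI LUN (stays online).
run lvextend -L +500G /dev/san_vg/brick_lv

# 2. On the Gluster host: rescan the LUN so the kernel sees the new size.
run iscsiadm -m node --targetname target_name -R

# 3. Grow the filesystem online (xfs_growfs for XFS; resize2fs for ext4).
run xfs_growfs /opt/tier1data/brick
```

The brick process keeps serving throughout; only the device and filesystem grow underneath it.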
Why do you use GlusterFS on iSCSI? You can have a shared file system on the 
same LUN, mounted on multiple nodes.
Gluster is supposed to be used with local disks in order to improve resilience 
and scale to massive sizes.
Best Regards,
Strahil Nikolov

 
 
On Mon, Feb 26, 2024 at 11:37, Anant Saraswat wrote:

Hi Strahil,
In our setup, the Gluster brick comes from an iSCSI SAN storage and is then 
used as a brick on the Gluster server. To extend the brick, we stop the Gluster 
server, extend the logical volume (LV) on the SAN server, resize it on the 
host, mount the brick with the extended size, and finally start the Gluster 
server.
Please let me know if this process can be optimized; I will be happy to improve it.
Many thanks,
Anant
From: Strahil Nikolov 
Sent: 24 February 2024 12:33 PM
To: Anant Saraswat 
Cc: gluster-users@gluster.org 
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

EXTERNAL: Do not click links or open attachments if you do not recognize the sender.

Hi Anant,

why would you need to shutdown a brick to expand it ? This is an online 
operation.

Best Regards,
Strahil Nikolov


DISCLAIMER: This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you have received this email in error, please notify the sender. 
This message contains confidential information and is intended only for the 
individual named. If you are not the named addressee, you should not 
disseminate, distribute or copy this email. Please notify the sender 
immediately by email if you have received this email by mistake and delete this 
email from your system. 

If you are not the intended recipient, you are notified that disclosing, 
copying, distributing or taking any action in reliance on the contents of this 
information is strictly prohibited. Thanks for your cooperation.
  




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-26 Thread Anant Saraswat
Hi Strahil,

In our setup, the Gluster brick comes from an iSCSI SAN storage and is then 
used as a brick on the Gluster server. To extend the brick, we stop the Gluster 
server, extend the logical volume (LV) on the SAN server, resize it on the 
host, mount the brick with the extended size, and finally start the Gluster 
server.

Please let me know if this process can be optimized, I will be happy to do so.

Many thanks,
Anant


From: Strahil Nikolov 
Sent: 24 February 2024 12:33 PM
To: Anant Saraswat 
Cc: gluster-users@gluster.org 
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

Hi Anant,

why would you need to shutdown a brick to expand it ? This is an online 
operation.

Best Regards,
Strahil Nikolov



Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-24 Thread Strahil Nikolov
Hi Anant,

why would you need to shutdown a brick to expand it ? This is an online 
operation.

Best Regards,
Strahil Nikolov






Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-19 Thread Aravinda
You can comment out lines 178 and 186 in the script until that option is available.


https://github.com/gluster/glusterfs/blob/devel/extras/stop-all-gluster-processes.sh
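One hedged way to do that without hand-editing (the line numbers 178 and 186 come from the message above and may drift as the script changes, so check them against your copy first). The snippet generates a stand-in file when the real script is absent, purely so it can be tried safely:

```shell
#!/bin/sh
# Comment out given line numbers in a copy of the script, leaving the
# original untouched. A stand-in file is generated here for illustration.
src=stop-all-gluster-processes.sh
[ -f "$src" ] || printf 'line %s\n' $(seq 200) > "$src"

# Prefix '#' on the client-stopping lines (178 and 186 at time of writing).
sed -e '178s/^/# /' -e '186s/^/# /' "$src" > "$src.noclients"
grep -n '^# ' "$src.noclients"   # review exactly what was disabled
```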



--
Aravinda

Kadalu Technologies







On Fri, 16 Feb 2024 18:32:19 +0530, Anant Saraswat wrote:



Okay, I understand. Yes, it would be beneficial to include an option for 
skipping the client processes. This way, we could utilize the 
'stop-all-gluster-processes.sh' script with that option to stop the gluster 
server process while retaining the fuse mounts.






From: Aravinda
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org; Strahil Nikolov
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

No. If the script is used to update the GlusterFS packages on the node, then we 
need to stop the client processes as well (the fuse client is the `glusterfs` 
process; see `ps ax | grep glusterfs`).



The default behaviour can't be changed, but the script can be enhanced by 
adding a new option `--skip-clients` so that it can skip stopping the client 
processes.
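The option did not exist at the time of writing; a minimal sketch of how such a flag could gate the client-stopping step (the structure and messages are made up for illustration, not taken from the real script):

```shell
#!/bin/sh
# Hypothetical --skip-clients handling for stop-all-gluster-processes.sh.
skip_clients=no
for arg in "$@"; do
    case "$arg" in
        --skip-clients) skip_clients=yes ;;
    esac
done

# Server-side daemons (glusterd, glusterfsd, ...) would be stopped here
# unconditionally.

if [ "$skip_clients" = yes ]; then
    echo "leaving glusterfs fuse clients (and their mounts) alone"
else
    # The real script kills 'glusterfs' client processes, which also
    # drops any fuse mounts on the host.
    echo "would stop glusterfs fuse clients"
fi
```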



--

Aravinda

Kadalu Technologies








On Fri, 16 Feb 2024 16:15:22 +0530, Anant Saraswat wrote:



Hello Everyone,



We are mounting this external Gluster volume (dc.local:/docker_config) for 
docker configuration on one of the Gluster servers. When I ran the 
stop-all-gluster-processes.sh script, I wanted to stop all gluster 
server-related processes on the server, but not to unmount the external gluster 
volume mounted on the server. However, running stop-all-gluster-processes.sh 
unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data    6.1T  4.7T  1.4T  78%  /opt/tier1data/brick
dc.local:/docker_config  100G   81G   19G  82%  /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount?



Thanks,

Anant


From: Gluster-users on behalf of Strahil Nikolov
Sent: 09 February 2024 5:23 AM
To: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

I think the service that shuts down the bricks on EL systems is something like 
this - right now I don't have access to my systems to check, but you can 
extract the RPMs and see it:



https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4



Best Regards,

Strahil Nikolov
 

On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts wrote:


 
 
 




Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-18 Thread Anant Saraswat
This is a different scenario, in which I was simply attempting to stop the 
Gluster server to expand the brick's backend SAN LV, without needing to 
restart the entire physical server.


From: Strahil Nikolov
Sent: 18 February 2024 1:43 PM
To: Aravinda; Anant Saraswat
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

Well,

you prepare the host for shutdown, right? So why don't you set up systemd to 
start the container, and shut it down before the bricks?

Best Regards,
Strahil Nikolov







Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-18 Thread Strahil Nikolov
Well,

you prepare the host for shutdown, right? So why don't you set up systemd to 
start the container, and shut it down before the bricks?
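A sketch of that idea as a systemd unit (the unit and container names are placeholders; `glusterd.service` is the usual unit name on systemd packagings). Because systemd stops units in the reverse of their start order, declaring the container unit `After=` glusterd means it is stopped before the bricks go away:

```ini
# /etc/systemd/system/my-container.service (sketch)
[Unit]
# Started after glusterd, therefore stopped before it on shutdown,
# while the bind-mounted Gluster volume is still alive.
After=glusterd.service
Requires=glusterd.service

[Service]
ExecStart=/usr/bin/docker start -a my-container
ExecStop=/usr/bin/docker stop my-container

[Install]
WantedBy=multi-user.target
```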

Best Regards,
Strahil Nikolov







Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-16 Thread Anant Saraswat
Hi Strahil,

Yes, we mount the fuse to the physical host and then use bind mount to provide 
access to the container.

The same physical host also runs the gluster server. Therefore, when we stop 
gluster using 'stop-all-gluster-processes.sh' on the physical host, it kills 
the fuse mount and impacts containers accessing this volume via bind.
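The layout described above, sketched as configuration (the mount options, container name, and in-container path are assumptions; the host paths come from the thread):

```
# /etc/fstab on the physical host: fuse-mount the external volume
dc.local:/docker_config  /opt/docker_config  glusterfs  defaults,_netdev  0 0

# The container then sees the same tree via a bind mount, e.g.:
#   docker run -v /opt/docker_config:/etc/myapp my-image
# Killing the host's glusterfs client process tears down
# /opt/docker_config, which breaks the bind mount inside the container.
```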

Thanks,
Anant


From: Strahil Nikolov
Sent: 16 February 2024 3:51 PM
To: Anant Saraswat; Aravinda
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

Hi Anant,

Do you use the fuse client in the container ?
Wouldn't it be more reasonable to mount the fuse and then use bind mount to 
provide access to the container ?

Best Regards,
Strahil Nikolov

On Fri, Feb 16, 2024 at 15:02, Anant Saraswat wrote:
Okay, I understand. Yes, it would be beneficial to include an option for 
skipping the client processes. This way, we could utilize the 
'stop-all-gluster-processes.sh' script with that option to stop the gluster 
server process while retaining the fuse mounts.


From: Aravinda
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org; Strahil Nikolov
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

No. If the script is used to update the GlusterFS packages in the node, then we 
need to stop the client processes as well (Fuse client is `glusterfs` process. 
`ps ax | grep glusterfs`).

The default behaviour can't be changed, but the script can be enhanced by 
adding a new option `--skip-clients` so that it can skip stopping the client 
processes.

--
Aravinda
Kadalu Technologies



 On Fri, 16 Feb 2024 16:15:22 +0530 Anant Saraswat 
 wrote ---

Hello Everyone,

We are mounting this external Gluster volume (dc.local:/docker_config) for 
docker configuration on one of the Gluster servers. When I ran the 
stop-all-gluster-processes.sh script, I wanted to stop all gluster 
server-related processes on the server, but not to unmount the external gluster 
volume mounted on the server. However, running stop-all-gluster-processes.sh 
unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data   6.1T  4.7T  1.4T  78% 
/opt/tier1data/brick
dc.local:/docker_config  100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount?

Thanks,
Anant

From: Gluster-users on behalf of Strahil Nikolov
Sent: 09 February 2024 5:23 AM
To: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster 
processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

I think the service that shutdowns the bricks on EL systems is something like 
this - right now I don't have access to my systems to check but you can extract 
the rpms and see it:

https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4

Best Regards,
Strahil Nikolov

On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts wrote:







Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-16 Thread Strahil Nikolov
Hi Anant,
Do you use the fuse client in the container? Wouldn't it be more reasonable to 
mount the fuse and then use bind mount to provide access to the container?

Best Regards,
Strahil Nikolov

On Fri, Feb 16, 2024 at 15:02, Anant Saraswat wrote:

Okay, I understand. Yes, it would be beneficial to include an option 
for skipping the client processes. This way, we could utilize the 
'stop-all-gluster-processes.sh' script with that option to stop the gluster 
server process while retaining the fuse mounts.
From: Aravinda
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org; Strahil Nikolov
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes
EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.
No. If the script is used to update the GlusterFS packages in the node, then we 
need to stop the client processes as well (Fuse client is `glusterfs` process. 
`ps ax | grep glusterfs`).
The default behaviour can't be changed, but the script can be enhanced by 
adding a new option `--skip-clients` so that it can skip stopping the client 
processes.
--
Aravinda
Kadalu Technologies


On Fri, 16 Feb 2024 16:15:22 +0530, Anant Saraswat wrote:

Hello Everyone,
We are mounting this external Gluster volume (dc.local:/docker_config) for 
docker configuration on one of the Gluster servers. When I ran the 
stop-all-gluster-processes.sh script, I wanted to stop all gluster 
server-related processes on the server, but not to unmount the external gluster 
volume mounted on the server. However, running stop-all-gluster-processes.sh 
unmounted the dc.local:/docker_config volume from the server.
/dev/mapper/tier1data    6.1T  4.7T  1.4T  78%  /opt/tier1data/brick
dc.local:/docker_config  100G   81G   19G  82%  /opt/docker_config
Do you think stop-all-gluster-processes.sh should unmount the fuse mount?
Thanks,
Anant

From: Gluster-users on behalf of Strahil Nikolov
Sent: 09 February 2024 5:23 AM
To: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes
EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.
I think the service that shutdowns the bricks on EL systems is something like 
this - right now I don't have access to my systems to check but you can extract 
the rpms and see it:
https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4
Best Regards,
Strahil Nikolov


On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts wrote:







  






Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-16 Thread Anant Saraswat
Okay, I understand. Yes, it would be beneficial to include an option for 
skipping the client processes. This way, we could utilize the 
'stop-all-gluster-processes.sh' script with that option to stop the gluster 
server process while retaining the fuse mounts.


From: Aravinda
Sent: 16 February 2024 12:36 PM
To: Anant Saraswat
Cc: ronny.adse...@amazinginternet.com; gluster-users@gluster.org; Strahil Nikolov
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

No. If the script is used to update the GlusterFS packages in the node, then we 
need to stop the client processes as well (Fuse client is `glusterfs` process. 
`ps ax | grep glusterfs`).

The default behaviour can't be changed, but the script can be enhanced by 
adding a new option `--skip-clients` so that it can skip stopping the client 
processes.

--
Aravinda
Kadalu Technologies



 On Fri, 16 Feb 2024 16:15:22 +0530 Anant Saraswat 
 wrote ---

Hello Everyone,

We are mounting this external Gluster volume (dc.local:/docker_config) for 
docker configuration on one of the Gluster servers. When I ran the 
stop-all-gluster-processes.sh script, I wanted to stop all gluster 
server-related processes on the server, but not to unmount the external gluster 
volume mounted on the server. However, running stop-all-gluster-processes.sh 
unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data   6.1T  4.7T  1.4T  78% 
/opt/tier1data/brick
dc.local:/docker_config  100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount?

Thanks,
Anant

From: Gluster-users 
mailto:gluster-users-boun...@gluster.org>> 
on behalf of Strahil Nikolov 
mailto:hunter86...@yahoo.com>>
Sent: 09 February 2024 5:23 AM
To: ronny.adse...@amazinginternet.com<mailto:ronny.adse...@amazinginternet.com> 
mailto:ronny.adse...@amazinginternet.com>>; 
gluster-users@gluster.org<mailto:gluster-users@gluster.org> 
mailto:gluster-users@gluster.org>>
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster 
processes


EXTERNAL: Do not click links or open attachments if you do not recognize the 
sender.

I think the service that shutdowns the bricks on EL systems is something like 
this - right now I don't have access to my systems to check but you can extract 
the rpms and see it:

https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4<https://urldefense.com/v3/__https://bugzilla.redhat.com/show_bug.cgi?id=1022542*c4__;Iw!!I_DbfM1H!ANXDSlATEqmc-hZeQJEeNr5LKu6Z4rpDBonAviThOtduuz84ZfDwxEkQHcrf6CFS8dpRpb0zbRYgH6UMwahQLv5u9f4cAQ$>

Best Regards,
Strahil Nikolov

On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts
mailto:ronny.adse...@amazinginternet.com>> 
wrote:




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


DISCLAIMER: This email and any files transmitted with it are confidential and 
intended solely for the use of the individual or entity to whom they are 
addressed. If you have received this email in error, please notify the sender. 
This message contains confidential information and is intended only for the 
individual named. If you are not the named addressee, you should not 
disseminate, distribute or copy this email. Please notify the sender 
immediately by email if you have received this email by mistake and delete this 
email from your system.

If you are not the intended recipient, you are notified that disclosing, 
copying, distributing or taking any action in reliance on the contents of this 
information is strictly prohibited. Thanks for your cooperation.




Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-16 Thread Aravinda
No. If the script is used to update the GlusterFS packages on the node, then we need to stop the client processes as well (the FUSE client is the `glusterfs` process; `ps ax | grep glusterfs`).

The default behaviour can't be changed, but the script can be enhanced with a new option `--skip-clients` so that it skips stopping the client processes.
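A rough sketch of what such an option could look like (hypothetical: this is not the actual script, the patterns only mirror the process names seen in this thread, and the sketch merely reports what it would stop, so it is safe to run):

```shell
#!/bin/sh
# Hypothetical sketch of a --skip-clients option for
# stop-all-gluster-processes.sh. It only *reports* what the real
# script would signal, so running it is harmless.
stop_all() {
    skip_clients=0
    for arg in "$@"; do
        [ "$arg" = "--skip-clients" ] && skip_clients=1
    done

    # Server-side daemons: management, bricks, self-heal, geo-replication.
    for pat in glusterd glusterfsd glustershd gsyncd.py; do
        echo "would stop: $pat"        # real script would pkill -f "$pat"
    done

    # FUSE mounts run as plain `glusterfs` client processes.
    if [ "$skip_clients" -eq 1 ]; then
        echo "skipping FUSE client processes (glusterfs)"
    else
        echo "would stop: glusterfs (FUSE clients)"
    fi
}

stop_all --skip-clients
```

With `--skip-clients` the last line reports that client processes were left alone; without it, the `glusterfs` FUSE processes are included, which is exactly what unmounted the dc.local:/docker_config volume described above.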



--
Aravinda
Kadalu Technologies



Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-16 Thread Anant Saraswat
Hello Everyone,

We are mounting this external Gluster volume (dc.local:/docker_config) for 
docker configuration on one of the Gluster servers. When I ran the 
stop-all-gluster-processes.sh script, I wanted to stop all gluster 
server-related processes on the server, but not to unmount the external gluster 
volume mounted on the server. However, running stop-all-gluster-processes.sh 
unmounted the dc.local:/docker_config volume from the server.

/dev/mapper/tier1data   6.1T  4.7T  1.4T  78% /opt/tier1data/brick
dc.local:/docker_config  100G   81G   19G  82% /opt/docker_config

Do you think stop-all-gluster-processes.sh should unmount the fuse mount?

Thanks,
Anant

From: Gluster-users on behalf of Strahil Nikolov
Sent: 09 February 2024 5:23 AM
To: ronny.adse...@amazinginternet.com; gluster-users@gluster.org
Subject: Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster 
processes



I think the service that shuts down the bricks on EL systems is something like this; right now I don't have access to my systems to check, but you can extract the rpms and see it:

https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4

Best Regards,
Strahil Nikolov

On Wed, Feb 7, 2024 at 19:51, Ronny Adsetts wrote:






Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-08 Thread Strahil Nikolov
I think the service that shuts down the bricks on EL systems is something like this; right now I don't have access to my systems to check, but you can extract the rpms and see it:

https://bugzilla.redhat.com/show_bug.cgi?id=1022542#c4

Best Regards,
Strahil Nikolov
 
 


Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-07 Thread Ronny Adsetts
If I might chip in here, this can cause an issue when rebooting nodes unless you make sure to stop the Gluster processes first. If you don't stop the processes, the Gluster volumes can pause for the default 42 seconds(?) until the other nodes time out the rebooting node. This is of course long enough to cause any VMs running their volumes on gluster to show I/O errors and re-mount them read-only, potentially causing all sorts of mischief.

I *think* there's a systemd solution to this somewhere, perhaps in the RedHat packages, that stops the gluster processes prior to a reboot or halt.

Certainly the Debian packages *don't* have this solution in place. I wish they did, but I've never mastered enough systemd foo to sort it out myself. :-).
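For what it's worth, a minimal sketch of such a unit, with the caveat that the unit name and ordering here are my assumptions rather than what the RedHat packages actually ship; it leans on the stop-all-gluster-processes.sh script mentioned elsewhere in this thread:

```ini
# /etc/systemd/system/gluster-stop-on-shutdown.service (hypothetical name)
[Unit]
Description=Stop all Gluster processes cleanly before reboot/halt
# Stop jobs run in reverse start order, so ordering this unit
# After=network-online.target means its ExecStop runs while the
# network is still up and peers see a clean disconnect.
After=network-online.target
Before=shutdown.target
Conflicts=shutdown.target reboot.target halt.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh

[Install]
WantedBy=multi-user.target
```

Enabled with `systemctl enable gluster-stop-on-shutdown.service`, this should let peers see the node leave cleanly instead of waiting out that ~42-second timeout (Gluster's network.ping-timeout default).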

Ronny



Re: [Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-05 Thread Aravinda
Hi Anant,

It was an intentional design decision not to stop any Gluster processes when glusterd needs to be upgraded or when glusterd crashes. Because of this, volume availability is not affected by issues with glusterd or by a glusterd upgrade. All mounts will reconnect once glusterd comes back up. CLI operations from that node may not be available, but IO will not be affected while glusterd is down (new mounts can't be created, but existing mounts should work without glusterd).



stop-all-gluster-processes.sh is available as part of the installation and can be used to stop all the processes (check /usr/share/glusterfs/scripts).


--
Thanks and Regards
Aravinda
Kadalu Technologies








[Gluster-users] Graceful shutdown doesn't stop all Gluster processes

2024-02-05 Thread Anant Saraswat
Hello Everyone,


I am using GlusterFS 9.4, and whenever we use the systemctl command to stop the 
Gluster server, it leaves many Gluster processes running. So, I just want to 
check how to shut down the Gluster server in a graceful manner.


Is there any specific sequence or trick I need to follow? Currently, I am using 
the following command:


[root@master2 ~]# systemctl stop glusterd.service

[root@master2 ~]# ps aux | grep gluster
root 2710138 14.1  0.0 2968372 216852 ?  Ssl  Jan27 170:27 
/usr/sbin/glusterfsd -s master2 --volfile-id 
tier1data.master2.opt-tier1data2019-brick -p 
/var/run/gluster/vols/tier1data/master2-opt-tier1data2019-brick.pid -S 
/var/run/gluster/97da28e3d5c23317.socket --brick-name /opt/tier1data2019/brick 
-l /var/log/glusterfs/bricks/opt-tier1data2019-brick.log --xlator-option 
*-posix.glusterd-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d --process-name brick 
--brick-port 49152 --xlator-option tier1data-server.listen-port=49152
root 2710196  0.0  0.0 1298116 11544 ?   Ssl  Jan27   0:01 
/usr/sbin/glusterfs -s localhost --volfile-id shd/tier1data -p 
/var/run/gluster/shd/tier1data/tier1data-shd.pid -l 
/var/log/glusterfs/glustershd.log -S /var/run/gluster/1ac2284f75671ffa.socket 
--xlator-option *replicate*.node-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d 
--process-name glustershd --client-pid=-6
root 3730742  0.0  0.0 288264 14388 ?Ssl  18:44   0:00 
/usr/bin/python3 /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py 
--path=/opt/tier1data2019/brick  --monitor -c 
/var/lib/glusterd/geo-replication/tier1data_drtier1data_drtier1data/gsyncd.conf 
--iprefix=/var :tier1data --glusterd-uuid=c1591bde-df1c-41b4-8cc3-5eaa02c5b89d 
drtier1data::drtier1data
root 3730763  2.4  0.0 2097216 35904 ?   Sl   18:44   0:09 python3 
/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py worker tier1data 
drtier1data::drtier1data --feedback-fd 9 --local-path /opt/tier1data2019/brick 
--local-node master2 --local-node-id c1591bde-df1c-41b4-8cc3-5eaa02c5b89d 
--slave-id eca32e08-c3f8-4883-bef5-84bfb89f4d56 --subvol-num 1 
--resource-remote drtier1data --resource-remote-id 
28f3e75b-56aa-43a1-a0ea-a0e5d44d59ea
root 3730768  0.7  0.0  50796  9668 ?S18:44   0:02 ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-ep7a14up/75785990b3233f5dbbab9f43cc3ed895.sock drtier1data 
/nonexistent/gsyncd slave tier1data drtier1data::drtier1data --master-node 
master2 --master-node-id c1591bde-df1c-41b4-8cc3-5eaa02c5b89d --master-brick 
/opt/tier1data2019/brick --local-node drtier1data --local-node-id 
28f3e75b-56aa-43a1-a0ea-a0e5d44d59ea --slave-timeout 120 --slave-log-level INFO 
--slave-gluster-log-level INFO --slave-gluster-command-dir /usr/sbin 
--master-dist-count 1
root 3730795  1.1  0.0 1108268 55596 ?   Ssl  18:44   0:04 
/usr/sbin/glusterfs --aux-gfid-mount --acl --log-level=INFO 
--log-file=/var/log/glusterfs/geo-replication/tier1data_drtier1data_drtier1data/mnt-opt-tier1data2019-brick.log
 --volfile-server=localhost --volfile-id=tier1data --client-pid=-1 
/tmp/gsyncd-aux-mount-9210kh43
root 3772665  0.0  0.0  12208  2400 ?S18:51   0:00 rsync -aR0 
--inplace --files-from=- --super --stats --numeric-ids --no-implied-dirs 
--existing --xattrs --acls --ignore-missing-args . -e ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-ep7a14up/75785990b3233f5dbbab9f43cc3ed895.sock 
drtier1data:/proc/897118/cwd
root 3772667  0.0  0.0  44156  5640 ?S18:51   0:00 ssh 
-oPasswordAuthentication=no -oStrictHostKeyChecking=no -i 
/var/lib/glusterd/geo-replication/secret.pem -p 22 -oControlMaster=auto -S 
/tmp/gsyncd-aux-ssh-ep7a14up/75785990b3233f5dbbab9f43cc3ed895.sock drtier1data 
rsync --server -logDtpAXRe.LsfxC --super --stats --numeric-ids --existing 
--inplace --no-implied-dirs . /proc/897118/cwd

For now, we are using https://github.com/gluster/glusterfs/blob/master/extras/stop-all-gluster-processes.sh to kill all the remaining processes.
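That script's approach can be sketched roughly as follows (simplified and made print-only so it is safe to run anywhere; see the linked script for the real pattern matching and signal handling):

```shell
#!/bin/sh
# Rough, print-only sketch of stop-all-gluster-processes.sh's logic:
# match known gluster process patterns, send SIGTERM, then escalate
# to SIGKILL for anything still alive.
signal_matching() {
    sig="$1"; pat="$2"
    # The real script would do something like: pkill -"$sig" -f "$pat"
    echo "would send SIG$sig to processes matching: $pat"
}

PATTERNS="glusterd glusterfsd glustershd gsyncd.py glusterfs"

for pat in $PATTERNS; do
    signal_matching TERM "$pat"
done
sleep 1            # the real script waits before escalating
for pat in $PATTERNS; do
    signal_matching KILL "$pat"
done
```

Note that the `glusterfs` pattern also matches FUSE client mounts, which is why running the script unmounts client volumes, as discussed elsewhere in this thread.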

Thanks,
Anant
