Re: [Gluster-users] Monitoring tools for GlusterFS

2020-09-03 Thread Sachidananda Urs
On Fri, Sep 4, 2020 at 10:00 AM Artem Russakovskii 
wrote:

> Great, thanks. Thoughts on distributing updates via various repos for
> package managers? For example, I'd love to be able to update it via zypper
> on OpenSUSE.
>

Artem, I'm not planning to create deb/rpm/pkg packages, but I'll be glad to
help if any volunteers want to create them.

-sac
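
For any volunteer who wants to pick this up, a minimal RPM spec sketch is
below. It assumes gstatus ships as a single self-contained binary (as the
releases suggest) and packages just that file; the license tag and paths are
assumptions, not a tested spec:

Name:           gstatus
Version:        1.0.3
Release:        1%{?dist}
Summary:        Health overview for a GlusterFS trusted pool
License:        GPLv3
URL:            https://github.com/gluster/gstatus
Source0:        gstatus

%description
Command-line tool that summarizes the health of a GlusterFS cluster,
its volumes, and its bricks in a single view.

%install
# install the prebuilt binary into the standard bindir
install -D -m 0755 %{SOURCE0} %{buildroot}%{_bindir}/gstatus

%files
%{_bindir}/gstatus

From a spec like this, rpmbuild can produce a package that zypper or dnf
could consume once it is published in a repository.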





Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://bluejeans.com/441850968

Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Monitoring tools for GlusterFS

2020-09-03 Thread Artem Russakovskii
Great, thanks. Thoughts on distributing updates via various repos for
package managers? For example, I'd love to be able to update it via zypper
on OpenSUSE.

On Sun, Aug 30, 2020, 6:55 AM Sachidananda Urs  wrote:

>
>
> On Sat, Aug 29, 2020 at 11:10 PM Artem Russakovskii 
> wrote:
>
>> Another small tweak: in your README, you have this:
>> "curl -LO v1.0.3 gstatus (download)"
>> This makes it impossible to easily copy-paste. You should put the actual
>> link in there and wrap it in code formatting blocks.
>>
>
> Ack. PR: https://github.com/gluster/gstatus/pull/48 should fix the issue.
>
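
For reference, the corrected README snippet would presumably end up looking
something like this, with a real URL inside a code block so it can be copied
directly. The URL below merely follows GitHub's usual release-asset pattern
and is an assumption, not text taken from the PR:

# download the standalone gstatus binary (URL pattern assumed, not verified)
curl -LO https://github.com/gluster/gstatus/releases/download/v1.0.3/gstatus
chmod +x gstatus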






Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Adrian Quintero
Amar,

I am fairly new here, but am in constant communication with Strahil and
would also like to mention he has helped me greatly on many occasions with
his guidance and knowledge with GLUSTERhe rocks !

Thank you Strahil.

Regards,

Adrian

On Thu, Sep 3, 2020 at 11:37 PM Amar Tumballi  wrote:

> [Off topic]
>
> On Thu, Sep 3, 2020 at 8:30 PM Strahil Nikolov 
> wrote:
>
>> I'm not a gluster developer
>
>
> Wanted to take time to appreciate your efforts to make sure the users of
> Gluster are happy!
>
> Strahil, you are doing a wonderful job keeping an open-source project
> progressing. Users like you make up the community (and thanks to many others
> who take time to respond to fellow users)! Thank you :-)
>
> Regards,
> Amar
>
>> and I haven't taken that decision.
>> As we are talking about storage, it's better to play it safe and avoid any
>> risks. Imagine a distributed volume where you restart the service instead of
>> reloading... data will be unavailable and the situation can get ugly.
>>
>> Best Regards,
>> Strahil Nikolov
>>
>>
>>
>> On Thursday, September 3, 2020, at 10:11:42 GMT+3, Ward Poelmans <
>> wpoel...@gmail.com> wrote:
>>
>>
>>
>>
>>
>> Hi Strahil,
>>
>> On 2/09/2020 21:30, Strahil Nikolov wrote:
>>
>> > you shouldn't do that, as it is intentional - glusterd is just a
>> > management layer and you might need to restart it in order to reconfigure a
>> > node. You don't want to kill your bricks to introduce a change, right?
>>
>> Starting up daemons in one systemd unit and killing them with another is
>> a bit weird? Can't a reconfigure happen through an ExecReload? Or let the
>> management daemon and the actual brick daemons run under different
>> systemd units?
>>
>> > In CentOS there is a dedicated service that takes care of shutting down
>> > all processes and avoids such a freeze.
>>
>> Thanks, that should fix the issue too.
>>
>>
>> Ward
>>
>
>
> --
> --
> https://kadalu.io
> Container Storage made easy!
>
>


-- 
Adrian Quintero






Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Amar Tumballi
[Off topic]

On Thu, Sep 3, 2020 at 8:30 PM Strahil Nikolov 
wrote:

> I'm not a gluster developer


Wanted to take time to appreciate your efforts to make sure the users of
Gluster are happy!

Strahil, you are doing a wonderful job keeping an open-source project
progressing. Users like you make up the community (and thanks to many others
who take time to respond to fellow users)! Thank you :-)

Regards,
Amar

> and I haven't taken that decision.
> As we are talking about storage, it's better to play it safe and avoid any
> risks. Imagine a distributed volume where you restart the service instead of
> reloading... data will be unavailable and the situation can get ugly.
>
> Best Regards,
> Strahil Nikolov
>
>
>
> On Thursday, September 3, 2020, at 10:11:42 GMT+3, Ward Poelmans <
> wpoel...@gmail.com> wrote:
>
>
>
>
>
> Hi Strahil,
>
> On 2/09/2020 21:30, Strahil Nikolov wrote:
>
> > you shouldn't do that, as it is intentional - glusterd is just a
> > management layer and you might need to restart it in order to reconfigure a
> > node. You don't want to kill your bricks to introduce a change, right?
>
> Starting up daemons in one systemd unit and killing them with another is
> a bit weird? Can't a reconfigure happen through an ExecReload? Or let the
> management daemon and the actual brick daemons run under different
> systemd units?
>
> > In CentOS there is a dedicated service that takes care of shutting down
> > all processes and avoids such a freeze.
>
> Thanks, that should fix the issue too.
>
>
> Ward
>


-- 
--
https://kadalu.io
Container Storage made easy!






Re: [Gluster-users] performance

2020-09-03 Thread Computerisms Corporation

Hi Strahil,

For the sake of completeness, I am reporting back that your suspicions
seem to have been validated. I talked to the data center and they made
some changes. We talked again some days later and they made some more
changes, and for several days now the load average has stayed
consistently below 5 on both servers. I still have some issues to deal
with, but the performance of the machines is no longer a problem.


I owe you a steak and a beer, at the least; I sincerely hope chance 
allows me to pay up on that at some point in the future.


On 2020-08-21 12:02 a.m., Computerisms Corporation wrote:

Hi Strahil,


You can use the 'virt-what' binary to find out whether, and what type of,
virtualization is in use.


Cool, I did not know about that. Trouble server:

root@moogle:/# virt-what
hyperv
kvm

good server:
root@mooglian:/# virt-what
kvm



I have a suspicion you are on top of OpenStack (which uses Ceph), so I
guess you can try to get more info.
For example, an OpenStack instance can have '0x1af4' in
'/sys/block/vdX/device/vendor' (replace X with the actual device letter).

Another check could be:
/usr/lib/udev/scsi_id -g -u -d /dev/vda


This command returns no output on the bad server.  Good server returns:

root@mooglian:/# /usr/lib/udev/scsi_id -g -u -d /dev/vda
-bash: /usr/lib/udev/scsi_id: No such file or directory

And also, you can try to take a look with smartctl from the smartmontools
package:

smartctl -a /dev/vdX


Both servers return:
/dev/vda: Unable to detect device type

When I asked them about this earlier this week I was told the two 
servers are identical, but I guess there is something different about 
the server giving me trouble.  I will go back to them and see what they 
have to say.  Thanks for pointing me at this...
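
For anyone following along, the checks suggested in this thread can be
bundled into one small script. This is a sketch assuming a virtio disk at
/dev/vda and the virt-what, udev, and smartmontools packages installed;
adjust the device letter as needed:

#!/bin/sh
# report which hypervisor(s) we appear to be running on
virt-what

# virtio devices report vendor 0x1af4 here, per the note above
cat /sys/block/vda/device/vendor

# stable SCSI identifier, when the helper exists at this path
[ -x /usr/lib/udev/scsi_id ] && /usr/lib/udev/scsi_id -g -u -d /dev/vda

# SMART data, if the hypervisor exposes it to the guest
smartctl -a /dev/vda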













Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Strahil Nikolov
I'm not a gluster developer and I haven't taken that decision.
As we are talking about storage, it's better to play it safe and avoid any
risks. Imagine a distributed volume where you restart the service instead of
reloading... data will be unavailable and the situation can get ugly.

Best Regards,
Strahil Nikolov
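
For readers landing on this thread later: the separation described above is
implemented in the stock unit with systemd's KillMode. The snippet below is a
from-memory sketch of the relevant directives, not verbatim from any
particular glusterfs release:

[Service]
Type=forking
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
# Only the main glusterd process is killed on restart/stop; the brick
# (glusterfsd) and self-heal daemons it spawned keep running, so volume
# data stays available across a management-layer restart.
KillMode=process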



On Thursday, September 3, 2020, at 10:11:42 GMT+3, Ward Poelmans
wrote:





Hi Strahil,

On 2/09/2020 21:30, Strahil Nikolov wrote:

> you shouldn't do that, as it is intentional - glusterd is just a management
> layer and you might need to restart it in order to reconfigure a node. You
> don't want to kill your bricks to introduce a change, right?

Starting up daemons in one systemd unit and killing them with another is
a bit weird? Can't a reconfigure happen through an ExecReload? Or let the
management daemon and the actual brick daemons run under different
systemd units?

> In CentOS there is a dedicated service that takes care of shutting down all
> processes and avoids such a freeze.

Thanks, that should fix the issue too.


Ward










Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Ward Poelmans
Hi Joe,

On 2/09/2020 23:08, Joe Julian wrote:
>> In CentOS there is a dedicated service that takes care of shutting down all
>> processes and avoids such a freeze
> If you didn't stop your network interfaces as part of the shutdown, this
> wouldn't happen either. The final kill will kill the glusterfsd
> processes, closing the TCP connections properly and preventing the
> clients from waiting for the server to come back.

Yes, a `pkill gluster` also gives a 'clean' shutdown.

> The problem you're seeing is that the network is being shut down -
> preventing the clients from getting the proper TCP termination.

I guess systemd kills the network first before it terminates all
remaining processes that were not stopped by a unit.

Ward
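
Following that reasoning, a small helper unit can make the kill happen while
the network is still up: systemd stops units in the reverse of their start
order, so a unit ordered after the network is stopped before the network goes
down. The unit name and ExecStop command below are assumptions for
illustration, not the actual CentOS packaging:

# /etc/systemd/system/gluster-stop-helper.service (hypothetical name)
[Unit]
Description=Stop GlusterFS brick processes cleanly at shutdown
After=network-online.target glusterd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
# runs at shutdown, before the network is torn down, so the bricks'
# TCP connections are closed properly
ExecStop=/usr/bin/pkill gluster

[Install]
WantedBy=multi-user.target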






Re: [Gluster-users] systemd kill mode

2020-09-03 Thread Ward Poelmans
Hi Strahil,

On 2/09/2020 21:30, Strahil Nikolov wrote:

> you shouldn't do that, as it is intentional - glusterd is just a management
> layer and you might need to restart it in order to reconfigure a node. You
> don't want to kill your bricks to introduce a change, right?

Starting up daemons in one systemd unit and killing them with another is
a bit weird? Can't a reconfigure happen through an ExecReload? Or let the
management daemon and the actual brick daemons run under different
systemd units?
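
Hypothetically, the ExecReload route could be expressed with a drop-in like
the one below. It assumes glusterd would re-read its configuration on SIGHUP,
which is an assumption about the daemon, not documented behaviour:

[Service]
# reload configuration in place instead of restarting the daemon
ExecReload=/bin/kill -HUP $MAINPID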

> In CentOS there is a dedicated service that takes care of shutting down all
> processes and avoids such a freeze.

Thanks, that should fix the issue too.

Ward



