Thanks Jacob and Anthony for sharing the details.

I have tried to understand the list of supported MBeans but am finding it
tough to understand completely. I can see "DiskStoreMXBean", and the
document says it can provide region-specific disk usage.

For my experiment, I am looking for *MBeans that provide a data node's
(member's) specific heap and disk usage*. It would be a great help if
someone could guide me on these MBeans and how to use them to get the
required stats (or point me to a reference outlining these details).
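For context, here is a minimal sketch of the kind of stats I am after. It is not Geode-specific: it reads heap usage from the standard `java.lang:type=Memory` platform MXBean (which every member JVM exposes over the same JMX connection as the Geode MBeans) and disk usage via `java.io.File`. The `"."` directory is a stand-in for a real disk-store path.

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Sketch: heap usage via the JDK's MemoryMXBean and disk usage for the
// partition holding a directory. In-process here for simplicity; from an
// external process the same MemoryMXBean attributes are reachable over a
// remote JMX connection to the member.
public class MemberResourceStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.println("heap used (bytes): " + heap.getUsed());
        System.out.println("heap max (bytes):  " + heap.getMax());

        // "." is an illustrative stand-in for a disk-store directory.
        File diskStoreDir = new File(".");
        long total = diskStoreDir.getTotalSpace();
        long usable = diskStoreDir.getUsableSpace();
        System.out.println("disk used (bytes): " + (total - usable));
    }
}
```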

Thanks
-Steve

On Wed, Jul 8, 2020 at 10:27 PM Anthony Baker <bak...@vmware.com> wrote:

> Another option is JMX, see
> https://geode.apache.org/docs/guide/19/managing/management/list_of_mbeans.html
> .
>
> Anthony
>
>
> On Jul 8, 2020, at 9:24 AM, Jacob Barrett <jabarr...@vmware.com> wrote:
>
> Steve,
>
> Geode is in a transition from its on-disk proprietary stats format to
> utilizing Micrometer.io. Some of what you are looking for may already be
> exposed via Micrometer. If so, you can just use whatever registry you
> choose to publish those stats. If the metric you need is not converted to
> Micrometer, it's pretty easy in most cases to refactor it, and we would
> welcome a JIRA or, even better, a PR.
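[For anyone following along: publishing a stat through a Micrometer registry, as Jacob describes, looks roughly like this minimal sketch. It assumes micrometer-core is on the classpath and uses a SimpleMeterRegistry; the gauge name "jvm.heap.used" is illustrative, not a Geode-defined meter.]

```java
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

// Sketch: register a heap-usage gauge with a Micrometer registry.
// SimpleMeterRegistry is in-memory; a real deployment would use a
// registry for its monitoring backend (Prometheus, JMX, etc.).
public class HeapGaugeSketch {
    public static void main(String[] args) {
        SimpleMeterRegistry registry = new SimpleMeterRegistry();
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        Gauge.builder("jvm.heap.used", memory,
                      m -> m.getHeapMemoryUsage().getUsed())
             .baseUnit("bytes")
             .register(registry);
        System.out.println(registry.get("jvm.heap.used").gauge().value());
    }
}
```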
>
> -Jake
>
>
> On Jul 7, 2020, at 9:58 PM, steve mathew <steve.mathe...@gmail.com> wrote:
>
> Hello Geode Dev and users
>
> We have a requirement to constantly monitor resource utilization (disk
> and heap usage) for the cluster nodes from external processes.
> We are seeking help to understand the recommended way (or available APIs)
> to get this from a separate process. We need to trigger actions/custom
> logic if usage goes above some threshold.
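[The threshold-trigger part of the requirement can be sketched with the JDK alone: poll a usage ratio and fire a callback when it crosses a limit. The 0.85 threshold and the println callback are illustrative; a real monitor would poll the stats over a remote JMX connection rather than in-process.]

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.function.DoubleConsumer;

// Sketch: run a callback when a usage ratio exceeds a threshold.
public class ThresholdMonitor {
    static void checkOnce(double usedRatio, double threshold,
                          DoubleConsumer onBreach) {
        if (usedRatio > threshold) {
            onBreach.accept(usedRatio);
        }
    }

    public static void main(String[] args) {
        MemoryUsage heap =
            ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        double ratio = (double) heap.getUsed() / heap.getMax();
        // 0.85 is an illustrative threshold, not a Geode default.
        checkOnce(ratio, 0.85,
                  r -> System.out.println("ALERT: heap at " + r));
        System.out.println("current heap ratio: " + ratio);
    }
}
```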
>
> Thanks
> Steve.
>