Hi,

Ahh, those graphs aren’t related to mempools. Mempools are derived from a single “usage” value exposed by a different SNMP MIB. They’re sort of useless on computers because they generally just show the memory as full all of the time, because of how the SNMP daemon exposes the metric.

It only gives us a single datapoint for “used” without any breakdown of it. It’s exposed much the same way you’d expect a file system’s usage to be exposed (I actually think it’s in the filesystem MIB!)
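To illustrate the problem, here is a minimal sketch (with made-up numbers) of all we can compute from a filesystem-style storage row, i.e. a (size, used, allocation-units) triple of the kind HOST-RESOURCES-MIB-style storage tables expose. The function name is hypothetical, not Observium code:

```python
# Sketch: a mempool is one filesystem-style (size, used) pair, so the
# only derivable metric is a single percentage with no breakdown into
# buffers/cache/shared.
def mempool_perc(size_units, used_units, alloc_unit_bytes):
    """Percentage used, as a storage-table-style row would yield it."""
    used_bytes = used_units * alloc_unit_bytes
    total_bytes = size_units * alloc_unit_bytes
    return 100.0 * used_bytes / total_bytes

# On a Linux host the page cache counts as "used", so this number
# tends to sit near 100% regardless of real memory pressure.
print(mempool_perc(1000, 950, 4096))  # 95.0
```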

We might be able to synthesise a better memory pool entity which shows some other variant of this data, generated from the more granular data, like we do with the fake “average” processor entity on Unix systems.
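The idea is roughly this (a sketch with hypothetical helper and field names, not actual Observium internals): collapse the granular per-entity readings into one synthetic entity and flag it so the UI can tell it apart from real hardware:

```python
# Sketch: deriving a synthetic aggregate entity from granular data,
# in the spirit of the fake "average" processor entity.
def synthesise_average(entities):
    """Collapse per-entity usage readings into one synthetic entity."""
    usages = [e["usage"] for e in entities]
    return {
        "label": "Average",
        "usage": sum(usages) / len(usages),
        "synthetic": True,  # flag so consumers know it isn't real hardware
    }

cpus = [{"label": f"cpu{i}", "usage": u} for i, u in enumerate([10.0, 50.0, 30.0])]
avg = synthesise_average(cpus)
print(avg["usage"])  # 30.0
```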

I’m not sure how successful this would be across multiple systems; last time I looked, different versions of Linux were exposing different sets of data, and it was messy to try to stay consistent between them.

Adam.

Sent from my iPhone

On 18 Aug 2023, at 00:35, NGS Webmaster via observium <observium@lists.observium.org> wrote:


Good Morning Adam,

Thank you for your response. Just to be clear, you are stating that there is no way for Observium to calculate the amount of RAM being used without including the RAM used by shared/buff/cache? I ask because we have alerts set up on the mempool_perc metric, and a few of our systems hold a lot of RAM in shared/buff/cache, which produces constant alerts even though the amount of RAM actually in use (excluding shared/buff/cache) is below the alert threshold.

Regards

On Sun, Jul 23, 2023 at 2:51 PM Adam Armstrong via observium <observium@lists.observium.org> wrote:
You're reading the graph wrong. The second line is "excluding cached, shared, buffers". The actual cached number is towards the bottom of the legend.

It's functionally not possible to get the same data from SNMP that you get from free; it just doesn't expose the numbers in the same way. There is a lot of messy maths happening to try to approximate the free output from the data we get over SNMP, but it never matches. I think we're missing a value or two needed to calculate everything properly.
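A rough sketch of the maths being described, assuming the agent returns UCD-SNMP-MIB-style values (memTotalReal, memAvailReal, memBuffer, memCached, all in kB); whether additional values such as shared memory are present varies by agent and kernel version, which is part of why the result never quite matches free:

```python
# Sketch: approximating free's "used" column from UCD-SNMP-MIB-style
# values. Inputs and outputs are in kB.
def used_excluding_cache(mem_total_real, mem_avail_real, mem_buffer, mem_cached):
    """Return (raw_used, used_excl).

    raw_used  - total minus free, i.e. still including buffers/cache
    used_excl - the free-style "used" with buffers and cache subtracted
    """
    raw_used = mem_total_real - mem_avail_real
    used_excl = raw_used - mem_buffer - mem_cached
    return raw_used, used_excl

# Example with made-up kB values:
raw, excl = used_excluding_cache(
    mem_total_real=32_768_000,  # ~32 GB
    mem_avail_real=1_024_000,   # "free" as the agent reports it
    mem_buffer=512_000,
    mem_cached=24_576_000,      # large page cache
)
print(raw, excl)  # 31744000 6656000
```

If a value like shared memory is missing from the walk, there is nothing to subtract for it, so the approximation drifts from what free prints on the host.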

Long ago I was going to expose it with custom code, but the Linux kernel changed the way it exposed the numbers a couple of times too, so I didn't bother.

adam.

NGS Webmaster via observium wrote on 20/07/2023 17:43:


---------- Forwarded message ---------
From: NGS Webmaster <ngs.webmas...@noaa.gov>
Date: Fri, Jun 30, 2023 at 12:34 PM
Subject: Memory plots are mislabeled or incorrect in Observium
To: <observium-subscr...@lists.observium.org>


Hello,

The memory plots appear to be mislabeled in Observium. Below is a plot for one of our systems.
<image.png>

It reports that 32.19GB of memory is being used and that 5.67GB of memory is cached. However, when I run the free -h command on the system, the values for these fields are different.
<image.png>

This makes it difficult to accurately monitor memory usage and set up accurate alerts for our systems.
Can this bug be fixed? Our version information is below.
<image.png>

Thank you 


_______________________________________________
observium mailing list -- observium@lists.observium.org
To unsubscribe send an email to observium-le...@lists.observium.org

