The memory that a Kafka broker uses is the Java heap plus the page cache. If
you're able to split your memory metrics into memory-used and memory-cached, you
should see that the majority of a broker's memory usage is cached memory.
As a broker receives data from producers, the data first enters the page cache.
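The used/cached split described above can be checked on Linux by parsing `/proc/meminfo` the way a metrics agent would. A minimal sketch (the function name and sample numbers are mine; real values come from the broker host):

```python
# Split total memory into "used" and "cached" from /proc/meminfo-style
# text. On a healthy Kafka broker most memory should land in "cached".
def split_meminfo(text):
    fields = {}
    for line in text.strip().splitlines():
        key, value = line.split(":", 1)
        fields[key] = int(value.strip().split()[0])  # values are in kB
    # Reclaimable slab is usually counted as cache, like `free` does.
    cached = fields["Cached"] + fields.get("SReclaimable", 0)
    used = fields["MemTotal"] - fields["MemFree"] - fields["Buffers"] - cached
    return used, cached

# Illustrative numbers for a 32 GB broker host:
sample = """MemTotal: 32000000 kB
MemFree: 2000000 kB
Buffers: 500000 kB
Cached: 26000000 kB
SReclaimable: 1000000 kB"""

used_kb, cached_kb = split_meminfo(sample)
print(f"used: {used_kb} kB, cached: {cached_kb} kB")
```

On a live host you would read `open("/proc/meminfo").read()` instead of the sample string.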
Hi Steve,
We are using the Prometheus JMX exporter with Prometheus to scrape the
metrics; the memory numbers we mentioned come from those scraped metrics.
JMX exporter:
https://github.com/prometheus/jmx_exporter/blob/master/README.md
Thanks,
Ramm.
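For reference, a minimal jmx_exporter javaagent config along these lines might look like the sketch below (the bean pattern is only an example; adjust it to the deployed broker version). Note that the agent's built-in JVM collectors already expose heap usage (e.g. `jvm_memory_bytes_used{area="heap"}`), while page cache is an OS-level figure that typically comes from node_exporter (e.g. `node_memory_Cached_bytes`), not from JMX.

```yaml
# Illustrative jmx_exporter config; pattern and metric name are examples.
lowercaseOutputName: true
rules:
  - pattern: "kafka.server<type=BrokerTopicMetrics, name=(.+)><>OneMinuteRate"
    name: kafka_server_brokertopicmetrics_$1
```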
On Fri, Apr 12, 2019 at 12:43 PM Steve Howard wrote:
Hi Rammohan,
How are you measuring "Kafka seems to be reserving most of the memory"?
Thanks,
Steve
On Thu, Apr 11, 2019 at 11:53 PM Rammohan Vanteru wrote:
Hi Users,
As per the article here:
https://docs.confluent.io/current/kafka/deployment.html#memory the memory
requirement is roughly calculated with the formula write throughput * 30
(buffer time in seconds), which fits our experiment, i.e. 30 MB/s * 30 ≈
900 MB. Follow-up questions:
- How do we e
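The sizing rule quoted in the original mail is simple enough to sanity-check as arithmetic (a trivial helper; the function name is mine, the 30-second default is the buffer time from the Confluent doc):

```python
# Buffer memory estimate from the Confluent sizing rule:
# required memory (MB) ~ write throughput (MB/s) * buffer time (s).
def buffer_estimate_mb(write_mb_per_s, buffer_seconds=30):
    return write_mb_per_s * buffer_seconds

# 30 MB/s of writes with a 30 s buffer -> 900 MB, matching the
# experiment described in the thread.
print(buffer_estimate_mb(30))
```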