Hi,

Thanks for the update.
I agree with going ahead with the JVM heap usage enhancement.

+1

Penghui

On Mon, Feb 28, 2022 at 5:33 PM Jiuming Tao <jm...@streamnative.io.invalid>
wrote:

> Hi Penghui,
>
> I have read the `io.prometheus.client.exporter.HttpServer` source code. In
> the `HttpMetricsHandler#handle` method, it uses a thread-local cached
> `ByteArrayOutputStream`, which is similar to our current implementation
> (with heap buffer resizes and mem_copy).
> It consumes a lot of heap memory, and even worse, that heap memory will
> never be released, because it stays cached in the thread local.
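>
> As a minimal illustration of that caching pattern (this is not the actual
> Prometheus client source; the class and method names below are made up):
>
> ```java
> import java.io.ByteArrayOutputStream;
> import java.nio.charset.StandardCharsets;
>
> final class ThreadLocalBufferSketch {
>
>     // Each handler thread keeps its own ByteArrayOutputStream and reuses
>     // it, so a buffer that once grew to hold a large scrape never shrinks
>     // and is only freed when the thread dies.
>     private static final ThreadLocal<ByteArrayOutputStream> RESPONSE =
>             ThreadLocal.withInitial(() -> new ByteArrayOutputStream(1 << 20));
>
>     static byte[] render(String metricsText) {
>         ByteArrayOutputStream out = RESPONSE.get();
>         out.reset(); // keeps the grown backing array
>         byte[] bytes = metricsText.getBytes(StandardCharsets.UTF_8);
>         out.write(bytes, 0, bytes.length); // may resize + arraycopy internally
>         return out.toByteArray();          // copies the whole payload again
>     }
> }
> ```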
>
> Thanks,
> Tao Jiuming
>
> > On Feb 28, 2022, at 5:00 PM, PengHui Li <peng...@apache.org> wrote:
> >
> > Hi Jiuming,
> >
> > Could you please check whether the Prometheus client
> > can be used to reduce the JVM heap usage?
> > If yes, I think we can consider switching to the
> > Prometheus client instead of the current implementation.
> > Otherwise, we'd better keep this discussion focused on
> > the heap memory usage enhancement, since using the
> > Prometheus client to refactor the current implementation
> > would be a big project.
> >
> > Thanks,
> > Penghui
> >
> > On Sun, Feb 27, 2022 at 12:22 AM Jiuming Tao
> <jm...@streamnative.io.invalid>
> > wrote:
> >
> >> Hi all,
> >> https://github.com/apache/pulsar/pull/14453
> >> Please take a look.
> >>
> >> Thanks,
> >> Tao Jiuming
> >>
> >>> On Feb 24, 2022, at 1:05 AM, Jiuming Tao <jm...@streamnative.io> wrote:
> >>>
> >>> Hi all,
> >>>>
> >>>> 2. When hundreds of MB of metrics data are collected, it causes high
> >>>> heap memory usage, high CPU usage and GC pressure. The
> >>>> `PrometheusMetricsGenerator#generate` method uses
> >>>> `ByteBufAllocator.DEFAULT.heapBuffer()` to allocate memory for writing
> >>>> the metrics data. The initial capacity of
> >>>> `ByteBufAllocator.DEFAULT.heapBuffer()` is 256 bytes; every time the
> >>>> buffer resizes, the capacity doubles to the next power of 2 (256 bytes
> >>>> to 512 bytes, and so on), and the existing content is copied with a
> >>>> `mem_copy` operation.
> >>>> If I want to write 100 MB of data into the buffer, the final buffer
> >>>> size is 128 MB, and the total memory allocated along the way is close
> >>>> to 256 MB (256 bytes + 512 bytes + 1 KB + ... + 64 MB + 128 MB). Once
> >>>> the buffer size exceeds the Netty chunk size (16 MB), it is allocated
> >>>> as an UnpooledHeapByteBuf in the heap. After the metrics data is
> >>>> written into the buffer, it is returned to the client via Jetty, and
> >>>> Jetty copies it into its own buffer with yet another heap allocation!
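> >>>>
> >>>> A minimal sketch that makes the resize behaviour visible (the class
> >>>> name is made up; it simply prints whenever Netty grows the buffer):
> >>>>
> >>>> ```java
> >>>> import io.netty.buffer.ByteBuf;
> >>>> import io.netty.buffer.ByteBufAllocator;
> >>>>
> >>>> public final class HeapBufferGrowthSketch {
> >>>>     public static void main(String[] args) {
> >>>>         // Starts from the same 256-byte heap buffer as the generator.
> >>>>         ByteBuf buf = ByteBufAllocator.DEFAULT.heapBuffer(256);
> >>>>         byte[] chunk = new byte[1024 * 1024]; // 1 MB of metrics text
> >>>>         int lastCapacity = buf.capacity();
> >>>>         for (int i = 0; i < 100; i++) {   // write ~100 MB in total
> >>>>             buf.writeBytes(chunk);        // growth copies the old bytes
> >>>>             if (buf.capacity() != lastCapacity) {
> >>>>                 System.out.println("resized to " + buf.capacity() + " bytes");
> >>>>                 lastCapacity = buf.capacity();
> >>>>             }
> >>>>         }
> >>>>         buf.release();
> >>>>     }
> >>>> }
> >>>> ```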
> >>>> In order to save memory, avoid high CPU usage (from the repeated
> >>>> allocations and `mem_copy` operations) and reduce GC pressure, I want
> >>>> to change `ByteBufAllocator.DEFAULT.heapBuffer()` to
> >>>> `ByteBufAllocator.DEFAULT.compositeDirectBuffer()`. A composite buffer
> >>>> grows by adding components, so it avoids the `mem_copy` operations and
> >>>> the huge allocations (CompositeDirectByteBuf is a bit slower to read
> >>>> and write, but it is worth it). After writing the data, I will call
> >>>> `HttpOutput#write(ByteBuffer)` to send it to the client; that method
> >>>> does not cause a `mem_copy` (I have to wrap the ByteBuf as a
> >>>> ByteBuffer, and wrapping it is zero-copy).
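> >>>>
> >>>> A rough sketch of that write path, assuming the generated metrics
> >>>> arrive as text chunks (the class, method, and parameter names are
> >>>> illustrative, not the actual PR code):
> >>>>
> >>>> ```java
> >>>> import io.netty.buffer.ByteBufAllocator;
> >>>> import io.netty.buffer.CompositeByteBuf;
> >>>> import org.eclipse.jetty.server.HttpOutput;
> >>>>
> >>>> import javax.servlet.ServletOutputStream;
> >>>> import javax.servlet.http.HttpServletResponse;
> >>>> import java.io.IOException;
> >>>> import java.nio.ByteBuffer;
> >>>> import java.nio.charset.StandardCharsets;
> >>>>
> >>>> final class CompositeMetricsWriterSketch {
> >>>>
> >>>>     static void writeMetrics(HttpServletResponse response,
> >>>>                              Iterable<String> metricLines) throws IOException {
> >>>>         // Growing a composite buffer appends a new component instead
> >>>>         // of reallocating and copying everything written so far.
> >>>>         CompositeByteBuf buf =
> >>>>                 ByteBufAllocator.DEFAULT.compositeDirectBuffer(1024);
> >>>>         try {
> >>>>             for (String line : metricLines) {
> >>>>                 buf.writeBytes(line.getBytes(StandardCharsets.UTF_8));
> >>>>             }
> >>>>             ServletOutputStream stream = response.getOutputStream();
> >>>>             if (stream instanceof HttpOutput) {
> >>>>                 HttpOutput out = (HttpOutput) stream;
> >>>>                 // nioBuffers() wraps the existing components, and
> >>>>                 // HttpOutput#write(ByteBuffer) hands them to Jetty
> >>>>                 // without copying into another heap buffer.
> >>>>                 for (ByteBuffer nio : buf.nioBuffers()) {
> >>>>                     out.write(nio);
> >>>>                 }
> >>>>                 out.flush();
> >>>>             }
> >>>>         } finally {
> >>>>             buf.release();
> >>>>         }
> >>>>     }
> >>>> }
> >>>> ```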
> >>>
> >>> The JDK on my machine is JDK 15; I just noticed that in JDK 8,
> >>> ByteBuffer cannot be extended and implemented. So, if it is allowed, I
> >>> will write the metrics data to temp files and send them to the client
> >>> via Jetty's send_file. That should turn out to perform better than
> >>> `CompositeByteBuf`, with lower CPU usage at the cost of blocking I/O
> >>> (the /metrics endpoint will be a bit slower, but I believe it is worth
> >>> it).
> >>> If it is not allowed, that is fine too; the composite buffer approach
> >>> still performs better than `ByteBufAllocator.DEFAULT.heapBuffer()` (see
> >>> the first image in the original mail).
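> >>>
> >>> A rough sketch of the temp-file variant, assuming Jetty's
> >>> `HttpOutput#sendContent` is what the send_file path maps to (the class
> >>> and method names below are illustrative):
> >>>
> >>> ```java
> >>> import org.eclipse.jetty.server.HttpOutput;
> >>>
> >>> import javax.servlet.http.HttpServletResponse;
> >>> import java.io.IOException;
> >>> import java.io.Writer;
> >>> import java.nio.channels.FileChannel;
> >>> import java.nio.charset.StandardCharsets;
> >>> import java.nio.file.Files;
> >>> import java.nio.file.Path;
> >>> import java.nio.file.StandardOpenOption;
> >>>
> >>> final class TempFileMetricsWriterSketch {
> >>>
> >>>     static void writeMetrics(HttpServletResponse response,
> >>>                              Iterable<String> metricLines) throws IOException {
> >>>         // Stream the generated metrics to a temp file instead of
> >>>         // holding the whole payload in heap buffers.
> >>>         Path tmp = Files.createTempFile("pulsar-metrics-", ".txt");
> >>>         try {
> >>>             try (Writer writer =
> >>>                          Files.newBufferedWriter(tmp, StandardCharsets.UTF_8)) {
> >>>                 for (String line : metricLines) {
> >>>                     writer.write(line);
> >>>                 }
> >>>             }
> >>>             HttpOutput out = (HttpOutput) response.getOutputStream();
> >>>             // Let Jetty send the file contents; the full payload never
> >>>             // needs to sit in a single large in-memory buffer.
> >>>             try (FileChannel channel =
> >>>                          FileChannel.open(tmp, StandardOpenOption.READ)) {
> >>>                 out.sendContent(channel);
> >>>             }
> >>>         } finally {
> >>>             Files.deleteIfExists(tmp);
> >>>         }
> >>>     }
> >>> }
> >>> ```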
> >>>
> >>> Thanks,
> >>> Tao Jiuming
> >>
> >>
>
>
