Sure, I'll do that.
I have seen that with 5-7 topics of 256 partitions each on a machine
with 4 CPUs and 8 GB RAM, the JVM crashes with an OutOfMemoryError.
This happens on many machines in the cluster. (I'll update with the exact
numbers as well.)
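
For reference, here is roughly how I am creating the topics, using the
standard kafka-topics.sh tool (the topic names, ZooKeeper address, and
replication factor below are placeholders for my setup, not a
recommendation):

  # create 7 topics with 256 partitions each (names and counts illustrative)
  for i in $(seq 1 7); do
    bin/kafka-topics.sh --zookeeper localhost:2181 --create \
      --topic load-test-$i --partitions 256 --replication-factor 1
  done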

I was wondering how I could tune the JVM to its limits for handling such a
scenario.
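
In case it helps, this is a rough sketch of what I plan to try first, via
the KAFKA_HEAP_OPTS and KAFKA_JVM_PERFORMANCE_OPTS environment variables
that the standard startup scripts read (the sizes and paths below are
guesses for an 8 GB box, not recommendations):

  # give the broker a fixed heap, leaving headroom for the OS page cache,
  # which Kafka relies on for log reads and writes
  export KAFKA_HEAP_OPTS="-Xms4g -Xmx4g"
  # dump the heap on OOM and log GC activity so crashes can be analyzed
  export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+HeapDumpOnOutOfMemoryError \
    -XX:HeapDumpPath=/tmp/kafka-oom.hprof \
    -XX:+PrintGCDetails -Xloggc:/tmp/kafka-gc.log"
  bin/kafka-server-start.sh config/server.properties

Note that setting KAFKA_JVM_PERFORMANCE_OPTS replaces the default GC
settings from kafka-run-class.sh, so those would need to be re-added if
wanted.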

Regards,
Prabhjot

On Tue, Jul 28, 2015 at 12:27 PM, Darion Yaphet <darion.yap...@gmail.com>
wrote:

> Kafka stores its metadata in the ZooKeeper cluster, so evaluating "how many
> total topics and partitions can be created in a cluster" may come down to
> testing ZooKeeper's scalability and disk I/O performance.
>
> 2015-07-28 13:51 GMT+08:00 Prabhjot Bharaj <prabhbha...@gmail.com>:
>
> > Hi,
> >
> > I'm looking for a benchmark that explains how many total topics and
> > partitions can be created in a cluster of n nodes, given message sizes
> > varying between x and y bytes, how that limit changes with heap size, and
> > how it affects system performance.
> >
> > e.g. the result should look like: t topics with p partitions each can be
> > supported on a cluster of n nodes with a heap size of h MB, before the
> > cluster sees JVM crashes, high memory usage, system slowdowns, etc.
> >
> > I think such benchmarks must exist so that we can make better decisions
> > on the ops side. If these details don't exist, I'll run this test myself,
> > varying the parameters described above, and I'd be happy to share the
> > numbers with the community.
> >
> > Thanks,
> > prabcs
> >
>
>
>
> --
>
> long is the way and hard that out of Hell leads up to light
>



-- 
---------------------------------------------------------
"There are only 10 types of people in the world: Those who understand
binary, and those who don't"
