Hi,

@Manoj/@Pete: Thanks for the inputs. I am already aware of the parallelism
provided by Kafka. My use case needed a single topic per user, but I came up
with a workaround for that, so the problem is solved.

@Guozhang: I agree, Kafka stores data related to partitions and topics (
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper)
in ZooKeeper. But if it's just a memory constraint, why can't we increase
the memory footprint for ZooKeeper? It's configurable, isn't it? Any
thoughts on this? Please let me know.
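
For reference, this is the kind of change I had in mind (a rough sketch based
on my reading of the stock zkEnv.sh/zkServer.sh scripts, so please correct me
if this isn't the right knob): drop a java.env file next to zoo.cfg and set
the JVM heap there, for example:

    # conf/java.env -- sourced by zkEnv.sh when the ZooKeeper server starts
    # Raise the heap so the metadata for ~10k small topics fits comfortably
    export JVMFLAGS="-Xmx4g -Xms4g"

The -Xmx4g value is only a placeholder; I understand the heap should stay
well under physical RAM so the ZK nodes never swap.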


Thank you,
Siva.

On Thu, Apr 16, 2015 at 9:18 AM, Guozhang Wang <wangg...@gmail.com> wrote:

> Siva,
>
> For Kafka brokers, as long as the total number of partitions is not too
> large, I think it should be fine (we have been hosting 600+ partitions on a
> single node). You do have to pay some attention to your ZK nodes, though,
> since as the number of topics increases, their metadata storage will take
> more and more space.
>
> Guozhang
>
> On Tue, Apr 14, 2015 at 2:37 PM, Sivananda Reddy <sivananda2...@gmail.com>
> wrote:
>
> > Hi,
> >
> >     # I looked at the Kafka documentation and I see that there is no way
> >       a consumer instance can read specific messages from a partition.
> >     # I have a use case where I need to spawn a topic (single partition)
> >       for each user. There would be 10k online users at a time, with very
> >       little data per topic. My question is: what are the limitations of
> >       having multiple topics (with 1 partition each)? I think this
> >       situation would cause heavy memory consumption; are there any other
> >       limitations? Basically, the problem boils down to: what are the
> >       scalability (hardware/software) limitations of having multiple
> >       topics?
> >
> > Thanks a lot in advance.
> >
> > Regards,
> > Siva.
> >
>
>
>
> --
> -- Guozhang
>
