I am trying to consume Kafka messages from all partitions (dynamically
assign all partitions) using SimpleConsumer.
Is there a way to read messages dynamically from *all partitions* using
SimpleConsumer?
Thanks in advance!
Regards,
Rafeeq S
*(“What you do is what matters, not what you think o
Hi,
I have a set up of 3 kafka servers, with a replication factor of 2.
I have only one topic in this setup as of now.
bin/kafka-list-topic.sh --zookeeper
server1:2181,server2:2181,server3:2181 --topic topic1
topic: topic1  partition: 0  leader: 1  replicas: 2,1  isr: 1
topic: topic1
The short answer is no. The SimpleConsumer has to know the partition IDs in
advance to fetch the data. If you want dynamic partition assignment you
need to use the high-level consumer.
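As a rough sketch of what the high-level consumer approach looks like (the class, group id, and ZK string below are hypothetical; the actual connector calls from the 0.8 `kafka.consumer` API are shown in comments since they need the Kafka jars and a running cluster):

```java
import java.util.Properties;

public class HighLevelConsumerSketch {
    // Build the config the 0.8 high-level consumer expects.
    // zookeeper.connect and group.id are the two required settings; the
    // group id is what lets Kafka assign partitions dynamically across
    // all consumers sharing that group.
    static Properties consumerProps(String zkConnect, String groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", zkConnect);   // e.g. "server1:2181,server2:2181"
        props.put("group.id", groupId);              // consumers in one group split the partitions
        props.put("auto.offset.reset", "smallest");  // start from earliest if no committed offset
        return props;
    }

    public static void main(String[] args) {
        Properties props =
            consumerProps("server1:2181,server2:2181,server3:2181", "topic1-group");
        System.out.println(props.getProperty("group.id"));
        // With the kafka jars on the classpath and a running cluster, the
        // connector then handles partition assignment for you:
        //   ConsumerConnector cc =
        //       Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        //   Map<String, Integer> topicCount = Collections.singletonMap("topic1", 1);
        //   List<KafkaStream<byte[], byte[]>> streams =
        //       cc.createMessageStreams(topicCount).get("topic1");
    }
}
```

If a broker or consumer joins or leaves, the group rebalances and partitions are reassigned automatically, which is exactly what the SimpleConsumer cannot do.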
Guozhang
On Thu, Jun 19, 2014 at 1:30 AM, rafeeq s wrote:
> I am trying to consume kafka messages from all partitions(d
Could you check the controller log to see if broker 2 once had a soft
failure and hence its leadership was migrated to other brokers?
On Thu, Jun 19, 2014 at 6:57 AM, Arjun wrote:
> Hi,
>
> I have a set up of 3 kafka servers, with a replication factor of 2.
> I have only one topic in this setu
Hello Kyle,
For your first question, the first option would be preferable: it may use a
little bit more memory and have more ZK writes. In 0.9, though, the offsets
will be stored in the Kafka servers instead of ZK, so you will no longer
bombard ZK.
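For what it's worth, Kafka-based offset storage was also backported as a consumer option before 0.9; a sketch of the relevant consumer config (property names as of the 0.8.2 consumer, so treat this as an assumption if you are on an earlier release):

```properties
# consumer.properties -- move offset commits off ZooKeeper (0.8.2+)
offsets.storage=kafka
# during migration, commit to both stores until all consumers are upgraded
dual.commit.enabled=true
```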
For the third question, our designed usage pattern for
There were actually several patches against trunk since 0.8.1.1 that may
impact latency, however, especially when using acks=-1. So the results in
the blog may be a bit better than what you would see in 0.8.1.1.
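For reference, the ack level under discussion is a producer-side setting; a sketch of the 0.8 sync-producer config (values here are illustrative):

```properties
# producer.properties (0.8 producer)
# -1 waits for all in-sync replicas to acknowledge: highest durability,
# and the setting most affected by the latency patches mentioned above
request.required.acks=-1
request.timeout.ms=10000
```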
-Jay
On Wed, Jun 18, 2014 at 7:58 PM, Supun Kamburugamuva
wrote:
> My machine con
Hi All,
Thanks for your valuable comments.
Sure, I will give a try with Samza and Data Torrent.
Meanwhile, I am sharing a screenshot of the Storm UI. Please have a look at it.
The Kafka producer is able to push 35 million messages to the broker in two
hours, at a rate of approx. 4k messages per second. On other s
To clarify my last email: by 10 nodes, I mean 10 Kafka partitions
distributed across 10 different brokers. In my test, DataTorrent can scale
linearly with Kafka partitions without any problem. Whatever you produce to
Kafka, it can easily ingest into your application. And I'm quite sure it can
hand
Hi
The controller log doesn't say much; it just says the following. The
only thing I got from the logs is that at the start things were fine. There
are some partitions which have broker 3 as leader, but after that there is
no log and nothing to see.
(sorry for the long log, I don't know what may
I think I found something related to this. I found this in some other
node's controller log. Am I correct in suspecting this as the issue? What
might have gone wrong? From the log it seems the third node had just been
added and some error occurred while handling the broker change.
There are no errors in
It seems the third broker went down at around 10:30:57, then came back up at
12:27:00,351, but the new controller tried to update its status and
failed. I suspect it is hitting this issue:
https://issues.apache.org/jira/browse/KAFKA-1096
Guozhang
On Thu, Jun 19, 2014 at 9:23 PM, Arjun wrote:
> I
One small doubt on this: if we had been monitoring the "number of under-
replicated partitions" and "ISR shrinks and expansions", could we have
found this error earlier?
Can you please suggest what I should be monitoring so that I can catch
this earlier?
Thanks
Arjun Narasimha K
On Friday 20 June 2
The number of URPs is a good metric to monitor; if it becomes non-zero then
it usually indicates a broker failure (either a soft failure or a total
crash).
Guozhang
On Thu, Jun 19, 2014 at 10:17 PM, Arjun wrote:
> One small doubt on this. If we keep on monitoring the "number of under
> replicated
Hi Guozhang,
I am using 0.8.0 release
Regards,
Abhinav
On Thu, Jun 19, 2014 at 2:06 AM, Guozhang Wang wrote:
> Hello Abhinav,
>
> Which Kafka version are you using?
>
> Guozhang
>
>
> On Wed, Jun 18, 2014 at 1:40 AM, Abhinav Anand wrote:
>
> > Hi Guys,
> > We have a 6 node cluster. Due to ne
Thanks for the advice, Guozhang.
Jagbir: I'll report back on my progress. I intend to have quite a few
threads across many machines. We'll see how well it performs with a whole
high-level consumer per thread.
On Thu, Jun 19, 2014 at 9:30 AM, Guozhang Wang wrote:
> Hello Kyle,
>
> For your firs