Hi Jiangjie,
Thanks for the reply.
One small concern: kafka-0.8.1 builds with scala-2.9.1,
while kafka-0.8.2 builds with scala-2.10.
Anyway, I will test it soon to confirm. Thanks.
Best Regards,
Zhuo Liu
Ph.D. Student, CSSE department
Auburn University, AL 36849
http://www.auburn.edu/~
Hi Ho,
I'm trying to increase my replication factor from 1 to 2.
I used the tool
kafka-reassign-partitions.sh
I see the replication factor change, but my replicas are not syncing up. There
is pretty much no data in this topic. Will this replication get triggered
at some point?
root@b3b4b5d71b48:/
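For reference, a reassignment plan that raises the replication factor lists the desired replica set for each partition explicitly; a minimal sketch, assuming a two-broker cluster with ids 1 and 2 and a hypothetical topic name:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 1]}
  ]
}
```

Passing a file like this to kafka-reassign-partitions.sh with --execute should cause the new replicas to be created and to sync from the leader (topic name, broker ids, and partition count above are placeholders).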
It should work, but usually we prefer the server version to be no lower
than the client version.
On 5/27/15, 3:12 PM, "Zhuo Liu" wrote:
>Dear all,
>
>
>In 0.8.2.1 Kafka, there is new Producer API (KafkaProducer etc.).
>
>My question is: will 0.8.2.1 new Producer API
>
>be able to successfully talk
Can you turn on TRACE level logging for kafka-request.log and see whether the
broker received the producer request or not?
You can go to KAFKA_FOLDER/config/log4j.properties and change
log4j.logger.kafka.network.RequestChannels to TRACE.
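For reference, the relevant lines in log4j.properties would look roughly like the sketch below. Note that in the stock 0.8.x file the logger is named in the Scala object form with a trailing `$`; the exact name may vary by version, so check your own file:

```properties
# KAFKA_FOLDER/config/log4j.properties (names per stock 0.8.x; may vary)
log4j.logger.kafka.network.RequestChannel$=TRACE, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false
```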
Jiangjie (Becket) Qin
On 5/27/15, 12:12 PM, "Charlie Mason" wrote:
This should be just a message fetch failure. The socket was disconnected
while the broker was writing to it. There should not be any data loss.
Jiangjie (Becket) Qin
On 5/27/15, 11:00 AM, "Andrey Yegorov" wrote:
>I've noticed a few exceptions in the logs like the one below, does it
>indicate data loss?
Dear all,
In 0.8.2.1 Kafka, there is new Producer API (KafkaProducer etc.).
My question is: will 0.8.2.1 new Producer API
be able to successfully talk to a Kafka server cluster running
with 0.8.1.1 Kafka?
Thanks very much!
Best Regards,
Zhuo Liu
Ph.D. Student, CSSE department
Auburn Univer
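For context, the new producer API in 0.8.2 is used roughly as in the sketch below (broker address and topic name are placeholders; this is an illustrative fragment, not a compatibility test against an 0.8.1.1 broker):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at the 0.8.1.1 cluster to test.
        props.put("bootstrap.servers", "broker1:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        // send() is asynchronous; close() flushes outstanding records.
        producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        producer.close();
    }
}
```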
I have a similar issue, let me know how it goes. :)
-Original Message-
From: Andrew Otto [mailto:ao...@wikimedia.org]
Sent: Wednesday, May 27, 2015 3:12 PM
To: users@kafka.apache.org
Subject: Kafka partitions unbalanced
Hi all,
I’ve recently noticed that our broker log.dirs are using up
In Kafka 0.8.3, what is the equivalent of PartitionOffsetRequestInfo? I’m
looking through the old SimpleConsumer example
(https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+simpleconsumer+example).
Thanks.
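For reference, in the 0.8 SimpleConsumer example linked above, PartitionOffsetRequestInfo is used to build an OffsetRequest and fetch the earliest or latest offset of a partition, roughly like this (fragment from the wiki example, assuming the old kafka.javaapi classes are on the classpath):

```java
import java.util.Collections;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

// Fetch one offset for a partition. whichTime is
// kafka.api.OffsetRequest.LatestTime() or EarliestTime().
public static long getLastOffset(SimpleConsumer consumer, String topic,
                                 int partition, long whichTime,
                                 String clientName) {
    TopicAndPartition tp = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
        Collections.singletonMap(tp, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
        requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);
    return response.offsets(topic, partition)[0];
}
```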
Hi all,
I’ve recently noticed that our broker log.dirs are using up different amounts
of storage. We use JBOD for our brokers, with 12 log.dirs, 1 on each disk.
One of our topics is larger than the others, and has 12 partitions.
Replication factor is 3, and we have 4 brokers. Each broker th
On May 27, 2015, at 11:23 AM, Joel Koshy wrote:
> That's right - it should not help significantly assuming even
> distribution of leaders and even distribution of partition volume
> (average inbound messages/sec).
>
Aditya, Joel,
Oh, right, that makes sense. If I had a 10 partition topic acro
Hi All,
So I have done some more tests and found something I really don't
understand.
I found a simple example of the Kafka Java producer so I ran that pointing
at the same topic as my last test. That failed when run from my local
machine. I uploaded it to the VM where Kafka is installed and it w
> > Out of curiosity - what's the typical latency (distribution) you see
> between zones?
>
> Unfortunately I don't have any good numbers on that. Since we're publishing
> both in the same AZ and to other AZs the latency metrics reflect both. If I
> figure out a good way to measure this I will rep
That's right - it should not help significantly assuming even
distribution of leaders and even distribution of partition volume
(average inbound messages/sec).
Theo's use-case is a bit different though in which you want to avoid
cross-zone consumer reads especially if you have a high fan-out in
nu
I've noticed a few exceptions in the logs like the one below, does it
indicate data loss? should I worry about this?
What is the possible reason for this to happen?
I am using kafka 0.8.1.1
ERROR Closing socket for /xx.xxx.xxx.xxx because of error
(kafka.network.Processor)
kafka.common.KafkaExcep
Is that necessarily the case? On a cluster hosting partitions, assuming the
leaders are evenly distributed, every node should receive a roughly equal share
of the traffic. It does help a lot when the consumer throughput of a single
partition exceeds the capacity of a single leader but at that po
On May 26, 2015, at 1:44 PM, Joel Koshy wrote:
>> Apologies if this question has been asked before. If I understand things
>> correctly a client can only fetch from the leader of a partition, not from
>> an (in-sync) replica. I have a use case where it would be very beneficial
>> if it were poss
Thanks for the response Ewen!
On Tue, May 26, 2015 at 10:52 PM, Ewen Cheslack-Postava
wrote:
> It's not being switched in this case because the broker hasn't failed. It
> can still connect to all the other brokers and zookeeper. The only failure
> is of the link between a client and the broker.
Hi,
We have the following setup -
Number of brokers: 3
Number of zookeepers: 3
Default replication factor: 3
Offsets Storage: kafka
When one of our brokers ran out of disk space, we started seeing a lot of
errors in the broker logs at an alarming rate. This caused the other
brokers also to run ou
Thank you for your response Joel,
> Can you file a jira for this?
I've created https://issues.apache.org/jira/browse/KAFKA-2225
> Out of curiosity - what's the typical latency (distribution) you see
between zones?
Unfortunately I don't have any good numbers on that. Since we're publishing
both
I guess adding a new component will increase the complexity of the system
structure. And if the new component consists of one or a few nodes, it may
become the bottleneck of the whole system; if it consists of many nodes,
it will make the system even more complex.
Although every solution has its
It could be a separate server component; it does not have to be
monolithic or coupled with the broker.
Such a solution would have benefits: a single API and pluggable implementations.
On Wed, May 27, 2015 at 8:57 AM, Shady Xu wrote:
> Storing and managing offsets by broker will leave high pressure on the
> broker