Yes, your understanding is correct. The reason we have to recompress the
messages is to assign a unique offset to messages inside a compressed
message. Some preliminary load testing shows a 30% increase in CPU usage, but
that is using GZIP, which is known to be CPU intensive. By this week, we will
know the
Using the Offsets API, you can get the latest offset by setting time to
-1. Then you subtract 1
There is no guarantee that 10k prior messages exist of course, so you'd
need to handle that case.
-David
On 3/19/13 11:04 AM, James Englert wrote:
Hi,
I'm using Kafka 0.8. I would like to
I'm still a bit lost. Where is the offsets API? I.e. which class?
On Tue, Mar 19, 2013 at 11:16 AM, David Arthur mum...@gmail.com wrote:
Using the Offsets API, you can get the latest offset by setting time to
-1. Then you subtract 1
There is no guarantee that 10k prior messages exist
This API is exposed through the SimpleConsumer scala class. See
https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L60
You will need to set earliestOrLatest to -1 for the latest offset.
There is also a command line tool
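[Editor's note: the rewind recipe above — fetch the latest offset with time = -1, subtract N, and handle the case where fewer than N prior messages exist — can be sketched numerically. This is an illustrative Python sketch, not Kafka client code; in 0.8 the `latest` and `earliest` values would come from SimpleConsumer's Offsets API (time = -1 for latest, -2 for earliest), and the function name `rewind_offset` is hypothetical.]

```python
def rewind_offset(latest, earliest, n):
    """Return the offset n messages before `latest`, clamped to `earliest`.

    `latest` and `earliest` stand in for values returned by Kafka 0.8's
    Offsets API (time = -1 and time = -2 respectively); they are plain
    integers here so the arithmetic can be shown in isolation.
    """
    target = latest - n
    # Fewer than n prior messages may still be retained on the broker,
    # so never rewind past the earliest available offset.
    return max(target, earliest)

# Rewinding 10,000 messages when only 2,500 are still retained:
print(rewind_offset(50000, 47500, 10000))  # clamped to 47500
print(rewind_offset(50000, 0, 10000))      # 40000
```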
Connection reset by peer means the other side of the socket has closed the
connection for some reason. Could you provide the error/exception from both
the producer and the broker when a produce request fails?
Thanks,
Jun
On Tue, Mar 19, 2013 at 1:34 AM, Yonghui Zhao zhaoyong...@gmail.com wrote:
Hi, David,
At LinkedIn, committers are too busy to write a Kafka book right now. I
think this is a good idea to pursue. So, if you want to do it, we'd be
happy to help. The only request I have is that, while writing the
book, you use this opportunity to also help us
I thought that since the offsets in 0.8 are logical (numeric) and not byte
offsets like in 0.7.x, you could simply take, say, the current offset - 1.
On Tue, Mar 19, 2013 at 12:16 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
Jim,
You can leverage the ExportZkOffsets/ImportZkOffsets tools to do this.
I guess I missed a step between 4 and 5 -
4. Replace the exported offsets with these offsets
*Use ImportZkOffsets to import the offsets from the modified export file.*
5. Restart the consumer.
Thanks,
Neha
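[Editor's note: step 4 above — "replace the exported offsets with these offsets" — is a text edit of the file ExportZkOffsets produced, before feeding it back to ImportZkOffsets. A minimal Python sketch, assuming (unverified) that the export format is one `zk_path:offset` entry per line, e.g. `/consumers/group1/offsets/mytopic/0:12345`:]

```python
def rewrite_offsets(export_lines, new_offset):
    """Set every offset in an ExportZkOffsets-style dump to `new_offset`.

    Assumes one 'zk_path:offset' entry per line. Splitting on the LAST
    ':' keeps any colons that might appear earlier in the path.
    """
    out = []
    for line in export_lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        path, _, _old = line.rpartition(":")
        out.append("%s:%d" % (path, new_offset))
    return out

dump = ["/consumers/group1/offsets/mytopic/0:12345",
        "/consumers/group1/offsets/mytopic/1:9876"]
print(rewrite_offsets(dump, 40000))
```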
On Tue, Mar 19, 2013 at 11:00 AM, S Ahmed sahmed1...@gmail.com wrote:
I thought since
I guess the challenge would be that Kafka is still at version 0.8, so by
the time your book comes out it might be at version 1.0, i.e., it's a moving
target.
Sounds like a great idea though!
On Tue, Mar 19, 2013 at 12:20 PM, Jun Rao jun...@gmail.com wrote:
Hi, David,
At LinkedIn, committers
Hi Jun,
I've been thinking for a while about how to contribute to the project and
thought that working on some documentation for the website might be a good
way. Do you have an outline of what you'd like the site to look like, from
which I (AND OTHERS, hint, hint) could pick a topic and write the article
I need to upgrade some kafka broker servers. So I need to seamlessly
migrate traffic from the old brokers to the new ones, without losing data,
and without stopping producers. I can temporarily stop consumers, etc.
Is there a strategy for this?
Also, because of the way we are embedding kafka
Hello, I am a complete newbie to Kafka and am trying to evaluate its usefulness
for our particular application. I plan to have a lot of consumers in a single
group, and it seems like the best way to load balance messages across consumers
without knowing ahead of time exactly how many consumers
Can you do the following -
1. Start a mirror Kafka cluster with the new version on a separate
ZooKeeper namespace. Configure this to mirror data from the existing Kafka
cluster.
2. Move your consumers to pull data from the mirror.
3. For each producer, one at a time, change the ZooKeeper namespace
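[Editor's note: the mirroring in step 1 is typically done with the MirrorMaker tool shipped with Kafka (`kafka.tools.MirrorMaker`), which consumes from the old cluster and produces to the new one. A rough sketch of its two config files follows; the host names are placeholders, and the exact property names should be verified against your 0.8 build.]

```properties
# consumer.config -- read from the OLD cluster
zookeeper.connect=old-zk:2181/kafka
group.id=mirror-group
auto.offset.reset=smallest

# producer.config -- write to the NEW cluster
metadata.broker.list=new-broker1:9092,new-broker2:9092
```

It would then be launched along the lines of `bin/kafka-run-class.sh kafka.tools.MirrorMaker --consumer.config consumer.config --producer.config producer.config --whitelist '.*'` (invocation flags also worth double-checking against your version).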
Ian,
1500 partitions for a topic should be fine, assuming that you don't have
too many topics. In general, the more partitions, the more open file
handles are required on the broker and the more space is required in ZooKeeper.
Thanks,
Jun
On Tue, Mar 19, 2013 at 2:53 PM, Ian Friedman i...@flurry.com