Re: changing broker hosts with 0.7.2

2013-03-19 Thread Jason Rosenberg
I can do most of that, I presume. It looks like, to set up a separate namespace for ZK, I can add /path at the end of each node:port in my zkconnect string, e.g.: zkhost1:123/newnamespace,zkhost2:123/newnamespace right? For mirroring, there's some vague documentation here: https://cwiki.apache.org
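One detail worth double-checking against the ZooKeeper docs: in ZooKeeper's connect-string syntax the chroot path is conventionally written once, after the final host:port pair, and applies to every host in the list, rather than being repeated per host. A sketch of what the 0.7 property might look like under that assumption:

```properties
# Hedged sketch: chroot path appears once at the end of the host list
# and applies to all listed ZooKeeper servers.
zk.connect=zkhost1:123,zkhost2:123/newnamespace
```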

Re: Large number of Partitions

2013-03-19 Thread Jun Rao
Ian, 1500 partitions for a topic should be fine, assuming that you don't have too many topics. In general, the more partitions, the more open file handles are required on the broker and the more space is required in ZK. Thanks, Jun On Tue, Mar 19, 2013 at 2:53 PM, Ian Friedman wrote: > Hello
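To make the file-handle point above concrete, here is a back-of-the-envelope sketch. The model (handles growing roughly with topics × partitions × retained log segments) and the numbers plugged in are illustrative assumptions, not Kafka internals:

```python
# Rough illustration: each partition keeps its retained log segment
# files open on the broker, so open file handles grow roughly with
# topics * partitions * segments. Numbers below are placeholders.

def estimate_open_handles(topics: int, partitions_per_topic: int,
                          segments_per_partition: int) -> int:
    """Back-of-the-envelope bound on broker file handles for partition logs."""
    return topics * partitions_per_topic * segments_per_partition

# e.g. one topic with 1500 partitions, 2 retained segments each
print(estimate_open_handles(1, 1500, 2))  # -> 3000
```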

Re: Replicas for partition are dead

2013-03-19 Thread Jun Rao
Did the broker get restarted with the same broker id? Thanks, Jun On Tue, Mar 19, 2013 at 1:34 PM, Jason Huang wrote: > Hello, > > My kafka (0.8) server went down today for unknown reason and when I > restarted both zookeeper and kafka server I got the following error at > the kafka server log

Re: changing broker hosts with 0.7.2

2013-03-19 Thread Neha Narkhede
Can you do the following - 1. Start a mirror Kafka cluster with the new version on a separate zookeeper namespace. Configure this to mirror data from the existing kafka cluster. 2. Move your consumers to pull data from the mirror 3. For each producer, one at a time, change the zookeeper namespace
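Step 1 above could look roughly like the following 0.7-style configuration. The property names are best-effort recollections of the 0.7 embedded-mirroring setup and should be verified against the 0.7 documentation before use:

```properties
# Hedged sketch of a 0.7-style mirror setup; verify key names against
# the 0.7 docs. The new cluster's broker registers under its own
# ZooKeeper namespace:
zk.connect=zkhost1:123,zkhost2:123/newnamespace

# Embedded mirror consumer pulling from the existing cluster:
# (consumer config, pointing at the OLD cluster's ZK namespace)
# zk.connect=zkhost1:123,zkhost2:123
# groupid=kafka-mirror
# mirror.topics.whitelist=*
```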

Large number of Partitions

2013-03-19 Thread Ian Friedman
Hello, I am a complete newbie to Kafka and am trying to evaluate its usefulness for our particular application. I plan to have a lot of consumers in a single group, and it seems like the best way to load balance messages across consumers without knowing ahead of time exactly how many consumers

Re: Anyone working on a Kafka book?

2013-03-19 Thread Neha Narkhede
That's a great idea, Chris! How about picking the quickstart document? That is the most important information that users moving to 0.8 will need. Thanks, Neha On Tue, Mar 19, 2013 at 1:31 PM, Chris Curtin wrote: > Hi Jun, > > I've been thinking for a while about how to contribute to the project

changing broker hosts with 0.7.2

2013-03-19 Thread Jason Rosenberg
I need to upgrade some kafka broker servers. So I need to seamlessly migrate traffic from the old brokers to the new ones, without losing data, and without stopping producers. I can temporarily stop consumers, etc. Is there a strategy for this? Also, because of the way we are embedding kafka in

Replicas for partition are dead

2013-03-19 Thread Jason Huang
Hello, My kafka (0.8) server went down today for unknown reason and when I restarted both zookeeper and kafka server I got the following error at the kafka server log: [2013-03-19 13:39:16,131] INFO [Partition state machine on Controller 1]: Invoking state change to OnlinePartition for partitions

Re: Anyone working on a Kafka book?

2013-03-19 Thread Chris Curtin
Hi Jun, I've been thinking for a while about how to contribute to the project and thought that working on some documentation for the website might be a good way. Do you have an outline of what you'd like the site to look like that I (and others, hint, hint) could pick a topic, write the article and

Re: Anyone working on a Kafka book?

2013-03-19 Thread S Ahmed
I guess the challenge would be that Kafka is still at version 0.8, so by the time your book comes out they might be at version 1.0, i.e., it's a moving target. Sounds like a great idea though! On Tue, Mar 19, 2013 at 12:20 PM, Jun Rao wrote: > Hi, David, > > At LinkedIn, committers are too busy to

Re: Consume from X messages ago

2013-03-19 Thread Neha Narkhede
I guess I missed a step between 4 and 5 - 4. Replace the exported offsets with these offsets *Use ImportZkOffsets to import the offsets from the modified export file.* 5. Restart the consumer. Thanks, Neha On Tue, Mar 19, 2013 at 11:00 AM, S Ahmed wrote: > I thought since the offsets in .8 ar
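The corrected step 4 above (editing the exported file before re-importing) can be sketched as a small script. The file format assumed here, one `/consumers/<group>/offsets/<topic>/<partition>:<offset>` per line, is an assumption about ExportZkOffsets output and should be checked against the actual tool:

```python
# Sketch of step 4: rewrite offsets in an ExportZkOffsets dump.
# Assumed format (one "zk-path:offset" per line) -- verify against
# the real tool output before relying on this.

def rewrite_offsets(exported_text: str, new_offsets: dict) -> str:
    """Replace the offset for each ZK path present in new_offsets."""
    lines = []
    for line in exported_text.splitlines():
        path, _, offset = line.rpartition(":")
        lines.append(f"{path}:{new_offsets.get(path, offset)}")
    return "\n".join(lines)

dump = "/consumers/g1/offsets/t1/0:4000\n/consumers/g1/offsets/t1/1:5200"
print(rewrite_offsets(dump, {"/consumers/g1/offsets/t1/0": 1234}))
# -> /consumers/g1/offsets/t1/0:1234
#    /consumers/g1/offsets/t1/1:5200
```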

Re: Consume from X messages ago

2013-03-19 Thread S Ahmed
I thought since the offsets in 0.8 are numeric and not byte offsets like in 0.7x, you can simply take, say, the current offset - 1. On Tue, Mar 19, 2013 at 12:16 PM, Neha Narkhede wrote: > Jim, > > You can leverage the ExportZkOffsets/ImportZkOffsets tools to do this. > ExportZkOffsets exp

Re: Anyone working on a Kafka book?

2013-03-19 Thread Jun Rao
Hi, David, At LinkedIn, committers are too busy to write a Kafka book right now. I think this is a good idea to pursue. So, if you want to do it, we'd be happy to help. The only request that I have for you is that, while writing the book, it would be good if you can use this opportunity to also help us

Re: Consume from X messages ago

2013-03-19 Thread Neha Narkhede
Jim, You can leverage the ExportZkOffsets/ImportZkOffsets tools to do this. ExportZkOffsets exports the consumer offsets for your group to a file in a certain format. You can then place the desired offset per partition you want to reset your consumer to in the exported file. 1. Shutdown the consu

Re: Connection reset by peer

2013-03-19 Thread Jun Rao
"Connection reset by peer" means the other side of the socket has closed the connection for some reason. Could you provide the error/exception in both the producer and the broker when a produce request fails? Thanks, Jun On Tue, Mar 19, 2013 at 1:34 AM, Yonghui Zhao wrote: > Connection reset exc

Re: Kafka throw InvalidMessageException and lost data

2013-03-19 Thread Jun Rao
It basically means that the broker is expecting to read a certain number of bytes in a buffer received from the socket, but there are fewer bytes than expected in the buffer. Possible causes are (1) a bug in Kafka request serialization/deserialization logic; (2) corruption in the underlying system such a
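For background on why corruption is detectable at all: each Kafka message carries a CRC32 checksum that is recomputed on read, and a mismatch is one way an InvalidMessageException can arise. A minimal sketch of that kind of validation (this is the general idea, not Kafka's exact wire format):

```python
import zlib

# Minimal sketch of per-message checksum validation of the kind Kafka
# performs; NOT Kafka's exact wire format, just the principle.

def make_message(payload: bytes) -> tuple:
    """Pair a payload with its CRC32, as stored alongside the message."""
    return (zlib.crc32(payload) & 0xFFFFFFFF, payload)

def is_valid(message: tuple) -> bool:
    """Recompute the CRC on read and compare with the stored value."""
    stored_crc, payload = message
    return stored_crc == (zlib.crc32(payload) & 0xFFFFFFFF)

msg = make_message(b"hello")
print(is_valid(msg))                     # True
print(is_valid((msg[0], b"corrupted")))  # False
```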

Re: Consume from X messages ago

2013-03-19 Thread David Arthur
This API is exposed through the SimpleConsumer scala class. See https://github.com/apache/kafka/blob/trunk/core/src/main/scala/kafka/consumer/SimpleConsumer.scala#L60 You will need to set earliestOrLatest to -1 for the latest offset. There is also a command line tool https://github.com/apache/

Re: Consume from X messages ago

2013-03-19 Thread James Englert
I'm still a bit lost. Where is the offsets API? I.e. which class? On Tue, Mar 19, 2013 at 11:16 AM, David Arthur wrote: > Using the Offsets API, you can get the latest offset by setting time to > -1. Then you subtract 1 > > There is no guarantee that 10k prior messages exist of course, so

Re: Consume from X messages ago

2013-03-19 Thread David Arthur
Using the Offsets API, you can get the latest offset by setting time to -1. Then you subtract 1 There is no guarantee that 10k prior messages exist of course, so you'd need to handle that case. -David On 3/19/13 11:04 AM, James Englert wrote: Hi, I'm using Kafka 0.8. I would like to s
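The subtract-and-clamp logic described above can be sketched as follows. The earliest/latest values would come from the Offsets API (time = -2 for earliest, -1 for latest in 0.8); the numbers here are placeholders:

```python
# Sketch of "start 10,000 messages back": clamp against the earliest
# available offset, since the 10k prior messages may already have been
# deleted by retention. earliest/latest would come from the Offsets API.

def start_offset(earliest: int, latest: int, lookback: int = 10_000) -> int:
    """Offset to begin consuming from, at most `lookback` messages back."""
    return max(earliest, latest - lookback)

print(start_offset(earliest=0, latest=250_000))        # -> 240000
print(start_offset(earliest=248_000, latest=250_000))  # -> 248000
```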

Consume from X messages ago

2013-03-19 Thread James Englert
Hi, I'm using Kafka 0.8. I would like to setup a consumer to fetch the last 10,000 messages and then start consuming messages. I see the configuration autooffset.reset, but that isn't quite what I want. I want only the last 10,000 messages. Is there a good way to achieve this in 0.8, besides j

Anyone working on a Kafka book?

2013-03-19 Thread David Arthur
I was approached by a publisher the other day to do a book on Kafka - something I've actually thought about pursuing. Before I say yes (or consider saying yes), I wanted to make sure no one else was working on a book. No sense in producing competing texts at this point. So, anyone working on a

Re: java.io.IOException: Broken pipe

2013-03-19 Thread Yonghui Zhao
Hi Neha, How can I enable all kafka consumer log in senseidb? Btw: I am using kafka 0.7.2 java client. 2013/3/19 Neha Narkhede > The logs show that senseidb is prematurely closing the socket connection to > the Kafka broker. I would enable at least INFO logging for Kafka in > Senseidb to see wh

Re: Uncompress / re-compress of messages in the message server

2013-03-19 Thread Ross Black
Hi Neha, Thanks for the info. I will be most interested to see what your testing shows. Thanks, Ross On 19 March 2013 17:10, Neha Narkhede wrote: > Yes, your understanding is correct. The reason we have to recompress the > messages is to assign a unique offset to messages inside a compress

Re: Connection reset by peer

2013-03-19 Thread Yonghui Zhao
Connection reset exception reproed. [2013-03-19 16:30:45,814] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor) [2013-03-19 16:30:55,253] ERROR Closing socket for /127.0.0.1 because of error (kafka.network.Processor) java.io.IOException: Connection reset by peer at sun.n

Re: Connection reset by peer

2013-03-19 Thread Yonghui Zhao
Thanks Jun. Now I use onebox to test kafka, kafka server ip on zk is 127.0.0.1, network is not affected by external factors. Reset connection is not reproed, but I still find Broken pipe exceptions and a few zk exceptions. [2013-03-19 15:23:28,660] INFO Closed socket connection for client /127.