Re: [VOTE] 0.8.2.1 Candidate 1

2015-02-18 Thread Connie Yang
+1 On Feb 18, 2015 7:23 PM, "Matt Narrell" wrote: > +1 > > > On Feb 18, 2015, at 7:56 PM, Jun Rao wrote: > > > > This is the first candidate for release of Apache Kafka 0.8.2.1. This > > only fixes one critical issue (KAFKA-1952) in 0.8.2.0. > > > > Release Notes for the 0.8.2.1 release > > > ht

Re: [VOTE] 0.8.2.1 Candidate 1

2015-02-18 Thread Matt Narrell
+1 > On Feb 18, 2015, at 7:56 PM, Jun Rao wrote: > > This is the first candidate for release of Apache Kafka 0.8.2.1. This > only fixes one critical issue (KAFKA-1952) in 0.8.2.0. > > Release Notes for the 0.8.2.1 release > https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/RELEASE_NOTE

Re: Broker w/ high memory due to index file sizes

2015-02-18 Thread Jay Kreps
40G is really huge, generally you would want more like 4G. Are you sure you need that? Not sure what you mean by lsof and index files being too large, but the index files are memory mapped so they should be able to grow arbitrarily large and their memory usage is not counted in the java heap (in fa
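
A minimal sketch of sizing the heap down along those lines, assuming the suggested 4G figure: kafka-server-start.sh honors the KAFKA_HEAP_OPTS environment variable, and the memory-mapped index files live outside the heap in the OS page cache anyway.

    KAFKA_HEAP_OPTS="-Xms4g -Xmx4g" bin/kafka-server-start.sh config/server.properties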

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread Jay Kreps
Yeah I was thinking either along the lines Joel was suggesting or else adding a logEndOffset(TopicPartition) method or something like that. As Joel says the consumer actually has this information internally (we return it with the fetch request) but doesn't expose it. -Jay On Wed, Feb 18, 2015 at
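
Until such a method exists, a minimal sketch of fetching the log end offset with the 0.8.x SimpleConsumer and an OffsetRequest for LatestTime; the host, port and client id are placeholders, not values from this thread, and the request must go to the partition's leader.

    import java.util.Collections;
    import java.util.Map;
    import kafka.api.PartitionOffsetRequestInfo;
    import kafka.common.TopicAndPartition;
    import kafka.javaapi.OffsetResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class LogEndOffsetSketch {
        // Returns the current log end offset of one partition by asking its leader
        // for the single latest offset (maxNumOffsets = 1).
        public static long logEndOffset(String host, int port, String topic, int partition) {
            SimpleConsumer consumer = new SimpleConsumer(host, port, 100000, 64 * 1024, "snapshot-client");
            try {
                TopicAndPartition tp = new TopicAndPartition(topic, partition);
                Map<TopicAndPartition, PartitionOffsetRequestInfo> info = Collections.singletonMap(
                    tp, new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
                OffsetResponse response = consumer.getOffsetsBefore(new kafka.javaapi.OffsetRequest(
                    info, kafka.api.OffsetRequest.CurrentVersion(), "snapshot-client"));
                return response.offsets(topic, partition)[0];
            } finally {
                consumer.close();
            }
        }
    }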

[VOTE] 0.8.2.1 Candidate 1

2015-02-18 Thread Jun Rao
This is the first candidate for release of Apache Kafka 0.8.2.1. This only fixes one critical issue (KAFKA-1952) in 0.8.2.0. Release Notes for the 0.8.2.1 release https://people.apache.org/~junrao/kafka-0.8.2.1-candidate1/RELEASE_NOTES.html *** Please download, test and vote by Saturday, Feb 21,

Re: Kafka question about starting Kafka with the same broker id

2015-02-18 Thread Harsha
Deepak, You should be getting the following error, and the Kafka server will shut itself down if it sees the same broker id already registered in ZooKeeper: " Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) java.lang.RuntimeException: A broker is already registered on the p
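
For context, the broker id comes from server.properties and is registered as an ephemeral node under /brokers/ids/<id> in ZooKeeper, which is why a second broker with the same id refuses to start. A minimal sketch of the fix, with an illustrative value:

    # config/server.properties on the second broker (the value 2 is just an example; ids must be unique)
    broker.id=2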

Re: Kafka question about starting Kafka with the same broker id

2015-02-18 Thread Steve Morin
Why would you ever want to do that? > On Feb 18, 2015, at 15:16, Deepak Dhakal wrote: > > Hi, > > My name is Deepak and I work for salesforce. We are using kafka 8.11 and > have a question about starting Kafka with the same broker id. > > Steps: > > Start a Kafka broker with broker id = 1 -> it s

Kafka question about starting Kafka with the same broker id

2015-02-18 Thread Deepak Dhakal
Hi, My name is Deepak and I work for salesforce. We are using kafka 8.11 and have a question about starting Kafka with the same broker id. Steps: Start a Kafka broker with broker id = 1 -> it starts fine with external ZK Start another Kafka broker with the same broker id = 1 .. it does not start the kafka which

Re: Hold off on 0.8.2 upgrades

2015-02-18 Thread Jun Rao
Yes, that makes sense. I will try to roll out 0.8.2.1 for vote later today. Thanks, Jun On Wed, Feb 18, 2015 at 4:15 PM, Jay Kreps wrote: > Well, I guess what I was thinking is that since we have the long timeout on > the vote anyway, no reason not to call the vote now, should anything else >

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread Joel Koshy
> > 2. Make the log end offset available more easily in the consumer. > > Was thinking something would need to be added in LogCleanerManager, in the > updateCheckpoints function. Where would be best to publish the information > to make it more easily available, or would you just expose the > offse

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread Will Funnell
> Do you have to separate the snapshot from the "normal" update flow. We are trying to avoid using another datasource if possible to have one source of truth. > I think what you are saying is that you want to create a snapshot from the > Kafka topic but NOT do continual reads after that point. Fo

Broker w/ high memory due to index file sizes

2015-02-18 Thread Zakee
I am running a cluster of 5 brokers with 40G -Xms/-Xmx for each. I found that one of the brokers is constantly using above ~90% of memory for jvm.heapUsage. I checked from lsof output that the size of the index files for this broker is too large. Not sure what is going on with this one broker in the cluste

Re: Hold off on 0.8.2 upgrades

2015-02-18 Thread Jay Kreps
Well, I guess what I was thinking is that since we have the long timeout on the vote anyway, no reason not to call the vote now, should anything else pop up we can cancel the vote. -Jay On Wed, Feb 18, 2015 at 4:04 PM, Jun Rao wrote: > Well, KAFKA-1952 only introduces high CPU overhead if the n

Re: Hold off on 0.8.2 upgrades

2015-02-18 Thread Jun Rao
Well, KAFKA-1952 only introduces high CPU overhead if the number of partitions in a fetch request is high, say more than a couple of hundred. So, it may not show up in every installation. For example, if you have 1000 leader replicas in a broker, but have a 20 node cluster, each replica fetch requ
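
Rough arithmetic for that example, assuming the 1000 leader partitions are followed roughly evenly by the other 19 brokers: each follower's fetch request would cover only about 1000 / 19, roughly 53 partitions, well below the couple-of-hundred range where the KAFKA-1952 overhead becomes visible.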

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread Joel Koshy
> You are also correct and perceptive to notice that if you check the end of > the log then begin consuming and read up to that point compaction may have > already kicked in (if the reading takes a while) and hence you might have > an incomplete snapshot. Isn't it sufficient to just repeat the che
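
A rough sketch of that repeat-the-check idea against the 0.8.1 high-level consumer; logEndOffset(...) stands for a hypothetical lookup such as the OffsetRequest sketch shown earlier in this thread, and handle(...) is an application-specific placeholder:

    // stream is a kafka.consumer.KafkaStream<byte[], byte[]> from the high-level consumer.
    long target = logEndOffset(host, port, topic, partition);   // hypothetical helper, see the earlier sketch
    for (kafka.message.MessageAndMetadata<byte[], byte[]> msg : stream) {
        handle(msg);                                             // application-specific placeholder
        if (msg.offset() >= target - 1) {
            long latest = logEndOffset(host, port, topic, partition);
            if (latest <= target) break;   // the end has not moved, so the snapshot is complete
            target = latest;               // the end moved (new writes), keep reading before stopping
        }
    }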

Re: Thread safety of Encoder implementations

2015-02-18 Thread Elizabeth Bennett
Hi Guozhang, Sorry for the delayed response. The code we use for the producer send call looks like this: We instantiate the producer like this: Producer producer = new Producer(config, context.getEventSerializer(), new ProducerHandlerWrapper(config, context.getCallback()), null, context.getPart
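
For reference, a minimal stateless Encoder along these lines is safe to share across producer threads because it keeps no mutable state; the class name and behaviour here are made up for illustration, not taken from Elizabeth's code:

    import java.nio.charset.StandardCharsets;
    import kafka.serializer.Encoder;
    import kafka.utils.VerifiableProperties;

    // Stateless, hence thread-safe: toBytes() touches only its argument and local variables.
    public class Utf8StringEncoder implements Encoder<String> {
        public Utf8StringEncoder(VerifiableProperties props) {
            // A constructor taking VerifiableProperties is what the old producer expects
            // when the encoder is configured via serializer.class.
        }

        @Override
        public byte[] toBytes(String message) {
            return message.getBytes(StandardCharsets.UTF_8);
        }
    }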

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread Jay Kreps
If you catch up off a compacted topic and keep consuming then you will become consistent with the log. I think what you are saying is that you want to create a snapshot from the Kafka topic but NOT do continual reads after that point. For example you might be creating a backup of the data to a fil

Re: Hold off on 0.8.2 upgrades

2015-02-18 Thread Jay Kreps
Does it make sense to wait? I don't think people will upgrade without the patched version, and I think we should release it to unblock people. -Jay On Wed, Feb 18, 2015 at 1:43 PM, Jun Rao wrote: > We have fixed the issue in KAFKA-1952. We will wait for a few more days to > see if any new issue

Re: Hold off on 0.8.2 upgrades

2015-02-18 Thread Jun Rao
We have fixed the issue in KAFKA-1952. We will wait for a few more days to see if any new issue comes up. After that, we will do an 0.8.2.1 release. Thanks, Jun On Fri, Feb 13, 2015 at 3:28 PM, Jay Kreps wrote: > Hey all, > > We found an issue in 0.8.2 that can lead to high CPU usage on broker

Re: Consuming a snapshot from log compacted topic

2015-02-18 Thread svante karlsson
Do you have to separate the snapshot from the "normal" update flow. I've used a compacting kafka topic as the source of truth to a solr database and fed the topic both with real time updates and "snapshots" from a hive job. This worked very well. The nice point is that there is a seamless transiti

Re: Resetting Offsets

2015-02-18 Thread Suren
Reading offsets looks like it's compatible across 0.8.1 and 0.8.2. However, we cannot use the update logic in ImportZkOffsets, since we want to store offsets in the broker in 0.8.2. It looks like SimpleConsumer.commitOffsets() would work with either version. Is there a better way? -Suren
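
One possibly simpler route, if the high-level consumer stays in the picture: the 0.8.2 consumer can be pointed at broker-side offset storage through configuration rather than SimpleConsumer calls. A sketch of the relevant consumer properties, with illustrative values:

    offsets.storage=kafka        # commit offsets to the broker instead of ZooKeeper
    dual.commit.enabled=true     # also write to ZooKeeper while migrating; disable once done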

Re: Having trouble with the simplest remote kafka config

2015-02-18 Thread Richard Spillane
OK. Steve Miller helped me solve the problem. I needed to explicitly set advertised.host.name=192.168.241.128. The logs showed the producer could connect to 9092, but when it was told which hosts to connect to in order to queue messages, it got unresolvable hosts. By setting this ex
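
In config form, using the VM's address from this thread, the fix is a single line in the broker configuration:

    # config/server.properties on the VM
    advertised.host.name=192.168.241.128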

Consuming a snapshot from log compacted topic

2015-02-18 Thread Will Funnell
We are currently using Kafka 0.8.1.1 with log compaction in order to provide streams of messages to our clients. As well as constantly consuming the stream, one of our use cases is to provide a snapshot, meaning the user will receive a copy of every message at least once. Each one of these messag

Re: Having trouble with the simplest remote kafka config

2015-02-18 Thread Richard Spillane
I also tried running the producer from the Mac client again, but this time with TRACE and DEBUG options un-commented from the log4j.properties file on the VM server. It seems that the connection is established (on port 50045) and bytes are being read from the client (192.168.241.1). Then subsequ

Re: Having trouble with the simplest remote kafka config

2015-02-18 Thread Richard Spillane
Sure no problem. I actually did run the ‘localhost’ command to generate output for the e-mail, but that was a typo: I always use the right IP. But I ran it again, this time with the IP of the VM. I ran the command from my Mac (the client). Thanks for taking a look! Here it is: Richards-MacBook-

Re: Having trouble with the simplest remote kafka config

2015-02-18 Thread Jiangjie Qin
I think your log did show that you are connecting to localhost:9092: [2015-02-17 20:43:32,622] WARN Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [id:0,host:localhost,port:9092] failed (kafka.client.ClientUtils$) java.nio.channels.ClosedChannelException Can yo

Re: Having trouble with the simplest remote kafka config

2015-02-18 Thread Richard Spillane
Yes, the topic I am producing to exists. I can produce to it fine when running the kafka-console-producer.sh tool from the Kafka node (the VM) itself (in which case I can either use localhost or the public-facing IP): Here is where I produce messages (running on the Kafka node): =

Re: Resetting Offsets

2015-02-18 Thread Michal Michalski
See https://cwiki.apache.org/confluence/display/KAFKA/System+Tools and check the following: GetOffsetShell (not very accurate - will set your offsets to much smaller values than you really need; we log offsets frequently in application logs and get it from there) ImportZkOffsets Kind regards, Mich
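
Roughly, those tools are invoked like this; the broker, ZooKeeper and file names are placeholders, and --time -1 asks for the latest offsets (-2 for the earliest):

    bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list broker1:9092 --topic mytopic --time -1
    bin/kafka-run-class.sh kafka.tools.ImportZkOffsets --zkconnect zk1:2181 --input-file /tmp/offsets-to-set.txt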

Resetting Offsets

2015-02-18 Thread Surendranauth Hiraman
We are using the High Level Consumer API to interact with Kafka. However, on restart in the case of failures, we want to be able to manually reset offsets in certain situations. What is the recommended way to do this? Should we use the Simple Consumer API just for this restart case? Ideally, it

Re: Broker Shutdown / Leader handoff issue

2015-02-18 Thread Philippe Laflamme
After further investigation, I've figured out that the issue is caused by the follower not processing messages from the controller until its ReplicaFetcherThread has shutdown completely (which only happens when the socket times out). If the test waits for the socket to timeout, the logs show that

Flasfka, a Python Flask HTTP REST client to Kafka

2015-02-18 Thread Christophe-Marie Duquesne
Hi list, There were already 2 HTTP REST clients to Kafka (namely kafka-http [1] and dropwizard-kafka-http [2]). I implemented another one in Python with Flask (named flasfka): https://github.com/travel-intelligence/flasfka I tend to trust projects better when they come with tests and continuous