RE: New consumer API waits indefinitely

2016-04-12 Thread Lohith Samaga M
Dear All, After a system restart, the new consumer is working as expected. Best regards / Mit freundlichen Grüßen / Sincères salutations M. Lohith Samaga

Re: Kafka Newbie question

2016-04-12 Thread Pradeep Bhattiprolu
I tried both of the approaches stated above, with no luck :(. Let me give concrete examples of what I am trying to achieve here: 1) A Kafka producer adds multiple JSON messages to a particular topic in the message broker (done, this part works). 2) I want to have multiple consumers identified under a s

Re: Spikes in kafka bytes out (while bytes in remain the same)

2016-04-12 Thread Asaf Mesika
Where exactly do you get the measurement from? Your broker? Do you have only one? Your producer? Your Spark job? On Mon, 11 Apr 2016 at 23:54 Jorge Rodriguez wrote: > We are running a Kafka cluster for our real-time pixel processing > pipeline. The data is produced from our pixel servers into ka

Re: New consumer: OutOfMemoryError: Direct buffer memory

2016-04-12 Thread Asaf Mesika
Track the NIO direct memory buffer size on your client JVM. Connect to it using VisualVM or Jolokia with hawt.io and see where it breaks. This error means the process exhausted its direct buffer memory; unless you set a limit, it can keep growing until the host runs out of memory. On Tue, 12 Apr 2016 at 03:18 Kanak Biscuitwala wrote: > Hi, > > I'm running Kafka's new consu
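As a sketch of how to apply such a limit (the jar name is hypothetical; the flag is a standard HotSpot option), you can cap direct buffer memory explicitly so the consumer fails fast instead of eating host memory:

```shell
# Cap NIO direct buffer memory at 256 MB for this JVM (hypothetical consumer jar).
# With the cap in place, an OutOfMemoryError: Direct buffer memory surfaces
# at a predictable threshold instead of exhausting the host.
java -XX:MaxDirectMemorySize=256m -jar my-consumer.jar
```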

Re: dumping JMX data

2016-04-12 Thread Asaf Mesika
If I'm not mistaken, jmxtrans does not let you fetch all metric beans, or a group of them, using a wildcard. You are forced to specify the exact bean name, and this email shows how cumbersome that is. Also, jmxtrans issues an RPC call per bean, while with Jolokia you do one POST request to get all t
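To illustrate the one-request point: Jolokia's protocol accepts a JSON array of read requests in a single POST body, one entry per MBean. A minimal sketch (the host and port are assumptions; the bean names are real Kafka broker metrics):

```python
import json

# A Jolokia bulk read: one HTTP POST whose body is a JSON array of
# "read" requests, one element per MBean, instead of one RPC per bean.
bulk_request = [
    {"type": "read", "mbean": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec"},
    {"type": "read", "mbean": "kafka.server:type=BrokerTopicMetrics,name=BytesOutPerSec"},
]
body = json.dumps(bulk_request)
# POST `body` to http://<broker>:8778/jolokia/ (hypothetical host; 8778 is
# the Jolokia JVM agent's default port). The response is a JSON array with
# one result object per request, returned in a single round trip.
```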

Re: Issue with Kafka Broker

2016-04-12 Thread Mudit Agarwal
Can someone help with the issue below? > On Apr 11, 2016, at 12:27 AM, Mudit Agarwal > wrote: > > Hi Guys, > I have a 3-node Kafka setup. The version is 0.9.0.1. I bounced the kafka-server and > zookeeper services on the broker 3 node. Since then I am seeing the below messages in > the logs continuously. Any help will b

Re: RocksDB on Windows

2016-04-12 Thread Guozhang Wang
Hi Kevin, You can use the Processor API to create multiple state stores and associate these stores to your processors, some guidance can be found here: http://kafka.apache.org/0100/documentation.html#streams_processor As for your specific case, unfortunately I cannot tell much from your code sni

users@kafka.apache.org

2016-04-12 Thread BigData dev
Hi All, I am facing an issue with a Kerberized Kafka cluster. I followed the steps to enable SASL on Kafka from the link below: http://docs.confluent.io/2.0.0/kafka/sasl.html After this, when I start the kafka-server I get the error below. [2016-04-12 16:59:26,201] ERROR [KafkaApi-1001]

What is the best way to publish and consume different type of messages?

2016-04-12 Thread Ratha v
Hi all; I'm using Kafka 0.8. I want to publish/consume byte[] objects, Java bean objects, serializable objects and much more. What is the best way to define a publisher and consumer for this type of scenario? When I consume a message from the consumer iterator, I do not know what type of the mes
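One common pattern for this (not specific to Kafka, and independent of the 0.8 API) is to wrap every payload in a small envelope that carries a type tag, so the consumer can dispatch on the tag before deserializing. A minimal sketch in Python, assuming JSON + base64 framing; the type names are hypothetical:

```python
import base64
import json

def wrap(msg_type: str, payload: bytes) -> bytes:
    """Wrap raw payload bytes in a JSON envelope carrying a type tag."""
    return json.dumps({
        "type": msg_type,
        "data": base64.b64encode(payload).decode("ascii"),
    }).encode("utf-8")

def unwrap(raw: bytes):
    """Return (type_tag, payload_bytes) from an envelope produced by wrap()."""
    env = json.loads(raw.decode("utf-8"))
    return env["type"], base64.b64decode(env["data"])

# The consumer reads the tag first, then picks the right deserializer:
kind, data = unwrap(wrap("user-event", b"\x01\x02\x03"))
```

The same idea works with any serialization: the envelope stays fixed while the `data` field carries whatever bytes each message type needs.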

Kafka consumer timing out while connecting to Zookeeper

2016-04-12 Thread Mhaskar, Tushar
Hi, I have an 8-node Kafka (0.9) cluster with 5 ZooKeeper machines. A Kafka consumer (0.8.2.1) is not able to connect to ZooKeeper. I am able to telnet to the machine on port 2181 where ZooKeeper is running. Output of nc: nc -zv 10.196.196.63 2181 Connection to 10.196.196.63 2181 port [tcp/eforw
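Beyond a plain TCP connect, ZooKeeper's four-letter commands confirm it is actually serving requests, not just listening (the IP below is the one from the report; run against your own ensemble):

```shell
# A healthy ZooKeeper server replies "imok" to the ruok command.
echo ruok | nc 10.196.196.63 2181
# stat prints the server version, connected clients, and its mode
# (leader/follower/standalone) -- useful when a node is up but not in quorum.
echo stat | nc 10.196.196.63 2181
```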

Re: [0.10.1.0-SNAPSHOT] KafkaProducer.close() not committing

2016-04-12 Thread Greg Zoller
Sorry, the formatting was all messed up. I re-tested this code with 0.9.0.1 and it worked fine: KafkaProducer closed and committed the number of records expected into the partitions. So this seems like a SNAPSHOT issue. Will continue looking.

[0.10.1.0-SNAPSHOT] KafkaProducer.close() not committing

2016-04-12 Thread Greg Zoller
Hello, I'm trying to run the latest master build from GitHub. I've got producer code like below:

    val props = Map(
      "bootstrap.servers" -> host,
      "key.serializer" -> "org.apache.kafka.common.serialization.ByteArraySerializer",
      "value.serializer" -> "org.apache.kafka.common.seriali

RocksDB on Windows

2016-04-12 Thread Kevin Niemann
I've built Kafka from trunk (0.10) intending to build a streaming application, and I'm wondering if it supports Windows. I tried increasing the Maven dependency to RocksDB 4.2.0. Jay's post mentions that Kafka supports multiple state stores. Is it documented how to configure these, or do you recommend ru

Re: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Ismael Juma
Hi Jonathan, That makes sense, thanks. Phil, do you know if your metadata requests are succeeding? Retrying every retry.backoff.ms is expected if they fail, but not if they succeed. In terms of posting a comment to JIRA, you can create an account if you don't have one: https://issues.apache.org/

RE: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Phil Luckhurst
Ismael, I don't see how I can add any information to KAFKA-3358 as I haven't got an account. Phil

Re: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Jonathan Bond
Ismael, In our case KAFKA-3358 caused frequent metadata requests that ended up swamping the broker. Getting all metadata is a very expensive request; once all request threads spent the majority of their time handling metadata requests, they'd start timing out. Timing out would cause a new reque

Re: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Ismael Juma
Sorry, I should say that KAFKA-3306 will fix the issue where we request metadata for all topics instead of no topics. However, there seems to be an additional issue where we are performing metadata requests too frequently. Phil, can you please add this information to KAFKA-3358 so th

Re: Consumers disappearing from __consumer_offsets

2016-04-12 Thread tao xiao
My understanding is that the offsets topic is a compacted topic whose entries should never be deleted, only compacted. Is this true? If so, what does offsets.retention.minutes really mean here? On Tue, 12 Apr 2016 at 20:15 Sean Morris (semorris) wrote: > I have been seeing the same thing and now I u

Re: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Ismael Juma
Note that this should be fixed as part of https://issues.apache.org/jira/browse/KAFKA-3306 Ismael On Tue, Apr 12, 2016 at 2:17 PM, Phil Luckhurst wrote: > Thanks Jonathan, I didn't spot that JIRA. > > Phil

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
Yes. Manikumar On Tue, Apr 12, 2016 at 6:59 PM, Oleg Zhurakousky < ozhurakou...@hortonworks.com> wrote: > Thanks Manikumar > > So, but for now I guess the only way to at least influence it (as I’ve > observed) would be ‘max.partition.fetch.bytes’, correct? > > Cheers > Oleg > > On Apr 12, 2016,

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Oleg Zhurakousky
Thanks Manikumar. So, for now I guess the only way to at least influence it (as I’ve observed) would be ‘max.partition.fetch.bytes’, correct? Cheers Oleg > On Apr 12, 2016, at 9:22 AM, Manikumar Reddy > wrote: > > New consumer config property "max.poll.records" is getting introduced in >

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
A new consumer config property, "max.poll.records", is being introduced in the upcoming 0.10 release. This property can be used to control the number of records returned by each poll. Manikumar On Tue, Apr 12, 2016 at 6:26 PM, Oleg Zhurakousky < ozhurakou...@hortonworks.com> wrote: > Is there a way to specify
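Putting the two knobs from this thread together, a consumer configuration sketch (a hypothetical properties fragment; the values shown are illustrative, not recommendations):

```shell
# Available from 0.10.0: hard upper bound on records returned by one poll().
max.poll.records=100
# Pre-0.10 knob: per-partition fetch size in bytes. It only indirectly
# influences how many records a poll() returns, as discussed above.
max.partition.fetch.bytes=1048576
```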

RE: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Phil Luckhurst
Thanks Jonathan, I didn't spot that JIRA. Phil

Re: KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Jonathan Bond
Phil, In our case this bug placed significant load on our brokers. We raised a bug https://issues.apache.org/jira/browse/KAFKA-3358 to get this resolved. On Tue, Apr 12, 2016 at 5:39 AM Phil Luckhurst wrote: > With debug logging turned on we've sometimes seen our logs filling up with > the kafk

Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Oleg Zhurakousky
Is there a way to specify in KafkaConsumer up to how many messages I want to receive? I am operating under the premise that Consumer.poll(..) returns a batch of messages, but I'm not sure if there is a way to control the batch size. Cheers Oleg

KafkaProducer 0.9.0.1 continually sends metadata requests

2016-04-12 Thread Phil Luckhurst
With debug logging turned on we've sometimes seen our logs filling up with the kafka producer sending metadata requests every 100ms e.g. 2016-04-08 10:39:33,592 DEBUG [kafka-producer-network-thread | phil-pa-1] org.apache.kafka.clients.NetworkClient: Sending metadata request ClientRequest(expec
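The ~100 ms interval in those logs matches the producer's default retry backoff. A sketch of the two producer settings that govern metadata request timing (a hypothetical properties fragment; the values shown are the documented defaults):

```shell
# Wait between retries of a failed request, including failed metadata
# requests -- the default of 100 ms matches the interval seen in the logs.
retry.backoff.ms=100
# Maximum age of cached metadata before a refresh is forced even when
# requests are succeeding (default 5 minutes).
metadata.max.age.ms=300000
```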

Re: Consumers disappearing from __consumer_offsets

2016-04-12 Thread Sean Morris (semorris)
I have been seeing the same thing and now I understand why. The question is: what happens to these topics after the 24 hours? If I start publishing to them again, do the consumers behave properly and only get the new records? Thanks, Sean On 4/11/16, 5:16 PM, "James Cheng" wrote: >This may be re

RE: New consumer API waits indefinitely

2016-04-12 Thread Lohith Samaga M
Dear All, I installed Kafka on a Linux VM. Here too: 1. The producer is able to store messages in the topic (sent from the Windows host). 2. The consumer is unable to read them, either from the Windows host or from kafka-console-consumer on the Linux VM console. In t