Consumer pause/resume & partition assignment race condition

2016-06-24 Thread Elias Levy
While performing some prototyping on 0.10.0.0 using the new client API, I noticed that some clients fail to drain their topic partitions. The Kafka cluster comprises 3 nodes. The topic in question has 50 partitions and has been preloaded with messages. The messages were loaded
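For context, a minimal sketch of the pause/resume pattern under discussion, against the 0.10.0.0 consumer API; the topic name, group id, and partition are assumptions, not taken from the thread:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PauseResumeSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "prototype-group"); // assumed
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("preloaded-topic")); // assumed
            consumer.poll(1000); // triggers the initial partition assignment
            TopicPartition tp = new TopicPartition("preloaded-topic", 0);
            // pause() stops fetching from the partition, but a rebalance can
            // revoke and reassign partitions underneath it, and the paused
            // state is not carried over -- the race the thread title refers to.
            consumer.pause(Collections.singleton(tp)); // 0.10.0.0 takes a Collection
            // ... drain other partitions ...
            consumer.resume(Collections.singleton(tp));
            consumer.close();
        }
    }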

Re: Kafka streams for out of order density aggregation

2016-06-24 Thread Matthias J. Sax
I just want to add something: if I understand the question correctly, you are asking for a strong ordering guarantee. I personally doubt that out-of-order records on count-based windows can be supported with a strong consistency guarantee in an efficient manner. If a late record arrives for window X, the

Re: log4j setting for embedded kafka server

2016-06-24 Thread Guozhang Wang
Siyuan, log4j.properties only gets read by the kafka-run-class.sh script: KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/config/tools-log4j.properties". If you start the server within your Java application, you need to pass "log4j.configuration" to Kafka yourself. Guozhang On Fri, Jun
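A hedged sketch of doing the equivalent inside the application JVM (the file path is an assumption):

    public class EmbeddedKafkaLogging {
        public static void main(String[] args) {
            // What KAFKA_LOG4J_OPTS does for the shell scripts, done in-process:
            // set the property before any Kafka class initializes its loggers.
            System.setProperty("log4j.configuration", "file:/path/to/log4j.properties");
            // ... construct and start the embedded Kafka server here ...
        }
    }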

Re: issue in SimpleConsumerDemo

2016-06-24 Thread Guozhang Wang
Did you run it multiple times without cleaning the committed offsets? This is a common root cause of seeing fewer messages. Guozhang On Fri, Jun 24, 2016 at 2:51 AM, hengheng0h...@163.com < hengheng0h...@163.com> wrote: > hi, > I got an issue when I run SimpleConsumerDemo(source:kafka >

Producer Properties

2016-06-24 Thread Chris Barlock
I started porting our code from Kafka 0.8.2.1 to 0.10.0.0 and found my producer code blowing up because of some changes to the config. For example, metadata.broker.list is now bootstrap.servers. I discovered the ProducerConfig class, which has at least some of the config keys. Before I
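A minimal sketch of the new-producer configuration using the ProducerConfig constants; broker address and serializer choices are assumptions:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class NewProducerConfigSketch {
        public static Producer<String, String> create() {
            Properties props = new Properties();
            // bootstrap.servers replaces the old metadata.broker.list
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringSerializer");
            return new KafkaProducer<>(props);
        }
    }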

Re: Kafka producer metadata issue

2016-06-24 Thread Shekar Tippur
Interesting. So if we introduce a sleep after the first send, then it produces properly? Here is my log. Clearly there is a connection reset. [2016-06-24 13:42:48,620] ERROR Closing socket for /127.0.0.1 because of error (kafka.network.Processor) java.io.IOException: Connection reset by peer at
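Rather than sleeping, a common pattern is to block on the Future that send() returns, so connection or metadata errors surface immediately; a minimal sketch (topic and producer setup assumed):

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SynchronousSendSketch {
        // Blocking on the Future makes the first send synchronous, which is
        // what the sleep was approximating; it also rethrows send failures.
        static void sendAndWait(Producer<String, String> producer, String topic, String value)
                throws InterruptedException, ExecutionException {
            producer.send(new ProducerRecord<>(topic, value)).get();
        }
    }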

Re: Kafka producer metadata issue

2016-06-24 Thread Fumo, Vincent
I'm seeing similar behavior with the 0.9 producer. Here is some test code: @Test public void test1() throws InterruptedException { Producer producer = createProducer(BROKER_DEV); producer.send(new ProducerRecord<>(TOPIC, "value")); producer.send(new ProducerRecord<>(TOPIC,

Re: Kafka producer metadata issue

2016-06-24 Thread Shekar Tippur
I just see this in the kafka.log file: [2016-06-24 13:27:14,346] INFO Closing socket connection to /127.0.0.1. (kafka.network.Processor) On Fri, Jun 24, 2016 at 1:05 PM, Shekar Tippur wrote: > Hello, > > I have a simple Kafka producer directly taken off of > > >

Kafka producer metadata issue

2016-06-24 Thread Shekar Tippur
Hello, I have a simple Kafka producer taken directly from https://kafka.apache.org/090/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html I have changed the bootstrap.servers property: props.put("bootstrap.servers", "localhost:9092"); I don't see any events added to the
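The javadoc example referenced above is roughly the following. One thing worth checking, though the thread doesn't confirm it as the cause: the producer buffers asynchronously, so exiting without close() (or flush()) can make it look like no events were sent.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class JavadocProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            Producer<String, String> producer = new KafkaProducer<>(props);
            for (int i = 0; i < 100; i++)
                producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), Integer.toString(i)));
            producer.close(); // flushes buffered records; without it, nothing may reach the broker
        }
    }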

Re: log.retention.bytes

2016-06-24 Thread Alex Loddengaard
Hi Dave, log.retention.bytes is per partition. If you change it after the topic was created, you'll see the behavior you expect -- namely, the new value is used when the log is cleaned. How often the log is cleaned is controlled by log.retention.check.interval.ms, with a default
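A hedged server.properties sketch (the sizes are examples, not from the thread); the per-topic override for the same behavior is the topic-level retention.bytes config:

    # Broker-level; applies per partition, not per topic:
    log.retention.bytes=1073741824            # e.g. 1 GiB per partition
    # How often the cleaner checks whether segments are eligible for deletion:
    log.retention.check.interval.ms=300000    # default is 5 minutes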

log4j setting for embedded kafka server

2016-06-24 Thread hsy...@gmail.com
Hi guys, I start the server programmatically in my application using the KafkaServerStartable.startup() method, and in the log4j.properties settings I add: log4j.logger.org.apache.kafka=WARN log4j.logger.kafka=WARN But I always get INFO logs. Do you guys know how to enforce the log level here? Thanks!
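One option, sketched under the assumption that log4j 1.x is on the classpath, is to set the levels programmatically before startup, bypassing the properties-file lookup entirely:

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class ForceKafkaLogLevel {
        public static void main(String[] args) {
            // The broker logs under the "kafka" hierarchy; the new clients
            // log under "org.apache.kafka".
            Logger.getLogger("kafka").setLevel(Level.WARN);
            Logger.getLogger("org.apache.kafka").setLevel(Level.WARN);
            // ... then start the embedded server ...
        }
    }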

log.retention.bytes

2016-06-24 Thread Tauzell, Dave
Is the log.retention.bytes setting per partition or for the whole topic? If I change it after a topic has been created, do the changes apply to the existing topics? Thanks, Dave

issue in SimpleConsumerDemo

2016-06-24 Thread hengheng0h...@163.com
Hi, I got an issue when I run SimpleConsumerDemo (source: kafka/examples/src/main/java/kafka/examples/SimpleConsumerDemo.java): I only got 2 messages when I set fetchSize to 100. Thanks! hengheng0h...@163.com

RE: Setting max fetch size for the console consumer

2016-06-24 Thread Tauzell, Dave
Thanks! I also had to pass --consumer.config=/etc/kafka/consumer.properties to the command line consumer. -Dave

Re: Setting max fetch size for the console consumer

2016-06-24 Thread Ben Stopford
It’s actually more than one setting: http://stackoverflow.com/questions/21020347/kafka-sending-a-15mb-message B > On 24 Jun 2016, at 14:31, Tauzell, Dave wrote: > > How do I set the
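For reference, the settings that linked answer walks through are roughly these; the 15 MB values are assumed to match the question:

    # Broker (server.properties):
    message.max.bytes=15728640           # largest message the broker will accept
    replica.fetch.max.bytes=15728640     # must be >= message.max.bytes or replication stalls

    # Consumer (e.g. the file passed via --consumer.config):
    fetch.message.max.bytes=15728640     # old consumer
    max.partition.fetch.bytes=15728640   # new consumer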

Setting max fetch size for the console consumer

2016-06-24 Thread Tauzell, Dave
How do I set the maximum fetch size for the console consumer? I'm getting this error when doing some testing with large messages: kafka.common.MessageSizeTooLargeException: Found a message larger than the maximum fetch size of this consumer on topic replicated_twice partition 28 at fetch

Re: is kafka the right choice

2016-06-24 Thread Ben Stopford
correction: elevates => alleviates > On 24 Jun 2016, at 11:13, Ben Stopford wrote: > > Kafka uses a long poll. So requests > effectively block on the server, if there is insufficient data available. > This elevates

Kafka loses data when using the Scala API to send data

2016-06-24 Thread DuanSky
Hello with respect, I've hit a problem when using the Scala API to send/receive data to/from Kafka brokers. I wrote a very simple producer and consumer (just like the official examples) and found that the Java API version works correctly, but the Scala API version may lose data. Here is
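Without the full code it's hard to say, but a frequent cause of apparent loss is exiting before buffered sends complete. A sketch of a defensive pattern with the Java client, for comparison; the topic name is assumed:

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class SafeShutdownSketch {
        static void sendAll(Producer<String, String> producer, Iterable<String> values) {
            for (String v : values) {
                producer.send(new ProducerRecord<>("my-topic", v), (metadata, exception) -> {
                    if (exception != null)
                        exception.printStackTrace(); // surface failures that would otherwise be silent
                });
            }
            producer.close(); // blocks until buffered records are sent
        }
    }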

Re: is kafka the right choice

2016-06-24 Thread Ben Stopford
Kafka uses a long poll. So requests effectively block on the server if there is insufficient data available. This alleviates many of the issues associated with traditional polling approaches. Service-based applications often require
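A minimal sketch of the long-poll behavior from the consumer side; the group id and topic are assumptions. The broker parks the fetch until it has fetch.min.bytes of data or fetch.max.wait.ms has elapsed, so the client isn't busy-polling:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class LongPollSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "service-group"); // assumed
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // The long-poll knobs on the new consumer:
            props.put("fetch.min.bytes", "1024");
            props.put("fetch.max.wait.ms", "500");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic")); // assumed
                while (true) {
                    // poll() also blocks client-side up to the given timeout
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> r : records)
                        System.out.println(r.value());
                }
            }
        }
    }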