Kafka cluster test cases

2016-06-30 Thread Ghosh, Prabal Kumar
Hi Everyone, Is there any test suite available for testing cluster health and running some functional tests? Regards, Prabal K Ghosh

Heartbeating during long processing times

2016-06-30 Thread Elias Levy
What is the officially recommended method to heartbeat using the new Java consumer during long message processing times? I thought I could accomplish this by setting max.poll.records to 1 in the client, calling consumer.pause(consumer.assignment()) when starting to process a record, calling consum
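
A minimal sketch of the pause/poll/resume pattern the question describes, assuming a client version where pause()/resume() accept a collection of partitions and poll() takes a Duration (older clients use varargs and a long timeout); the broker address, group, topic, and the processFurther() helper are placeholders, not an official recommendation:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SlowProcessingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "slow-processors");         // placeholder group
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("max.poll.records", "1");                // hand back one record per poll

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("slow-topic")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Stop fetching while we work, but keep calling poll() so the
                        // consumer continues to heartbeat and retain its partitions.
                        consumer.pause(consumer.assignment());
                        while (!processFurther(record)) {
                            consumer.poll(Duration.ofMillis(100)); // returns no records while paused
                        }
                        consumer.resume(consumer.assignment());
                    }
                }
            }
        }

        // Placeholder: does one slice of the long-running work, returns true when finished.
        private static boolean processFurther(ConsumerRecord<String, String> record) {
            return true;
        }
    }

Note that later clients (KIP-62, 0.10.1+) moved heartbeating to a background thread and added max.poll.interval.ms, which largely removes the need for this dance.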

Re: broker randomly shuts down

2016-06-30 Thread allen chan
Hi Shikhar, I do not see a stderr log file anywhere. Can you point me to where Kafka would write such a file? On Thu, Jun 30, 2016 at 5:10 PM, Shikhar Bhushan wrote: > Perhaps it's a JVM crash? You might not see anything in the standard > application-level logs, you'd need to look for the stderr.

Re: broker randomly shuts down

2016-06-30 Thread Shikhar Bhushan
Perhaps it's a JVM crash? You might not see anything in the standard application-level logs, you'd need to look for the stderr. On Thu, Jun 30, 2016 at 5:07 PM allen chan wrote: > Anyone else have ideas? > > This is still happening. I moved off zookeeper from the server to its own > dedicated VM

Re: broker randomly shuts down

2016-06-30 Thread allen chan
Anyone else have ideas? This is still happening. I moved zookeeper off the server to its own dedicated VMs. Kafka starts with 4G of heap and was nowhere near that much consumed when it crashed. I bumped up the zookeeper timeout settings but that has not solved it. I also disconnected all th

Non blocking Kafka producer

2016-06-30 Thread Dan Bahir (BLOOMBERG/ 120 PARK)
Hi, I have an application that needs low-latency writes to Kafka. With the 0.8.1 producer I set queue.buffering.max.messages to the number of messages I would like the producer to store in memory, and queue.enqueue.timeout.ms to 0 to have the producer throw an exception if the server was
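
For comparison, a rough equivalent with the Java producer that replaced the 0.8.1 client uses buffer.memory and max.block.ms; this is only a sketch with placeholder broker/topic names, and depending on the client version a full buffer with max.block.ms=0 surfaces either as an immediate exception from send() or through the callback:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class NonBlockingProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("buffer.memory", "33554432"); // in-memory buffer in bytes (32 MB is the default)
            props.put("max.block.ms", "0");         // never block send() waiting for buffer space or metadata

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                try {
                    producer.send(new ProducerRecord<>("low-latency-topic", "key", "value"),
                            (metadata, exception) -> {
                                if (exception != null) {
                                    exception.printStackTrace(); // failure reported asynchronously
                                }
                            });
                } catch (Exception e) {
                    e.printStackTrace(); // failure reported synchronously when the buffer is full
                }
            }
        }
    }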

console-consumer show offset

2016-06-30 Thread Fumo, Vincent
Is there a way to show the console consumer offset value with the messages, like we do for the key? I tried --property print.offset=true but it didn't work.

Re: Streams RocksDB State Store Disk Usage

2016-06-30 Thread Avi Flax
On Jun 29, 2016, at 22:44, Guozhang Wang wrote: > > One way to mentally quantify your state store usage is to consider the > total key space in your reduceByKey() operator, and multiply by the average > key-value pair size. Then you need to consider the RocksDB write / space > amplification facto
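
A back-of-the-envelope version of that calculation, with made-up numbers rather than measured ones:

    public class StateStoreSizeEstimate {
        public static void main(String[] args) {
            long distinctKeys = 10_000_000L;  // hypothetical key space of the reduceByKey()
            long avgKeyValueBytes = 200L;     // hypothetical average serialized key+value size
            double spaceAmplification = 3.0;  // hypothetical RocksDB space-amplification factor

            double estimatedBytes = distinctKeys * avgKeyValueBytes * spaceAmplification;
            // 10M keys * 200 bytes * 3 = 6e9 bytes, roughly 5.6 GB
            System.out.printf("Rough state store footprint: %.1f GB%n",
                    estimatedBytes / (1024.0 * 1024.0 * 1024.0));
        }
    }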

Re: How many connections per consumer/producer

2016-06-30 Thread Ben Stopford
Hi Dhiraj, That shouldn't be the case. As I understand it, both the producer and consumer hold a single connection to each broker they need to communicate with. Multiple produce requests can be sent through a single connection in the producer (the number being configurable with max.in.flight.req
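
A small illustration of the settings involved; the broker list is a placeholder and the value shown is simply the documented default:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class SingleConnectionPerBroker {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("max.in.flight.requests.per.connection", "5");     // default: up to 5 unacknowledged requests per broker connection

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // All send() calls share the producer's single connection to each broker;
                // there is no connection pool to size.
            }
        }
    }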

Re: streaming-enabled SQL in Kafka Streams?

2016-06-30 Thread Matthias J. Sax
Hi Alex, we do have a SQL layer on the long-term roadmap (also considering Calcite). Thanks! -Matthias On 06/30/2016 09:41 AM, Alex Glikson wrote: > Did folks consider adding support in Kafka Streams for Apache Calcite [1], > for streaming-enabled SQL (potentially on top of existing DSL)? Sounds

Re: what is use of __consumer_offsets

2016-06-30 Thread Tom Crayford
No. This is used for tracking consumer offsets. Kafka manages cleaning it up itself. On Thu, Jun 30, 2016 at 1:52 PM, Snehalata Nagaje < snehalata.nag...@harbingergroup.com> wrote: > > But does it create folder for every message we put in kafka, for every > offset? > > And do we need to clean tho

Re: what is use of __consumer_offsets

2016-06-30 Thread Snehalata Nagaje
But does it create a folder for every message we put in Kafka, for every offset? And do we need to clean those folders? Is there any configuration? - Original Message - From: "Tom Crayford" To: "Users" Sent: Thursday, June 30, 2016 6:11:03 PM Subject: Re: what is use of __consumer_offset

Re: what is use of __consumer_offsets

2016-06-30 Thread Tom Crayford
Hi there, Kafka uses this topic internally for consumer offset commits. Thanks Tom Crayford Heroku Kafka On Thu, Jun 30, 2016 at 1:36 PM, Snehalata Nagaje < snehalata.nag...@harbingergroup.com> wrote: > > Hi All, > > > I am using kafka 9 version with publish subscribe pattern, one consumer is
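
A minimal sketch of where those folders come from: committing offsets from the new consumer writes them to the internal __consumer_offsets topic, and the broker compacts that topic itself, so no manual cleanup is needed (broker, group, and topic names below are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class OffsetCommitSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("group.id", "example-group");           // placeholder group
            props.put("enable.auto.commit", "false");         // commit explicitly below
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic")); // placeholder topic
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1)); // older clients take a long timeout
                // ... process records ...
                // The committed offsets land in the __consumer_offsets topic on the brokers.
                consumer.commitSync();
            }
        }
    }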

what is use of __consumer_offsets

2016-06-30 Thread Snehalata Nagaje
Hi All, I am using Kafka 0.9 with a publish-subscribe pattern; one consumer is listening to a particular topic. What is the use of the __consumer_offsets folders created in the log directories? Do they have any impact on offset committing? Thanks, Snehalata

Re: Log retention just for offset topic

2016-06-30 Thread Sathyakumar Seshachalam
Thanks Tom. I think that's good enough for my needs. On Thu, Jun 30, 2016 at 4:20 PM, Tom Crayford wrote: > The default cleanup policy is delete, which is the regular time based > retention. > > On Thursday, 30 June 2016, Sathyakumar Seshachalam < > sathyakumar_seshacha...@trimble.com> wrote: > >

Re: Log retention just for offset topic

2016-06-30 Thread Tom Crayford
The default cleanup policy is delete, which is the regular time based retention. On Thursday, 30 June 2016, Sathyakumar Seshachalam < sathyakumar_seshacha...@trimble.com> wrote: > Or may be am wrong, and Log cleaner only picks up topics with a > cleanup.policy. > From the documentation it is not
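
If a topic's policy ever needs to be set explicitly, one way to do it programmatically, sketched with the AdminClient API that later Kafka client versions ship (topic and broker names are placeholders; the CLI tools can do the same thing):

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class TopicRetentionPolicySketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

            try (AdminClient admin = AdminClient.create(props)) {
                ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // placeholder topic
                Config config = new Config(Collections.singletonList(
                        new ConfigEntry("cleanup.policy", "delete"))); // time-based deletion; __consumer_offsets uses "compact"
                // alterConfigs is the original call; newer clients prefer incrementalAlterConfigs.
                admin.alterConfigs(Collections.singletonMap(topic, config)).all().get();
            }
        }
    }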

How many connections per consumer/producer

2016-06-30 Thread dhiraj prajapati
Hi, I am using the new Kafka Consumer and Producer APIs (version 0.9.0.1). I see that my consumer as well as my producer has multiple connections established with the Kafka brokers. Why is this so? Do the consumer and producer APIs use connection pooling? If yes, where do I configure the pool size? Regards,

Re: streaming-enabled SQL in Kafka Streams?

2016-06-30 Thread Damian Guy
Hi Alex, Yes SQL support is something we'd like to add in the future. I'm not sure when at this stage. Thanks, Damian On Thu, 30 Jun 2016 at 08:41 Alex Glikson wrote: > Did folks consider adding support in Kafka Streams for Apache Calcite [1], > for streaming-enabled SQL (potentially on top of

Re: Kafka Streams reducebykey and tumbling window - need final windowed KTable values

2016-06-30 Thread Clive Cox
Hi Eno, I've looked at KIP-67. It looks good but it's not clear what calls I would make to do what I presently need: get access to each windowed store at some time soon after the window end time. I can then use the methods specified to iterate over keys and values. Can you point me to the relevant me
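
For anyone revisiting this thread after KIP-67 shipped, the interactive-query calls ended up looking roughly like the sketch below; the store name, key/value types, and window bounds are placeholders, and the exact signatures vary across Streams versions:

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyWindowStore;
    import org.apache.kafka.streams.state.WindowStoreIterator;

    public class WindowStoreQuerySketch {
        // streams must already be running with a windowed store materialized as "counts-store".
        static void dumpWindows(KafkaStreams streams, String key, long windowStartMs, long windowEndMs) {
            ReadOnlyWindowStore<String, Long> store =
                    streams.store("counts-store", QueryableStoreTypes.windowStore());
            try (WindowStoreIterator<Long> iter = store.fetch(key, windowStartMs, windowEndMs)) {
                while (iter.hasNext()) {
                    KeyValue<Long, Long> entry = iter.next(); // entry.key is the window start timestamp
                    System.out.println("window starting at " + entry.key + " -> " + entry.value);
                }
            }
        }
    }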

What happens after connections.max.idle.ms | Kafka Producer

2016-06-30 Thread dhiraj prajapati
Hi, From the documentation for producer configs: connections.max.idle.ms is the time after which idle connections will be closed. I wish to know what happens if my connections have been idle for a long time and the producer then produces a message; I don't see any exception. How does the producer client
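
As far as I know the closed socket is simply re-established on the next request, so no exception is expected from the idle close itself; a small sketch (broker, topic, and timings are made up):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class IdleConnectionSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("connections.max.idle.ms", "60000");    // close sockets idle for 60 s (illustrative value)

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("demo-topic", "first")).get();
                Thread.sleep(120_000); // stay idle longer than connections.max.idle.ms
                // The connection closed by the idle timeout is reopened transparently here.
                producer.send(new ProducerRecord<>("demo-topic", "second")).get();
            }
        }
    }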

streaming-enabled SQL in Kafka Streams?

2016-06-30 Thread Alex Glikson
Did folks consider adding support in Kafka Streams for Apache Calcite [1], for streaming-enabled SQL (potentially on top of existing DSL)? Sounds like such a support could be useful to open Kafka Streams capabilities to an even broader audience. Thanks, Alex [1] https://calcite.apache.org/doc

Re: Consumer Group, relabancing and partition uniqueness

2016-06-30 Thread Spico Florin
Hi! The partitioner (load-)balances the partitions among consumers like this: 1. if your number of consumers = number of partitions, then each consumer gets exactly one partition 2. if the number of consumers < number of partitions, then partitions are not allocated randomly to the consumers but followin
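
A concrete illustration of point 2 under the default range assignor (topic, partition count, and consumer count are made up):

    import java.util.Properties;

    public class RangeAssignmentIllustration {
        public static void main(String[] args) {
            // Hypothetical example: topic "orders" has 5 partitions (P0..P4) and the group
            // has two consumers using the default range assignment strategy:
            //   consumer-1 -> P0, P1, P2
            //   consumer-2 -> P3, P4
            // Partitions are split into contiguous ranges, so the allocation is deterministic,
            // and each partition is owned by exactly one consumer in the group at a time.
            Properties props = new Properties();
            props.put("partition.assignment.strategy",
                    "org.apache.kafka.clients.consumer.RangeAssignor"); // the Java consumer's default
            System.out.println(props);
        }
    }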

RE: Running kafka connector application

2016-06-30 Thread Andrew Stevenson
The Twitter connector POM builds a fat jar with all dependencies. You need to add this to the classpath before you start Connect. This is what the Confluent scripts are doing. Regards Andrew From: Ewen Cheslack-Postava Sent: 14/06/20