Re: Prioritized Topics for Kafka

2019-01-17 Thread Tobias Adamson
Use cases: prioritise current data. When processing messages there is sometimes a need to reprocess old data. It would be nice to be able to send the old data as messages to a separate topic that would only be processed when the current topic doesn’t have any messages left to process. This
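
One way to get this behaviour with the existing consumer is to pause the partitions of the low-priority topic whenever the current topic is still delivering records, and resume them once it runs dry. A minimal sketch against the plain Java consumer (2.0+ client; the topic names and the "did the last poll return current data" heuristic are illustrative assumptions, not anything from the original post):

import java.time.Duration;
import java.util.*;
import org.apache.kafka.clients.consumer.*;
import org.apache.kafka.common.TopicPartition;

public class PrioritisedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "prioritised-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Arrays.asList("orders-current", "orders-reprocess"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                boolean currentHasData = false;
                for (ConsumerRecord<String, String> r : records) {
                    if (r.topic().equals("orders-current")) currentHasData = true;
                    // handle(r) ...
                }
                // Collect the assigned partitions of the low-priority topic.
                Set<TopicPartition> reprocess = new HashSet<>();
                for (TopicPartition tp : consumer.assignment()) {
                    if (tp.topic().equals("orders-reprocess")) reprocess.add(tp);
                }
                // Pause old data while current data is flowing; resume it once the current topic is drained.
                if (currentHasData) consumer.pause(reprocess);
                else consumer.resume(reprocess);
            }
        }
    }
}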

Re: Process and punctuate contract

2017-10-23 Thread Tobias Adamson
ate. > -Matthias > On 10/22/17 11:43 PM, Tobias Adamson wrote: >> Hi >> What is the contract around Processor.process and punctuate? >> When will Kafka Streams commit the offset: after the process method is called successfully, or not until punctuate is called? >> Regards >> Toby

Process and punctuate contract

2017-10-23 Thread Tobias Adamson
Hi What is the contract around Processor.process and punctuate? When will Kafka Streams commit the offset: after the process method is called successfully, or not until punctuate is called? Regards Toby
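
For reference, a minimal sketch of a custom Processor against the older (pre-KIP-138) Processor API this thread is about; the processor logic is invented for illustration, and the commit behaviour noted in the comments follows the Streams commit interval (commit.interval.ms), as also discussed in the Kafka Streams / Processor thread further down:

import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;

public class UppercaseProcessor implements Processor<String, String> {
    private ProcessorContext context;

    @Override
    public void init(ProcessorContext context) {
        this.context = context;
        context.schedule(60_000L); // old API: request punctuate() roughly every 60s of stream time
    }

    @Override
    public void process(String key, String value) {
        // Returning from process() does not commit the offset by itself; Kafka Streams commits
        // on its own schedule (commit.interval.ms), or earlier if context.commit() requests one.
        context.forward(key, value == null ? null : value.toUpperCase());
    }

    @Override
    public void punctuate(long timestamp) {
        // Periodic work; offset commits are still governed by the commit interval.
    }

    @Override
    public void close() { }
}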

Re: Consumer Balancing multi topic, single partition

2017-01-12 Thread Tobias Adamson
> The default partition assignor is the range assignor, which works on a per-topic basis. If your topics have one partition only, they will all be assigned to the same consumer. You can change the assignor to org.apache.kafka.clients.consumer.RoundRobinAssignor. > On Thu, 12 Jan 2017 at 22:33
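
For completeness, switching the assignor is just a consumer configuration change; a minimal sketch (the group id and bootstrap address are illustrative):

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class RoundRobinAssignorConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "multi-topic-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        // Spread the many single-partition topics across all group members
        // instead of the default per-topic range assignment.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                  "org.apache.kafka.clients.consumer.RoundRobinAssignor");
        return props;
    }
}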

Consumer Balancing multi topic, single partition

2017-01-12 Thread Tobias Adamson
Hi We have a scenario where we have a lot of single-partition topics for ordering purposes. We then want to use multiple consumer processes listening to many topics. During testing it seems like one consumer process will always end up with all topics/partitions assigned to it and there is no

Re: Writing a consumer offset checker

2016-12-02 Thread Tobias Adamson
Hi Jon We have written an offset checker in Python for the new consumer groups. I've filed a bug against pykafka with some admin command support here: https://github.com/Parsely/pykafka/issues/620#issuecomment-264258141 that you could use right now. If not, I should be able to release an offset
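
For anyone doing this from the Java side instead of pykafka, later client versions (2.0+) expose committed group offsets through AdminClient; a rough sketch (the group id is an illustrative assumption):

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class OffsetChecker {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Fetch the committed offset for every partition the group has committed to.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                admin.listConsumerGroupOffsets("my-group")
                     .partitionsToOffsetAndMetadata()
                     .get();
            offsets.forEach((tp, om) ->
                System.out.printf("%s-%d committed offset %d%n",
                                  tp.topic(), tp.partition(), om.offset()));
        }
    }
}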

offset topics growing huge

2016-10-03 Thread Tobias Adamson
Hi We are using Kafka 0.10.1 with offset commits being stored inside of Kafka. After a while these topics become extremely large and we are wondering if we need to enable log.cleaner.enable=true (currently false) to make sure the internal offset topics get compacted and keep their size down?
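
For context, the internal __consumer_offsets topic is created with cleanup.policy=compact, but compaction only runs when the broker's log cleaner is enabled, so with the cleaner off the topic grows without bound. The relevant server.properties line, shown here as a sketch:

# Enable the log cleaner so the compacted __consumer_offsets topic actually gets compacted.
log.cleaner.enable=true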

Re: Question about applicability of Kafka Streams

2016-05-26 Thread Tobias Adamson
Hi Kim Maybe this example would work for you? https://github.com/apache/kafka/tree/trunk/streams/examples/src/main/java/org/apache/kafka/streams/examples/pageview It included

Re: Kafka Streams / Processor

2016-05-26 Thread Tobias Adamson
interval that is configured via "commit.interval.ms". > 3) Yes. > 4) Yes. > -Matthias > On 05/26/2016 02:36 PM, Tobias Adamson wrote: >> Hi >> We have a scenario where we could benefit from the new APIs instead of our
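
The commit interval mentioned above is an ordinary Streams setting; a minimal sketch of configuring it (the application id, bootstrap address and 10s value are illustrative):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsCommitIntervalConfig {
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "processor-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // How often Kafka Streams commits consumed offsets (and flushes state stores), in milliseconds.
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10_000);
        return props;
    }
}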

Kafka Streams / Processor

2016-05-26 Thread Tobias Adamson
Hi We have a scenario where we could benefit from the new APIs instead of our in-house ones. However, we have a couple of questions: 1. Is it feasible to save 2-3 MB values in the RocksDB store? 2. When is the offset committed as processed when using a custom Processor, is it when you call

No replica allocated to new Topic

2016-04-07 Thread Tobias Adamson
Hi We have an issue where a topic is created (auto-create) but no partition replicas are allocated. This issue has been replicated on 0.9.0.1 and 0.8.2.2. The environment has multiple concurrent consumers and producers accessing the Kafka cluster and when we create topics they randomly but

Error for Kafka / Topic has no leader and can't be listed with kafka-topics --describe

2016-03-08 Thread Tobias Adamson
Hi I posted this to IRC but maybe someone here has seen this before. Hello, I'm having some weird issues with Kafka 0.9 and can't find anything in JIRA. I'm running Kafka inside Docker/Kubernetes. All works fine when I deploy, but after a while I get the following in the publisher: WARN: Error while