Re: Message throughput per day - Kafka

2018-02-19 Thread siva prasad
Thanks for responding. I am not looking for throughput achieved by the broker. Metrics are needed to see how the platform is used for business, to know daily usage trends and variation after every release. It also helps with capacity planning. I am not the producer/consumer; my part is to host

Re: Message throughput per day - Kafka

2018-02-19 Thread Sharninder
> I am wondering if there is a way to get message throughput of Kafka brokers.
>
> Ex:
> 1) Number of messages sent per day to a Broker/Cluster
> 2) Number of messages consumed per day by a Broker/Cluster

I don't think it makes sense to have total metrics per day for

Re: Message throughput per day - Kafka

2018-02-19 Thread Sharath Gururaj
There are no statistics on a per-day basis. Kafka exposes metrics for producer throughput per second, both in terms of byte rate and number of messages. You'll have to write some sort of cron to periodically sample it. If you want exact numbers, then write a cron job to get the current
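
For context, a minimal sketch of that sampling approach, assuming the broker has JMX enabled on port 9999 and reading the standard BrokerTopicMetrics MBean; the host, port, and scheduling are placeholders, not from the thread:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    public class DailyMessageCount {
        public static void main(String[] args) throws Exception {
            // Assumes the broker was started with JMX enabled on port 9999.
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbsc = connector.getMBeanServerConnection();
                // Cumulative count of messages produced to this broker since startup.
                ObjectName messagesIn = new ObjectName(
                        "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec");
                long count = (Long) mbsc.getAttribute(messagesIn, "Count");
                System.out.println("MessagesInPerSec Count = " + count);
                // A daily cron run can store this value and subtract yesterday's
                // sample to get messages produced per day (modulo broker restarts).
            }
        }
    }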

Re: Message throughput per day - Kafka

2018-02-19 Thread siva prasad
Guys, do you have any details on the query? Cheers, Siva

On Fri, Feb 16, 2018 at 4:56 PM, siva prasad wrote:
> Hey Guys,
>
> I am wondering if there is a way to get message throughput of Kafka brokers.
>
> Ex:
> 1) Number of messages sent per day to a Broker/Cluster

Re: Testing with MockConsumer

2018-02-19 Thread Ted Yu
For #3, a better example would be in ConsumerCoordinator (around line 632).

    commitOffsetsAsync(allConsumedOffsets, new OffsetCommitCallback() {
        @Override
        public void onComplete(Map<TopicPartition, OffsetAndMetadata> offsets, Exception exception) {

FYI On Mon, Feb
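
The ConsumerCoordinator method referenced above is client-internal code; the same callback pattern on the public consumer API looks roughly like the following sketch (class and method names here are illustrative):

    import java.util.Map;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.clients.consumer.OffsetCommitCallback;
    import org.apache.kafka.common.TopicPartition;

    public class AsyncCommitExample {
        // Commits the given offsets asynchronously and logs any failure in the callback.
        static void commitAsyncWithCallback(Consumer<String, String> consumer,
                                            Map<TopicPartition, OffsetAndMetadata> offsets) {
            consumer.commitAsync(offsets, new OffsetCommitCallback() {
                @Override
                public void onComplete(Map<TopicPartition, OffsetAndMetadata> committed,
                                       Exception exception) {
                    if (exception != null) {
                        // Commit failed; log and decide whether to retry or fall back to commitSync().
                        System.err.println("Async commit failed for " + committed + ": " + exception);
                    }
                }
            });
        }
    }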

Re: KafkaUtils.createStream(..) is removed for API

2018-02-19 Thread Cody Koeninger
I can't speak for committers, but my guess is it's more likely for DStreams in general to stop being supported before that particular integration is removed. On Sun, Feb 18, 2018 at 9:34 PM, naresh Goud wrote: > Thanks Ted. > > I see createDirectStream is

Re: timestamp-oriented API

2018-02-19 Thread Matthias J. Sax
The broker maintains a timestamp index and uses this timestamp index to answer the "offsetsForTimes" request. The returned offset guarantees that there is no record with a smaller timestamp and smaller offset in the topic. Thus, if there are out-of-order records in the topic, and you start
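
For reference, that request is exposed on the consumer as offsetsForTimes; a minimal sketch of seeking by timestamp (class and method names are illustrative):

    import java.util.Collections;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class SeekToTimestamp {
        // Seeks the consumer to the earliest offset whose record timestamp is >= the given time.
        static void seekToTimestamp(Consumer<String, String> consumer,
                                    TopicPartition partition, long timestampMs) {
            Map<TopicPartition, OffsetAndTimestamp> result =
                    consumer.offsetsForTimes(Collections.singletonMap(partition, timestampMs));
            OffsetAndTimestamp offsetAndTimestamp = result.get(partition);
            if (offsetAndTimestamp != null) {
                consumer.seek(partition, offsetAndTimestamp.offset());
            }
            // A null value means no record with timestamp >= timestampMs exists in the partition.
        }
    }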

Re: Aggregation events Stream Kafka

2018-02-19 Thread Matthias J. Sax
Using Kafka's Streams API sounds like a very good solution to your problem. I'd recommend checking out the docs and examples: https://kafka.apache.org/10/documentation/streams/ https://github.com/confluentinc/kafka-streams-examples -Matthias On 2/19/18 1:19 AM, Maria Pilar wrote: > Hi > > I
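
As a rough illustration of the kind of aggregation discussed in this thread, a minimal Streams sketch counting events per key; the topic names, application id, and serdes are placeholders, not from the thread:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Produced;

    public class EventCountApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-aggregation-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("input-events");
            // Count events per key; the resulting KTable is continuously updated.
            KTable<String, Long> countsPerKey = events.groupByKey().count();
            // Publish the aggregate changelog to an output topic.
            countsPerKey.toStream().to("aggregated-events",
                    Produced.with(Serdes.String(), Serdes.Long()));

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }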

connecting to 2 different clusters with different sasl realms in a single process

2018-02-19 Thread Michal Hariš
Hi all, I have one cluster with a Kerberos authenticator and another with a simple authenticator. I need to be able to consume certain topics from the kerberized cluster and produce into the cluster with the simple auth. The ACLs on both clusters work well for the purpose, but I can't see the way how
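
One approach that could apply to this kind of setup (not taken from the thread; hostnames, principal, and keytab path are placeholders) is to give each client its own security settings in one process, using the per-client sasl.jaas.config property instead of a shared JVM-wide JAAS file:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class TwoClusterClients {
        public static void main(String[] args) {
            // Consumer for the Kerberized cluster: JAAS config is supplied per client.
            Properties consumerProps = new Properties();
            consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kerberos-cluster:9093");
            consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "bridge-consumer");
            consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            consumerProps.put("security.protocol", "SASL_PLAINTEXT");
            consumerProps.put("sasl.mechanism", "GSSAPI");
            consumerProps.put("sasl.kerberos.service.name", "kafka");
            consumerProps.put("sasl.jaas.config",
                    "com.sun.security.auth.module.Krb5LoginModule required "
                            + "useKeyTab=true keyTab=\"/etc/security/keytabs/client.keytab\" "
                            + "principal=\"client@EXAMPLE.COM\";");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);

            // Producer for the other cluster: plain connection, no SASL settings,
            // so the two clients do not share security configuration.
            Properties producerProps = new Properties();
            producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "plain-cluster:9092");
            producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);
        }
    }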

Re: Testing with MockConsumer

2018-02-19 Thread Gabriel Giussi
Hi Ted, my mistake was believing that committed offsets are used on the next poll, but that is not the case. > The offsets committed using this
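
To illustrate the distinction being discussed, a small sketch (class and method names are illustrative): poll() continues from the consumer's in-memory position, while the committed offset is only consulted when a consumer (re)starts or a partition is reassigned.

    import org.apache.kafka.clients.consumer.Consumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetInspection {
        static void printOffsets(Consumer<String, String> consumer, TopicPartition tp) {
            long position = consumer.position(tp);               // where the next poll() will read from
            OffsetAndMetadata committed = consumer.committed(tp); // last committed offset, may be null
            System.out.println("position=" + position + ", committed=" + committed);
        }
    }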

Re: timestamp-oriented API

2018-02-19 Thread Steve Jang
If you set *message.timestamp.type* (or *log.message.timestamp.type*) to LogAppendTime, this would make sense. I am new to Kafka, too, and if this were set to CreateTime, I don't know what the behavior would be. There is *message.timestamp.difference.max.ms
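
For context on the configs mentioned above: with CreateTime the producer-supplied record timestamp is kept, while with LogAppendTime the broker overwrites it at append time. A sketch of a producer setting an explicit timestamp (topic, key, and value are placeholders):

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TimestampedSend {
        // The ProducerRecord constructor accepts an explicit timestamp; it is honored
        // under CreateTime and replaced by the broker under LogAppendTime.
        static void sendWithTimestamp(Producer<String, String> producer, long eventTimeMs) {
            producer.send(new ProducerRecord<>("events", null, eventTimeMs, "key", "value"));
        }
    }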

Kafka broker throwing ConfigException Invalid value configuration log.segment.bytes: Not a number of type INT

2018-02-19 Thread Debraj Manna
Cross-posting from Stack Overflow. I have a single-node Kafka broker and a single-node ZooKeeper, with server.properties like below:

broker.id=0
num.network.threads=3

timestamp-oriented API

2018-02-19 Thread Xavier Noria
In the mental model I am building of how Kafka works (new to this), the broker keeps offsets by consumer group, and individual consumers basically depend on the offset of the consumer group they join. Also consumer groups may opt to start from the beginning. OK, in that mental model there is a

Re: Testing with MockConsumer

2018-02-19 Thread Ted Yu
For #2, I think the assumption is that the records are processed by the loop: https://github.com/apache/kafka/blob/73be1e1168f91ee2a9d68e1d1c75c14018cf7d3a/clients/src/main/java/org/apache/kafka/clients/consumer/MockConsumer.java#L164 On Mon, Feb 19, 2018 at 4:39 AM, Gabriel Giussi

RE: Kafka control messages

2018-02-19 Thread adrien ruffie
Hi Ben, it depends on your consumer group configuration. If all your consumers are in different groups (only one consumer per consumer group), you can use 2), because when all the consumer instances have different consumer groups, the control records will be broadcast to all your
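
A bare sketch of that broadcast pattern (bootstrap servers, topic name, and the group id scheme are placeholders): each instance uses its own consumer group, so every instance receives every record on the control topic instead of sharing partitions.

    import java.util.Collections;
    import java.util.Properties;
    import java.util.UUID;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class BroadcastConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // A unique group id per instance means no partition sharing between instances.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "control-" + UUID.randomUUID());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("control-topic"));
        }
    }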

Kafka control messages

2018-02-19 Thread Young, Ben
Hi, We have a situation where we are streaming data in a Kafka topic to be processed. The topic has multiple partitions. From time to time we need to add control messages to the stream, and these need to be coherent with the data stream (e.g. "replay from point X to this new location"). Because

Testing with MockConsumer

2018-02-19 Thread Gabriel Giussi
Hi, I'm trying to use MockConsumer to test my application code, but I've faced a couple of limitations and I want to know if there are workarounds or something that I'm overlooking. Note: I'm using kafka-clients v0.11.0.2. 1. Why the addRecord
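
For readers following this thread, a bare-bones MockConsumer test setup looks roughly like the sketch below (topic, partition, and values are placeholders); the questions in the thread concern this style of usage.

    import java.util.Collections;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.MockConsumer;
    import org.apache.kafka.clients.consumer.OffsetResetStrategy;
    import org.apache.kafka.common.TopicPartition;

    public class MockConsumerSketch {
        public static void main(String[] args) {
            MockConsumer<String, String> consumer = new MockConsumer<>(OffsetResetStrategy.EARLIEST);
            TopicPartition tp = new TopicPartition("test-topic", 0);
            consumer.assign(Collections.singletonList(tp));
            // Needed so the EARLIEST reset strategy has a starting offset to use.
            consumer.updateBeginningOffsets(Collections.singletonMap(tp, 0L));
            // Records added here are handed out by the next poll() call.
            consumer.addRecord(new ConsumerRecord<>("test-topic", 0, 0L, "key", "value"));
            ConsumerRecords<String, String> records = consumer.poll(100);
            System.out.println("polled " + records.count() + " record(s)");
        }
    }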

Re: Zookeeper Error

2018-02-19 Thread M. Manna
Just a heads up. Windows doesn’t clean up logs. There’s a pull request pending for issue #1194. Regards, On Mon, 19 Feb 2018 at 09:14, Maria Pilar wrote: > Now it's working properly, I have changed the server.id in the > zookeeper.properties and I have created topics into

Aggregation events Stream Kafka

2018-02-19 Thread Maria Pilar
Hi, I need to create aggregation events and publish them to another topic using the Kafka Streams API. I have usually done event aggregation with Apache Spark; however, that requires including a new business layer in our E2E solution. I have checked the possibility of using the aggregate method with KTable. Do

Re: Zookeeper Error

2018-02-19 Thread Maria Pilar
Yes, I made a spelling mistake. I have changed it. Thanks On 18 February 2018 at 11:49, Gerd König wrote: > Hi, > > in your message there is "locahost", but I am pretty sure you wanted to use > "localhost", including the "l", right? > This one will usually be

Re: Zookeeper Error

2018-02-19 Thread Maria Pilar
Now it's working properly. I have changed the server.id in the zookeeper.properties and I have created topics in the multi-node setup. I'm using Windows because it's a simple proof of concept. Thanks On 18 February 2018 at 03:15, Ted Yu wrote: > What are the entries in