Re: Max message size and compression

2017-06-22 Thread Eli Jordan
Thanks for the reply Mayank. Do you know if this is documented somewhere? I wasn't able to find mention of it. Thanks, Eli > On 22 Jun 2017, at 05:50, mayank rathi wrote: > > If you are compressing messages then the size of the "compressed" message should be > less than what's specified in these paramet

Re: ticketing system Design

2017-06-22 Thread Sameer Kumar
Hi Abhimanya, You can very well do it through Kafka, Kafka Streams and something like Redis. I would design it to be something like this:
1. Topic 1 - Pending tasks
2. Topic 2 - Reassigned tasks
3. Topic 3 - Task-to-resource mapping
Some other components could be:
4. Redis hash (task progress
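For illustration, a minimal sketch of the first topic plus the Redis hash from this design; the topic name, task key, and the Jedis client are assumptions for the sketch, not part of the original proposal:

    // Hypothetical sketch: task event -> "pending-tasks" topic, progress -> Redis hash
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import redis.clients.jedis.Jedis;

    public class TicketingSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (Producer<String, String> producer = new KafkaProducer<>(props);
                 Jedis redis = new Jedis("localhost")) {
                // Topic 1: pending tasks, keyed by task id
                producer.send(new ProducerRecord<>("pending-tasks", "task-42", "open"));
                // Redis hash holding task progress, as item 4 above suggests
                redis.hset("task:task-42", "progress", "0");
            }
        }
    }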

Re: [DISCUSS] Streams DSL/StateStore Refactoring

2017-06-22 Thread Eno Thereska
Note that while I agree with the initial proposal (withKeySerdes, withJoinType, etc), I don't agree with things like .materialize(), .enableCaching(), .enableLogging(). The former maintain the declarative DSL, while the latter break the declarative part by mixing system decisions into the DSL. I

Re: [DISCUSS] Streams DSL/StateStore Refactoring

2017-06-22 Thread Jan Filipiak
Hi Eno, I am less interested in the user-facing interface and more in the actual implementation. Any hints on where I can follow the discussion on this? I still want to discuss upstreaming of KAFKA-3705 with someone. Best, Jan On 21.06.2017 17:24, Eno Thereska wrote: (cc’ing user-list too)

Kafka 0.11.0 release

2017-06-22 Thread Raghav
Hi, Would anyone know when Kafka 0.11.0 is scheduled to be released? Thanks. -- Raghav

Re: Handling 2 to 3 Million Events before Kafka

2017-06-22 Thread SenthilKumar K
Thanks Barton.. I'll look into these .. On Thu, Jun 22, 2017 at 7:12 AM, Garrett Barton wrote: > Getting good concurrency in a webapp is more than doable. Check out these > benchmarks: > https://www.techempower.com/benchmarks/#section=data-r14&hw=ph&test=db > I linked to the single query one be

Re: Handling 2 to 3 Million Events before Kafka

2017-06-22 Thread SenthilKumar K
Hi Barton - I think we can use the async producer with callback API(s) to keep track of which events failed .. --Senthil On Thu, Jun 22, 2017 at 4:58 PM, SenthilKumar K wrote: > Thanks Barton.. I'll look into these .. > > On Thu, Jun 22, 2017 at 7:12 AM, Garrett Barton > wrote: > >> Getting good
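For reference, a minimal sketch of an async send with a callback that records failures; the topic name and configs are illustrative:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class AsyncSendSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("events", "key-1", "value-1"), new Callback() {
                    public void onCompletion(RecordMetadata metadata, Exception exception) {
                        if (exception != null) {
                            // this event failed; log it or re-queue it for retry
                            System.err.println("send failed: " + exception);
                        }
                    }
                });
            }
        }
    }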

Re: Max message size and compression

2017-06-22 Thread mayank rathi
Hello Eli, This is from Kafka: The Definitive Guide (by Neha Narkhede, Gwen Shapira, and Todd Palino), Chapter 2, Installing Kafka: "The Kafka broker limits the maximum size of a message that can be produced, configured by the message.max.bytes parameter which defaults to 1000000, or 1 megabyte. A
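For reference, a sketch of where that setting lives; the value shown is the default the book quotes:

    # server.properties -- broker-side cap on the size of a (compressed) produced message
    message.max.bytes=1000000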

consuming ***-changelog topic encounters IllegalArgumentException: Window startMs time cannot be negative

2017-06-22 Thread sy.pan
Hi: when calling KGroupedStream.count(Windows windows, String storeName), storeName-changelog is auto-created as an internal topic, with key type Windowed<...> and value type Long. I try to consume from the internal storeName-changelog with a code sample like: final Deserializer<Windowed<...>> windowedDeserializer = new
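A hedged guess at how the truncated sample continues, assuming a String record key (WindowedDeserializer lives in an internal package in this release line):

    import org.apache.kafka.common.serialization.Deserializer;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.kstream.internals.WindowedDeserializer;

    public class ChangelogConsumerSketch {
        // deserializer for the changelog's Windowed<String> keys
        static final Deserializer<Windowed<String>> windowedDeserializer =
                new WindowedDeserializer<>(new StringDeserializer());
    }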

Re: consuming ***-changelog topic encounters IllegalArgumentException: Window startMs time cannot be negative

2017-06-22 Thread sy.pan
I explicitly call KTable.to(Serde<Windowed<...>>, Serdes.Long(), String topic) to save the same data to another topic (manually created by myself), and the exception is gone. So does the **-changelog internal topic have a special key format? (even though the key type is the same = Windowed<...>)

help! Kafka failover does not work as expected in Kafka quick start tutorial

2017-06-22 Thread 夏昀
hello: I am trying the quickstart of the Kafka documentation; the link is https://kafka.apache.org/quickstart. When I moved to Step 6: Setting up a multi-broker cluster, I deployed 3 Kafka broker instances. When I killed either server-1 or server-2, everything went well as the document says. But when I ki

Re: consuming ***-changelog topic encounters IllegalArgumentException: Window startMs time cannot be negative

2017-06-22 Thread Damian Guy
Hi, Yes the key format used by a window store changelog is the same format as is stored in RocksDB. You can see what the format is by looking here: https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/state/internals/WindowStoreUtils.java Thanks, Damian On Thu
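Reading the linked class, the changelog key layout in this release line appears to be [serialized record key][8-byte big-endian timestamp][4-byte sequence number]; a sketch of splitting it by hand (the layout is an inference from that source, not a documented contract):

    import java.nio.ByteBuffer;
    import java.util.Arrays;

    public class WindowKeySketch {
        private static final int SUFFIX = 8 + 4;  // timestamp + sequence number

        // window start timestamp sits just before the 4-byte sequence number
        static long windowStartMs(byte[] binaryKey) {
            return ByteBuffer.wrap(binaryKey).getLong(binaryKey.length - SUFFIX);
        }

        // everything before the 12-byte suffix is the serialized record key
        static byte[] recordKeyBytes(byte[] binaryKey) {
            return Arrays.copyOfRange(binaryKey, 0, binaryKey.length - SUFFIX);
        }
    }

This also explains the exception in this thread: deserializing the raw binary key with a plain windowed deserializer that expects a different layout mixes the suffix bytes into the timestamp, which is how a negative startMs can appear.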

Re: consuming ***-changelog topic encounters IllegalArgumentException: Window startMs time cannot be negative

2017-06-22 Thread sy.pan
Thank you very much, Damian ^_^ > On 22 Jun 2017, at 22:43, Damian Guy wrote: > > Hi, > Yes the key format used by a window store changelog is the same format as > is stored in RocksDB. You can see what the format is by looking here: > https://github.com/apache/kafka/blob/trunk/streams/src/main/java/or

Re: help! Kafka failover does not work as expected in Kafka quick start tutorial

2017-06-22 Thread Hans Jespersen
Do you list all three brokers on your consumer's bootstrap-server list? -hans > On Jun 22, 2017, at 5:15 AM, 夏昀 wrote: > > hello: > I am trying the quickstart of the Kafka documentation; the link is > https://kafka.apache.org/quickstart. When I moved to Step 6: Setting up a > multi-broker cluster, I ha
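i.e., with the quickstart's default ports and topic, something like:

    bin/kafka-console-consumer.sh \
        --bootstrap-server localhost:9092,localhost:9093,localhost:9094 \
        --from-beginning --topic my-replicated-topic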

Re: [VOTE] 0.11.0.0 RC1

2017-06-22 Thread Ismael Juma
Hi Tom, We are going to do another RC to include Apurva's significant performance improvement when transactions are enabled: https://github.com/apache/kafka/commit/f239f1f839f8bcbd80cce2a4a8643e15d340be8e Given that, we can also include the ProducerPerformance changes that Apurva did to find and

Re: [VOTE] 0.11.0.0 RC1

2017-06-22 Thread Tom Crayford
That's fair, and nice find with the transaction performance improvement! Once the RC is out, we'll do a final round of performance testing with the new ProducerPerformance changes enabled. I think it's fair that this shouldn't delay the release. Is there an official stance on what should and shou

Re: Kafka 0.11.0 release

2017-06-22 Thread Guozhang Wang
Raghav, We are going through the voting process now, expecting to have another RC and release in a few more days. Guozhang On Thu, Jun 22, 2017 at 3:59 AM, Raghav wrote: > Hi > > Would anyone know when Kafka 0.11.0 is scheduled to be released? > > Thanks. > > -- > Raghav > -- -- Guoz

Aggregation operations and Joins not working as I would expect.

2017-06-22 Thread Daniel Del Castillo Perez
Hi all, I’m playing with Kafka Streams 0.10.2.1 and I’m having some issues here which I hope you can help me clarify/understand. In a hypothetical scenario, I have 2 source streams – clicks and orders – which I’m trying to join to determine from which page the purchase has been made.
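For what it's worth, a minimal windowed stream-stream join in the 0.10.2.1 API looks roughly like this; the topic names, String value types, and the 5-minute window are illustrative assumptions, and both streams must be co-partitioned on the join key (e.g. a user id):

    import java.util.concurrent.TimeUnit;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.JoinWindows;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class ClickOrderJoinSketch {
        public static void main(String[] args) {
            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> clicks =
                    builder.stream(Serdes.String(), Serdes.String(), "clicks");
            KStream<String, String> orders =
                    builder.stream(Serdes.String(), Serdes.String(), "orders");

            // emit click->order pairs for events within 5 minutes of each other
            KStream<String, String> purchasePages = clicks.join(
                    orders,
                    (click, order) -> click + "=>" + order,
                    JoinWindows.of(TimeUnit.MINUTES.toMillis(5)),
                    Serdes.String(), Serdes.String(), Serdes.String());
        }
    }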

Re: [DISCUSS]: KIP-161: streams record processing exception handlers

2017-06-22 Thread Eno Thereska
Answers inline: > On 22 Jun 2017, at 03:26, Guozhang Wang wrote: > > Thanks for the updated KIP, some more comments: > > 1.The config name is "default.deserialization.exception.handler" while the > interface class name is "RecordExceptionHandler", which is more general > than the intended purp

Can mirror maker automatically compress messages based on source settings

2017-06-22 Thread tao xiao
Hi team, In my experimentation, mirror maker doesn't compress messages sent to the target broker if it is not configured to do so, even when the messages in the source broker are compressed. I understand the current implementation of mirror maker has no visibility into what compression codec the source me
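If the goal is for the mirror to compress regardless of the source codec, the usual workaround is to set it on the mirror maker's producer config (codec name illustrative):

    # producer.properties, passed to kafka-mirror-maker.sh via --producer.config
    compression.type=snappy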

Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread karan alang
Hi All - How do I go about deleting data from Kafka topics? I have Kafka 0.10 installed. I tried setting the parameter of the topic as shown below -> $KAFKA10_HOME/bin/kafka-topics.sh --zookeeper localhost:2161 --alter --topic mmtopic6 --config retention.ms=1000 I was expecting to have the data p

Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread karan alang
Hi All - version - kafka 0.10 I'm publishing data into a Kafka topic using command line, and reading the data using kafka console consumer. *Publish command ->* $KAFKA_HOME/bin/kafka-verifiable-producer.sh --topic mmtopic1 --max-messages 100 --broker-list localhost:9092,localhost:9093,localhost:909

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread Subhash Sriram
How many partitions are in your topic? On Thu, Jun 22, 2017 at 3:33 PM, karan alang wrote: > Hi All - > > version - kafka 0.10 > I'm publishing data into a Kafka topic using command line, > and reading the data using kafka console consumer > > *Publish command ->* > > $KAFKA_HOME/bin/kafka-verifia

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread Paolo Patierno
Kafka guarantees message ordering at the partition level, not across partitions at the topic level. Out-of-order reading is possible if your topic has more than one partition. From: Subhash Sriram Sent: Thursday, 22 June, 21:37 Subject: Re: Kafka 0.10 - kafka console consumer not reading the d

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan, The other broker config that plays a role here is "log.retention.check.interval.ms". For a low log retention time like in your example, if this broker config value is much higher, the broker won't delete old logs regularly enough. --Vahid From: karan alang To: users@kaf
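For reference, the default for that broker setting is 5 minutes; lowering it in server.properties makes deletion checks more frequent (the value below is illustrative):

    # server.properties -- how often the broker checks for log segments eligible for deletion
    log.retention.check.interval.ms=30000   # default is 300000 (5 minutes)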

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread karan alang
Hi Subhash, number of partitions - 3 On Thu, Jun 22, 2017 at 12:37 PM, Subhash Sriram wrote: > How many partitions are in your topic? > > On Thu, Jun 22, 2017 at 3:33 PM, karan alang > wrote: > > > Hi All - > > > > version - kafka 0.10 > > I'm publishing data into a Kafka topic using command lin

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread Subhash Sriram
Hi Karan, Yeah, as per Paolo's point, keep in mind that Kafka does not guarantee order across partitions, only within a partition. If you publish messages to a topic with 3 partitions, it will only be guaranteed that they are consumed in order within each partition. You can retry your test by pu
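e.g., a single-partition topic for the re-test (topic name illustrative, zookeeper address taken from earlier in this thread):

    bin/kafka-topics.sh --zookeeper localhost:2161 --create \
        --topic ordertest --partitions 1 --replication-factor 1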

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread karan alang
got it, thanks! On Thu, Jun 22, 2017 at 12:40 PM, Paolo Patierno wrote: > Kafka guarantees message ordering at the partition level, not across > partitions at the topic level. Out-of-order reading is possible if > your topic has more than one partition. > > From: Subhash Sriram > Sent: Thursda

Re: Kafka 0.10 - kafka console consumer not reading the data in order that it was published

2017-06-22 Thread karan alang
Hey Subhash, thanks, I was able to test this out with a 1-partition topic and verify this. On Thu, Jun 22, 2017 at 1:39 PM, Subhash Sriram wrote: > Hi Karan, > > Yeah, as per Paolo's point, keep in mind that Kafka does not guarantee > order across partitions, only within a partition. If you publi

Re: Aggregation operations and Joins not working as I would expect.

2017-06-22 Thread Matthias J. Sax
Hi, there are two things: 1) aggregation operators produce an output record each time the aggregate is updated. Thus, you would get 6 records in your example. At the same time, we deduplicate consecutive outputs with an internal cache, and the cache is flushed non-deterministically (either partly f
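If you want to see every single update while experimenting, the dedup cache can be switched off via the cache size config (available since 0.10.1); a minimal sketch:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class NoCacheConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // a cache of zero bytes disables deduplication, so every update is forwarded
            props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
        }
    }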

How does Zookeeper node failure impact Kafka cluster?

2017-06-22 Thread mayank rathi
Hello All, Let's assume I have a 3-Node Zookeeper ensemble and a 3-Node Kafka Cluster in my Kafka environment, and one of the ZK nodes goes down. What would be the impact of 1 ZK node failure on the Kafka Cluster? I am just trying to understand the difference between a 2-node Zookeeper ensemble and a 3-node Zoo
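For context: ZooKeeper stays available only while a majority quorum, i.e. floor(n/2)+1 nodes, is up. A 3-node ensemble has a quorum of 2, so it tolerates one node failure; a 2-node ensemble also has a quorum of 2, so it tolerates none. Losing 1 of 3 ZK nodes therefore leaves Kafka fully functional, though a second ZK failure would make the ensemble unavailable.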

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread karan alang
Hi Vahid, somehow, the changes suggested don't seem to be taking effect, and I don't see the data being purged from the topic. Here are the steps I followed - 1) topic is set with param -- retention.ms=1000 $KAFKA10_HOME/bin/kafka-topics.sh --describe --topic topicPurge --zookeeper localhost:216

[VOTE] 0.11.0.0 RC2

2017-06-22 Thread Ismael Juma
Hello Kafka users, developers and client-developers, This is the third candidate for release of Apache Kafka 0.11.0.0. This is a major version release of Apache Kafka. It includes 32 new KIPs. See the release notes and release plan ( https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+

Re: [VOTE] 0.11.0.0 RC2

2017-06-22 Thread Ismael Juma
A quick note on notable changes since rc1: 1. A significant performance improvement if transactions are enabled: https://github.com/apache/kafka/commit/f239f1f839f8bcbd80cce2a4a8643e15d340be8e 2. Fixed a controller regression if many brokers are started simultaneously: https://github.com/apache/ka

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan, I think the issue is in the verification step, because the start and end offsets are not going to be reset when messages are deleted. Have you checked whether a consumer would see the messages that are supposed to be deleted? Thanks. --Vahid From: karan alang To: users@kafka.apa
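e.g., against the topic and broker ports from this thread:

    bin/kafka-console-consumer.sh --bootstrap-server localhost:6092 \
        --topic topicPurge --from-beginning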

Re: kafka version 0.10.2.1 consumer cannot get the message

2017-06-22 Thread Caokun (Jack, Platform)
The issue is in the zookeeper and kafka configuration. Kafka server.properties:
#advertised.host.name=10.179.165.7 # commented at 20170621
#advertised.listeners=PLAINTEXT://0.0.0.0:9080 # commented at 20170621
#port=9080 # commented at 20170621
listeners=PLAINTEXT://10.179.165.7:9080 # changed from 0.0.0.0 to

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread Vahid S Hashemian
Hi Karan, Just to clarify, with `--time -1` you are getting back the latest offset of the partition. If you do `--time -2` you'll get the earliest valid offset. So, let's say the latest offset of partition 0 of topic 'test' is 100. When you publish 5 messages to the partition, and before retenti

Re: Deleting/Purging data from Kafka topics (Kafka 0.10)

2017-06-22 Thread karan alang
Hi Vahid, here is the output of the GetOffsetShell commands (with --time -1 & -2) $KAFKA10_HOME/bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:6092,localhost:6093,localhost:6094,localhost:6095 --topic topicPurge --time -2 --partitions 0,1,2 topicPurge:0:67 topicPurge:1