kafka-plugin in ranger didn't succeed

2016-10-06 Thread Duan Xiong
Dear Sir: I'm sorry to bother you. I am a student, and I have a great interest in Apache Kafka. I found this email address at http://kafka.apache.org/contact , and I find this is a very professional club, so I am sending a letter for help. In the CLI, my Kafka works successfully, so I am very confused. I

Re: Printing to stdout from KStreams?

2016-10-06 Thread Matthias J. Sax
If you restart your application, it will resume where it left off (same as any other Kafka consumer that uses group management and commits offsets). If you want to reprocess data from scratch, you need to reset your application using

Re: Printing to stdout from KStreams?

2016-10-06 Thread Ali Akhtar
Thanks. I'm encountering a strange issue. If I create messages through console-producer.sh on a new topic, things work fine. But on the topic that I need to consume, the messages are being produced via the Go Kafka plugin. On this topic, at first, nothing happens when the stream starts (i.e. it

Re: Printing to stdout from KStreams?

2016-10-06 Thread Matthias J. Sax
Sure. Just use #print() or #writeAsText() -Matthias On 10/6/16 6:25 PM, Ali Akhtar wrote: > What the subject says. For dev, it would be a lot easier if > debugging info could be printed to stdout instead of another topic, > where it will persist.
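A minimal sketch of what #print() and #writeAsText() look like in a 0.10.x Streams app; both write to stdout / a file rather than to another topic. The topic name, serdes, and app id below are illustrative, not from the thread:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KStreamBuilder;
    import java.util.Properties;

    public class DebugPrint {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "debug-print-app"); // illustrative app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            KStreamBuilder builder = new KStreamBuilder();
            KStream<String, String> stream =
                builder.stream(Serdes.String(), Serdes.String(), "some-topic"); // illustrative topic

            stream.print();                          // each record goes to stdout
            // stream.writeAsText("/tmp/debug.txt"); // or to a file instead

            new KafkaStreams(builder, props).start();
        }
    }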

Printing to stdout from KStreams?

2016-10-06 Thread Ali Akhtar
What the subject says. For dev, it would be a lot easier if debugging info could be printed to stdout instead of another topic, where it will persist. Any ideas if this is possible?

Re: [VOTE] 0.10.1.0 RC0

2016-10-06 Thread Henry Cai
Why is this feature in the release note? - [KAFKA-264] - Change the consumer-side load balancing and distributed co-ordination to use a consumer co-ordinator. I thought this was already done in 2015. On Thu, Oct 6, 2016 at 4:55 PM,

Re: [VOTE] 0.10.1.0 RC0

2016-10-06 Thread Vahid S Hashemian
Jason, Thanks a lot for managing this release. I ran the quick start (Steps 2-8) with this release candidate on Ubuntu, Windows, and Mac and they mostly look great. These are some, hopefully, minor items and gaps I noticed with respect to the existing quick start documentation (and the updated

Re: Handling out of order messages without KTables

2016-10-06 Thread Matthias J. Sax
Yes, that should work. On 10/6/16 3:54 PM, Ali Akhtar wrote: > Thanks! That looks perfect. > > Last q.. is there any shortcut to having the JSON string messages > automatically get deserialized to their equivalent Java class via > Jackson, or such?

Re: Handling out of order messages without KTables

2016-10-06 Thread Ali Akhtar
Thanks! That looks perfect. Last q.. is there any shortcut to having the JSON string messages automatically get deserialized to their equivalent Java class via Jackson, or such? Perhaps I can write a Serde impl which takes the java.lang.Class of the class to be mapped, and maps it via Jackson? On
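A rough sketch of the Serde Ali is proposing, assuming Jackson 2.x on the classpath (the class name and error handling are made up for illustration):

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.common.serialization.Deserializer;
    import org.apache.kafka.common.serialization.Serde;
    import org.apache.kafka.common.serialization.Serializer;
    import java.util.Map;

    // Generic JSON Serde: maps bytes <-> T via Jackson, given T's Class up front.
    public class JsonSerde<T> implements Serde<T>, Serializer<T>, Deserializer<T> {
        private final ObjectMapper mapper = new ObjectMapper();
        private final Class<T> type;

        public JsonSerde(Class<T> type) { this.type = type; }

        @Override public void configure(Map<String, ?> configs, boolean isKey) {}
        @Override public void close() {}

        @Override public byte[] serialize(String topic, T data) {
            try { return data == null ? null : mapper.writeValueAsBytes(data); }
            catch (Exception e) { throw new RuntimeException(e); }
        }

        @Override public T deserialize(String topic, byte[] bytes) {
            try { return bytes == null ? null : mapper.readValue(bytes, type); }
            catch (Exception e) { throw new RuntimeException(e); }
        }

        @Override public Serializer<T> serializer() { return this; }
        @Override public Deserializer<T> deserializer() { return this; }
    }

It would then be passed wherever a value serde is expected, e.g. builder.stream(Serdes.String(), new JsonSerde<>(MyEvent.class), "some-topic"), where MyEvent is whatever POJO the JSON maps to.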

Re: Handling out of order messages without KTables

2016-10-06 Thread Matthias J. Sax
Exactly. You need to set the key using KStream#selectKey() and re-distribute data via #through(). About timestamps: you can provide a custom TimestampExtractor that returns the JSON-embedded TS instead of the record TS (as DefaultTimestampExtractor
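A sketch of such an extractor against the 0.10.x single-method TimestampExtractor interface; the embedded field name "timestamp" and the fallback behavior here are assumptions:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class JsonTimestampExtractor implements TimestampExtractor {
        private final ObjectMapper mapper = new ObjectMapper();

        @Override
        public long extract(ConsumerRecord<Object, Object> record) {
            try {
                // Assumes String values carrying a top-level "timestamp" field.
                return mapper.readTree((String) record.value()).get("timestamp").asLong();
            } catch (Exception e) {
                return record.timestamp(); // fall back to the record's own TS
            }
        }
    }

It is registered via props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG, JsonTimestampExtractor.class.getName()).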

Re: Handling out of order messages without KTables

2016-10-06 Thread Ali Akhtar
Sorry, to be clear: - Producers post to topic A - Consumers of topic A receive the data, parse it to find the keys, and post the correct key + message to Topic B - Topic B is treated as a KTable by the 2nd consumer layer, and it's this layer which does the writes to ensure 'last one wins' (Assuming

Re: Handling out of order messages without KTables

2016-10-06 Thread Ali Akhtar
Thanks for the reply. It's not possible to provide keys, unfortunately. (The producer is written by a colleague, and said colleague just wants to provide whatever data the API gives, and leave all processing of the data to me.) Perhaps I can use an intermediate Kafka topic, and have producers post to

Re: Kafka 10 Consumer Reading from Kafka 8 Cluster?

2016-10-06 Thread Craig Swift
Ok great - thanks for the clarification. Exactly what I needed. :) Craig On Thu, Oct 6, 2016 at 2:09 PM, Scott Reynolds wrote: > you cannot use a k10 client with a k8 cluster. The protocol changed > > You CAN use a k8 client with a k10 cluster. > > On Thu, Oct 6, 2016 at

Re: Handling out of order messages without KTables

2016-10-06 Thread Matthias J. Sax
It is not global in this sense. Thus, you need to ensure that records updating the same product go to the same instance. You can ensure this by giving all records of the same product the same key and "groupByKey" before processing the data.
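A sketch of the re-keying step Matthias describes; the product-id field and the intermediate topic name are made up, and the topic must already exist:

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.kstream.KStream;

    public class RekeyByProduct {
        // Give every record of the same product the same key, then force a
        // re-distribution through an intermediate topic, so all updates for
        // one product land on the same partition / application instance.
        static KStream<String, String> rekey(KStream<String, String> events) {
            return events
                .selectKey((key, json) -> extractProductId(json))
                .through(Serdes.String(), Serdes.String(), "events-by-product");
        }

        // Illustrative helper: pull a "product_id" field out of the JSON payload.
        static String extractProductId(String json) {
            try { return new ObjectMapper().readTree(json).get("product_id").asText(); }
            catch (Exception e) { throw new RuntimeException(e); }
        }
    }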

Re: Snazzy new look to our website

2016-10-06 Thread Jason Gustafson
Thanks Mickael and Jonathan for reporting the problem with the javadocs. The links should be fixed now. -Jason On Thu, Oct 6, 2016 at 10:59 AM, Jonathan Bond wrote: > Hi, > > I'm having a problem with the new website. Whenever I try to follow a link > to the Kafka

Re: Kafka 10 Consumer Reading from Kafka 8 Cluster?

2016-10-06 Thread Scott Reynolds
You cannot use a k10 client with a k8 cluster; the protocol changed. You CAN use a k8 client with a k10 cluster. On Thu, Oct 6, 2016 at 12:00 PM Craig Swift wrote: > We're doing some fairly intensive data transformations in the current > workers so it's not

Re: kafka stream to new topic based on message key

2016-10-06 Thread Gary Ogden
Thanks Guozhang. I've gotten an example to work using your tips. So, is there no other way in Streams to create a topic if "auto.create.topics.enable" is set to false? Maybe by creating a record in ZooKeeper for that topic? On 5 October 2016 at 17:20, Guozhang Wang wrote:
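For reference, topics can also be created programmatically against ZooKeeper; a sketch using the 0.10 AdminUtils API, where the connect string, topic name, and counts are examples:

    import kafka.admin.AdminUtils;
    import kafka.admin.RackAwareMode;
    import kafka.utils.ZkUtils;
    import java.util.Properties;

    public class CreateTopic {
        public static void main(String[] args) {
            // Args: ZK connect string, session timeout ms, connection timeout ms, ZK security flag.
            ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
            try {
                // 4 partitions, replication factor 1, no extra topic configs.
                AdminUtils.createTopic(zkUtils, "my-stream-topic", 4, 1,
                        new Properties(), RackAwareMode.Enforced$.MODULE$);
            } finally {
                zkUtils.close();
            }
        }
    }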

Re: Kafka 10 Consumer Reading from Kafka 8 Cluster?

2016-10-06 Thread Craig Swift
We're doing some fairly intensive data transformations in the current workers, so it's not as straightforward as just reading/producing to another topic. However, if you mean can we mirror the source topic to the Kafka 10 cluster and then update the worker to read/write to 10 - that could be an

Re: Kafka 10 Consumer Reading from Kafka 8 Cluster?

2016-10-06 Thread David Garcia
Any reason you can’t use mirror maker? https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330 -David On 10/6/16, 1:32 PM, "Craig Swift" wrote: Hello, We're in the process of upgrading several of our clusters to Kafka 10. I

Kafka 10 Consumer Reading from Kafka 8 Cluster?

2016-10-06 Thread Craig Swift
Hello, We're in the process of upgrading several of our clusters to Kafka 10. I was wondering if it's possible to use the Kafka 10 client code (old or new) to read from a source Kafka 8 cluster and then use the new 10 producer to write to a destination Kafka 10 cluster? I know there's a

Re: Kafka Streams dynamic partitioning

2016-10-06 Thread Michael Noll
> I think this should be 'pick the number of partitions that matches the max number > of possible keys in the stream to be partitioned'. > At least in my use case, in which I am trying to partition the stream by key > and make windowed aggregations, if there are fewer topic > partitions than possible
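For context on why the key-to-partition mapping matters here: the 0.10 DefaultPartitioner derives the partition from a murmur2 hash of the serialized key, so the same key lands in the same partition only while the partition count stays fixed. The relevant arithmetic, roughly:

    import org.apache.kafka.common.utils.Utils;

    public class PartitionMath {
        // How the 0.10 DefaultPartitioner maps a non-null key to a partition.
        static int partitionFor(byte[] keyBytes, int numPartitions) {
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }
    }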

Re: Snazzy new look to our website

2016-10-06 Thread Jonathan Bond
Hi, I'm having a problem with the new website. Whenever I try to follow a link to the Kafka javadoc, either from within the website or a google link - it always takes me to the top of the Kafka 0.10 documentation page. I can't figure out how to get to the javadoc. Thanks, Jonathan On Tue, Oct

Re: Handling out of order messages without KTables

2016-10-06 Thread Ali Akhtar
Thank you, the State Store seems promising. But is it distributed, or limited to the particular instance of my application? I.e. if there are 3 messages, setting product 1's price to $1, $3, and $5, and all 3 of them go to different instances of my application, will they be able to correctly

Re: Handling out of order messages without KTables

2016-10-06 Thread Matthias J. Sax
What do you mean by "message keys are random" -- do you effectively have no keys and want all messages to be processed as if they all have the same key? To access the record TS in general, you need to use the Processor API. The given ProcessorContext
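A sketch of reading the record timestamp via the Processor API, against the 0.10.x Processor interface; the forwarding logic is just illustrative:

    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;

    public class TimestampAwareProcessor implements Processor<String, String> {
        private ProcessorContext context;

        @Override
        public void init(ProcessorContext context) { this.context = context; }

        @Override
        public void process(String key, String value) {
            long ts = context.timestamp(); // TS of the record being processed
            context.forward(key, value + "@" + ts);
        }

        @Override
        public void punctuate(long timestamp) {} // no-op; required by the 0.10.x interface

        @Override
        public void close() {}
    }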

Handling out of order messages without KTables

2016-10-06 Thread Ali Akhtar
Heya, I have some Kafka producers, which are listening to webhook events, and for each webhook event, they post its payload to a Kafka topic. Each payload contains a timestamp from the webhook source. This timestamp is the source of truth about which events happened first, which happened last,

Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Ismael Juma
On Thu, Oct 6, 2016 at 2:51 PM, Avi Flax wrote: > > Does this mean that the next release (after 0.10.1.0, maybe ~Feb?) might > remove altogether the requirement that Streams apps be able to access > ZooKeeper directly? That's the plan. See the following PR for details:

Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Avi Flax
> On Oct 6, 2016, at 06:24, Ismael Juma wrote: > > It's worth mentioning that Streams is in the process of transitioning from > updating ZooKeeper directly to using the newly introduced create topics and > delete topics protocol requests. It was too late for 0.10.1.0, but

RE: difficulty to delete a topic because of its syntax

2016-10-06 Thread Hamza HACHANI
Yes, in fact, the topic in question was the name of a store. OK, I will do it for the matter of the JIRA. From: isma...@gmail.com on behalf of Ismael Juma Sent: Wednesday, October 5, 2016 22:24:53 To: users@kafka.apache.org Subject

Text error in documentation.html web page

2016-10-06 Thread Mazhar Shaikh
Visit: http://kafka.apache.org/documentation.html#quickstart and scroll down. Please find the image attached (inline image not preserved in the archive). Browsers: IE & Firefox

Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Ismael Juma
It's worth mentioning that Streams is in the process of transitioning from updating ZooKeeper directly to using the newly introduced create topics and delete topics protocol requests. It was too late for 0.10.1.0, but should land in trunk soonish. Ismael On Thu, Oct 6, 2016 at 11:15 AM, Yuto

Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Yuto KAWAMURA
I guess this topic was created by Kafka Streams. Kafka Streams has its own topic creation (ZooKeeper node creation) implementation and does not use core's AdminUtils to create internal-use topics such as XX-changelog:

Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Rajini Sivaram
Hamza, Can you raise a JIRA with details on how the topic was created by Kafka with an invalid name? Sounds like there might be a missing validation somewhere. Regards, Rajini On Thu, Oct 6, 2016 at 10:12 AM, Hamza HACHANI wrote: > Thanks Todd, > > > I've resolved it

RE: difficulty to delete a topic because of its syntax

2016-10-06 Thread Hamza HACHANI
Thanks Todd, I've resolved it by using what you told me. Thanks very much. But I think there is a problem with Kafka in that it saves topic and log names that contain a space, as I showed in the images. Have a good day, all. Hamza From:

RE: difficulty to delete a topic because of its syntax

2016-10-06 Thread Hamza HACHANI
Hi, Attached are the files showing what I'm talking about. Hamza From: Todd S Sent: Wednesday, October 5, 2016 07:25:48 To: users@kafka.apache.org Subject: Re: difficulty to delete a topic because of its syntax You *could* go into ZooKeeper and