Re: Mirror Maker - Message Format Issue?

2016-10-12 Thread Craig Swift
Hello, Just to close this issue out. The 0.8 producer going to the 0.10 cluster was the root issue. The mirror maker by default was unable to produce the message to the destination cluster. The workaround was to include a MirrorMakerMessageHandler that did nothing but repackage the message again. In
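The repackaging workaround described above can be sketched as follows. This is a minimal Python stand-in (the real MirrorMakerMessageHandler is a Java interface, and the record shape here is an assumption): the handler rebuilds each record field by field so the producer re-serializes it for the destination cluster, changing nothing else.

```python
# Sketch (stand-in types, not MirrorMaker's Java API): a pass-through
# handler that rebuilds each record so the producer re-serializes it in
# the destination cluster's message format -- i.e. a handler that does
# nothing but repackage the message.

def repackage_handler(record):
    """Return the record rebuilt field by field, changing nothing."""
    topic, key, value = record
    return [(topic, key, value)]  # handlers return a list of records

print(repackage_handler(("events", b"k1", b"v1")))
```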

Re: HL7 messages to Kafka consumer

2016-10-12 Thread Artem Ervits
The NiFi HL7 processor is built using the HAPI API, which supports z-segments: http://hl7api.sourceforge.net/xref/ca/uhn/hl7v2/examples/CustomModelClasses.html On Wed, Oct 12, 2016 at 10:10 PM, Martin Gainty wrote: > > > > > From: dbis...@gmail.com > > Date: Wed, 12 Oct 2016 20:42:04 -0400 > > Subject:

RE: HL7 messages to Kafka consumer

2016-10-12 Thread Martin Gainty
> From: dbis...@gmail.com > Date: Wed, 12 Oct 2016 20:42:04 -0400 > Subject: RE: HL7 messages to Kafka consumer > To: users@kafka.apache.org > > I did it with HAPI API and Kafka producer way back when and it worked well. > Times have changed, If you consider using Apache Nifi, besides native HL

RE: HL7 messages to Kafka consumer

2016-10-12 Thread Artem Ervits
I did it with the HAPI API and a Kafka producer way back when, and it worked well. Times have changed; if you consider using Apache NiFi, besides the native HL7 processor, you can push to Kafka by dragging a processor onto the canvas. The HL7 processor is also built on the HAPI API. Here's an example, but instead of Kafka i

Manually update consumer offset stored in Kafka

2016-10-12 Thread Yifan Ying
Hi, In old consumers, we use the following command line tool to manually update offsets stored in zk: *./kafka-run-class.sh kafka.tools.UpdateOffsetsInZK [latest | earliest] [consumer.properties file path] [topic]* But it doesn't work with offsets stored in Kafka. How can I update the Kafka offs
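For offsets stored in Kafka rather than ZooKeeper, one commonly suggested approach (a sketch, not an answer from this thread) is to join the group with a consumer, seek to the desired position, and commit. The helper below only builds the target offset map in plain Python; the commented lines show how it might be applied with a client such as kafka-python (group name, topic, and API usage are illustrative assumptions):

```python
# Sketch: build the offsets we want to commit for every partition of a
# topic. Actually committing them requires a live consumer in the same
# consumer group; the kafka-python calls in the comment are illustrative.

def build_offset_map(topic, partition_count, target_offset):
    """Map every partition of `topic` to the same target offset."""
    return {(topic, p): target_offset for p in range(partition_count)}

# Applying it with a real client might look like (hypothetical config):
#   consumer = KafkaConsumer(group_id="my-group", bootstrap_servers="...")
#   for (t, p), off in build_offset_map("my-topic", 4, 0).items():
#       consumer.commit({TopicPartition(t, p): OffsetAndMetadata(off, None)})

print(build_offset_map("my-topic", 2, 100))  # → {('my-topic', 0): 100, ('my-topic', 1): 100}
```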

RE: HL7 messages to Kafka consumer

2016-10-12 Thread Martin Gainty
I provisionally accomplished the task by embedding A01, A03 and A08 HL7 event types into SOAP 1.2 envelopes. I remember having difficulty transporting over a non-dedicated transport such as what Kafka implements. Producer embeds Fragment1 into SOAPEnvelope. Producer sends Fragment1-SOAPEnvelope of A01. Cons

Re: Understanding out of order message processing w/ Streaming

2016-10-12 Thread Ali Akhtar
Thanks Matthias. So, if I'm understanding this right, Kafka will not discard messages which arrive out of order. What it will do is show messages in the order in which they arrive. But if they arrive out of order, I have to detect / process that myself in the processor logic. Is that corr
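That per-record handling could look like the following sketch (plain Python, not the Streams API; the grace-period approach and all names are assumptions): buffer records in a min-heap keyed by event time, and only emit those older than the highest timestamp seen minus a grace period, so late arrivals within the grace window are re-ordered.

```python
# Sketch (assumption: hand-rolled processor logic, not Kafka Streams):
# reorder out-of-order records by buffering them and emitting only those
# that are past a grace period, in timestamp order.
import heapq

class ReorderBuffer:
    def __init__(self, grace_ms):
        self.grace_ms = grace_ms
        self.heap = []         # min-heap of (timestamp, value)
        self.max_seen_ts = 0   # highest event-time observed so far

    def add(self, ts, value):
        heapq.heappush(self.heap, (ts, value))
        self.max_seen_ts = max(self.max_seen_ts, ts)

    def drain_ready(self):
        """Emit, in timestamp order, records older than max_seen_ts - grace."""
        out = []
        while self.heap and self.heap[0][0] <= self.max_seen_ts - self.grace_ms:
            out.append(heapq.heappop(self.heap))
        return out

buf = ReorderBuffer(grace_ms=10)
buf.add(100, "a")
buf.add(90, "late")   # arrived out of order
buf.add(120, "b")
print(buf.drain_ready())  # → [(90, 'late'), (100, 'a')]
```

Records later than the grace period would still slip through, so the window has to be tuned to the expected lateness.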

Re: [kafka-clients] [VOTE] 0.10.1.0 RC2

2016-10-12 Thread Dana Powers
+1; all kafka-python integration tests pass. -Dana On Wed, Oct 12, 2016 at 10:41 AM, Jason Gustafson wrote: > Hello Kafka users, developers and client-developers, > > One more RC for 0.10.1.0. I think we're getting close! > > Release plan: > https://cwiki.apache.org/confluence/display/KAFKA/Rel

Re: Understanding out of order message processing w/ Streaming

2016-10-12 Thread Matthias J. Sax
Last question first: A KTable is basically an infinite window over the whole stream providing a single result (that gets updated when new data arrives). If you use windows, you cut the overall stream into finite subsets and get a result per window. Thu
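The difference can be illustrated with a plain-Python stand-in (not Streams code; counting is just an example aggregation): a KTable-style aggregate keeps one running result per key over the whole stream, while a windowed aggregate keeps one result per (key, window) bucket.

```python
# Sketch: KTable-like aggregation (one result per key over the whole
# stream, updated as new data arrives) vs. windowed aggregation (one
# result per key per time bucket).

def ktable_count(records):
    counts = {}
    for key, _ts in records:
        counts[key] = counts.get(key, 0) + 1      # single ever-updating result
    return counts

def windowed_count(records, window_ms):
    counts = {}
    for key, ts in records:
        window_start = ts - ts % window_ms        # cut the stream into subsets
        bucket = (key, window_start)
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

records = [("user1", 0), ("user1", 500), ("user1", 1500)]
print(ktable_count(records))           # → {'user1': 3}
print(windowed_count(records, 1000))   # → {('user1', 0): 2, ('user1', 1000): 1}
```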

How to specify starting position for consumers when using dynamic group re-balancing?

2016-10-12 Thread Marina
Hi, Is it possible to start 0.9 or 0.10 consumers from a specified offset, while still using consumer groups with dynamic re-balancing? Here is what I have found so far: Case 1: If we use the consumer.assign(…) method to manually assign partitions to consumers - we can do all below actions: consu
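With automatic assignment (consumer groups with re-balancing), the usual pattern is to seek inside the on-assignment callback, since the partitions are only known after the rebalance. The sketch below uses an in-memory stand-in consumer (an assumption; the shape loosely mirrors a rebalance-listener callback such as kafka-python's ConsumerRebalanceListener, but no real client is involved):

```python
# Sketch: with dynamic group rebalancing you don't know your partitions
# up front, so seeking to a starting offset happens in the callback the
# client invokes after each rebalance. StubConsumer is an in-memory
# stand-in for a real consumer.

class StubConsumer:
    def __init__(self):
        self.positions = {}

    def seek(self, partition, offset):
        self.positions[partition] = offset

def on_partitions_assigned(consumer, partitions, start_offsets):
    """Run after every rebalance, once the assignment is known."""
    for p in partitions:
        if p in start_offsets:
            consumer.seek(p, start_offsets[p])

consumer = StubConsumer()
on_partitions_assigned(consumer, ["topic-0", "topic-1"], {"topic-0": 42})
print(consumer.positions)  # → {'topic-0': 42}
```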

[VOTE] 0.10.1.0 RC2

2016-10-12 Thread Jason Gustafson
Hello Kafka users, developers and client-developers, One more RC for 0.10.1.0. I think we're getting close! Release plan: https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1. Release notes for the 0.10.1.0 release: http://home.apache.org/~jgus/kafka-0.10.1.0-rc2/RELEASE_NOTES.

Re: Process to enable log compaction on a cluster

2016-10-12 Thread Mario Ricci
Sathya, Did you ever figure out what to do here? On Mon, Jul 4, 2016 at 12:19 AM Sathyakumar Seshachalam < sathyakumar_seshacha...@trimble.com> wrote: > My another followup question is that Am I right in assuming that per topic > retention minutes or clean up policy, they all have any effect onl

delete topic causing spikes in fetch/metadata requests

2016-10-12 Thread sunil kalva
We are using Kafka 0.8.2.2 (client and server). Whenever we delete a topic we see a lot of errors in broker logs like the below, and there is also a spike in fetch/metadata requests. Can I correlate these errors with the topic delete, or is it a known issue? Since there is a spike in metadata requests and fetch

Re: In Kafka Streaming, Serdes should use Optionals

2016-10-12 Thread Guozhang Wang
Haha, I feel the same pain with you man. On Tue, Oct 11, 2016 at 8:59 PM, Ali Akhtar wrote: > Thanks. That filter() method is a good solution. But whenever I look at it, > I feel an empty spot in my heart which can only be filled by: > filter(Optional::isPresent) > > On Wed, Oct 12, 2016 at 12:1

Re: [VOTE] 0.10.1.0 RC1

2016-10-12 Thread Jason Gustafson
FYI: I'm cutting another RC this morning due to https://issues.apache.org/jira/browse/KAFKA-4290. Hopefully this is the last! -Jason On Mon, Oct 10, 2016 at 8:20 PM, Jason Gustafson wrote: > The documentation is mostly fixed now: http://kafka.apache.org/0101/documentation.html. Thanks to Der

Re: JVM crash when closing persistent store (rocksDB)

2016-10-12 Thread Eno Thereska
Depending on how voting goes, the tentative date is Oct 17th: https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.1 Thanks Eno > On 12 Oct 2016, at 16:00, Damian Guy wrote: > > 0.10.1 will release will hopef

Re: JVM crash when closing persistent store (rocksDB)

2016-10-12 Thread Damian Guy
The 0.10.1 release will hopefully be within the next couple of weeks. On Wed, 12 Oct 2016 at 15:52 Pierre Coquentin wrote: > Ok it works against the trunk and the branch 0.10.1, both have a dependency > to rockdb 4.9.0 vs 4.4.1 for kafka 0.10.0. > Do you know when 0.10.1 will be released ? > >

Re: JVM crash when closing persistent store (rocksDB)

2016-10-12 Thread Pierre Coquentin
Ok, it works against trunk and the 0.10.1 branch; both have a dependency on RocksDB 4.9.0 vs 4.4.1 for Kafka 0.10.0. Do you know when 0.10.1 will be released? On Tue, Oct 11, 2016 at 9:39 PM, Pierre Coquentin < pierre.coquen...@gmail.com> wrote: > Hi, > > I already tried to store RocksDB file

Kafka-connect cannot find configuration in config.storage.topic

2016-10-12 Thread Kristoffer Sjögren
Hi, We have noticed that kafka-connect cannot find its connector configuration after a few weeks have passed. The web UI reports that no connectors are available even though the configuration records are still available in config.storage.topic. It's possible to start the connectors again by curling the c

HL7 messages to Kafka consumer

2016-10-12 Thread Samuel Glover
Has anyone done this? I'm working with a medical hospital company that wants to ingest HL7 messages into Kafka cluster topics. Any guidance appreciated. -- *Sam Glover* Solutions Architect *M* 512.550.5363 samglo...@cloudera.com 515 Congress Ave, Suite 1212 | Austin, TX | 78701 Celebrating a

Re: Force producer topic metadata refresh.

2016-10-12 Thread Ismael Juma
Hi Alexandru, I think your issue will be fixed by KAFKA-4254. There's a PR available and should be merged shortly. Can you please verify? Thanks, Ismael On Wed, Oct 12, 2016 at 11:00 AM, Alexandru Ionita < alexandru.ion...@gmail.com> wrote: > OK. then my question is: why is not the producer try

Re: Force producer topic metadata refresh.

2016-10-12 Thread Alexandru Ionita
OK, then my question is: why doesn't the producer try to recover from this error by updating its topic metadata right away, instead of waiting for "metadata.max.age.ms" to expire? 2016-10-12 11:43 GMT+02:00 Manikumar : > we have similar setting "metadata.max.age.ms" in new producer api. > It

Re: Force producer topic metadata refresh.

2016-10-12 Thread Manikumar
We have a similar setting, "metadata.max.age.ms", in the new producer API. Its default value is 300 seconds. On Wed, Oct 12, 2016 at 3:04 PM, Alexandru Ionita < alexandru.ion...@gmail.com> wrote: > Hello kafka users!! > > I'm trying implement/use a mechanism to make a Kafka producer imperatively > update its
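For reference, the setting in question is an ordinary producer config entry; a hedged sketch follows (the broker address and the 30-second value are purely illustrative):

```python
# Config sketch: lowering metadata.max.age.ms so the producer refreshes
# stale topic metadata sooner (the default is 300000 ms, i.e. 300 seconds).
producer_config = {
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "metadata.max.age.ms": 30_000,          # refresh every 30 s instead of 300 s
}
print(producer_config["metadata.max.age.ms"])  # → 30000
```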

Force producer topic metadata refresh.

2016-10-12 Thread Alexandru Ionita
Hello kafka users!! I'm trying to implement/use a mechanism to make a Kafka producer imperatively update its topic metadata for a particular topic. Here is the use case: we are adding partitions on topics programmatically because we want to very strictly control how messages are published to partic

Re: KafkaStream Merging two topics is not working fro custom datatypes

2016-10-12 Thread Michael Noll
Happy to hear it works now for you, Ratha. -Michael On Wed, Oct 12, 2016 at 6:06 AM, Ratha v wrote: > Sorry my fault, In the kafkaConsumer I messed with 'value.deserializer' > property.. > Now things are working fine.. > Thanks a lot. > > On 12 October 2016 at 14:10, Ratha v wrote: > > > HI M