Joe / Oleg,

I did a bit of digging and found that the .parcel format is just a gzipped
tarball.  The Kafka jar included within is kafka_2.10-0.8.2.0-kafka-1.3.1.jar.
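
In case anyone wants to poke at a parcel themselves, here is a rough, untested
sketch of how to list the bundled jars.  It assumes Apache Commons Compress is
on the classpath, and the parcel filename below is just a placeholder:

import java.io.BufferedInputStream;
import java.io.FileInputStream;

import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

public class ListParcelJars {
    public static void main(String[] args) throws Exception {
        // Placeholder filename -- point this at whichever .parcel you downloaded.
        String parcel = args.length > 0 ? args[0] : "KAFKA-0.8.2.0-1.kafka1.3.1.p0.9.parcel";
        try (TarArchiveInputStream tar = new TarArchiveInputStream(
                new GzipCompressorInputStream(new BufferedInputStream(new FileInputStream(parcel))))) {
            TarArchiveEntry entry;
            while ((entry = tar.getNextTarEntry()) != null) {
                // Only print the jars so the bundled Kafka build is easy to spot.
                if (entry.getName().endsWith(".jar")) {
                    System.out.println(entry.getName());
                }
            }
        }
    }
}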

Michael



On Tue, Mar 15, 2016 at 1:52 PM, Joe Witt <joe.w...@gmail.com> wrote:

> Oleg,
>
> We as a community can support folks however we want, provided we're
> doing so within the Apache way and ASF policies.
>
> We can totally help here.  Besides, we want to make sure this thing is
> compatible with all the systems it should be, and this seems totally fair
> game.
>
> Thanks
> Joe
>
>
> On Tue, Mar 15, 2016 at 1:48 PM, Oleg Zhurakousky
> <ozhurakou...@hortonworks.com> wrote:
> > Michael
> >
> > Unfortunately I am relatively new to ASF culture, so I am not sure whether
> > it would be appropriate or feasible for the ASF distribution of NiFi to
> > support individual forks of any product/project (ASF gurus, feel free to
> > chime in). Kafka is a particularly unfortunate case, since there are many
> > forks out there with varying levels of compatibility with the corresponding
> > ASF version of Kafka. That said, the best we can do for Kafka is what’s
> > described in https://issues.apache.org/jira/browse/NIFI-1629, which will go
> > into the upcoming 0.6 release. Hopefully that will bring some relief.
> >
> > Cheers
> > Oleg
> >
> > On Mar 15, 2016, at 1:21 PM, Michael Dyer <michael.d...@trapezoid.com>
> > wrote:
> >
> > Oleg,
> >
> > It's part of the Cloudera distro.  Not sure of the lineage beyond that.
> > Here are a couple of links.
> >
> >
> > https://community.cloudera.com/t5/Data-Ingestion-Integration/New-Kafka-0-8-2-0-1-kafka1-3-1-p0-9-Parcel-What-are-the-changes/td-p/30506
> > http://archive.cloudera.com/kafka/parcels/1.3.1/
> >
> > Michael
> >
> >
> > On Tue, Mar 15, 2016 at 12:01 PM, Oleg Zhurakousky
> > <ozhurakou...@hortonworks.com> wrote:
> >>
> >> Michael
> >>
> >> What is KAFKA-0.8.2.0-1.kafka1.3.1.p0.9? I mean, where can I get that
> >> build? I guess, based on the previous email, we’ve tested our code with 3
> >> versions of the ASF distribution of Kafka, and the version string above
> >> tells me that it may be some kind of fork.
> >>
> >> Also, we are considering downgrading the Kafka dependencies back to 0.8
> >> and, as of 0.7, providing a new set of Kafka processors that utilize the
> >> Kafka 0.9 new producer/consumer API.
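> >>
> >> For reference, the 0.9-style "new" consumer API those processors would use
> >> looks roughly like this -- a minimal, untested sketch, not the actual
> >> processor code; the broker list, group id, and topic are placeholders:
> >>
> >> import java.util.Collections;
> >> import java.util.Properties;
> >>
> >> import org.apache.kafka.clients.consumer.ConsumerRecord;
> >> import org.apache.kafka.clients.consumer.ConsumerRecords;
> >> import org.apache.kafka.clients.consumer.KafkaConsumer;
> >>
> >> public class NewConsumerSketch {
> >>     public static void main(String[] args) {
> >>         Properties props = new Properties();
> >>         props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
> >>         props.put("group.id", "nifi-sketch");                        // placeholder group id
> >>         props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> >>         props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
> >>
> >>         try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
> >>             consumer.subscribe(Collections.singletonList("some-topic")); // placeholder topic
> >>             while (true) {
> >>                 // poll() speaks the new wire protocol directly to the brokers;
> >>                 // no Zookeeper connection is needed on the consumer side.
> >>                 ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
> >>                 for (ConsumerRecord<byte[], byte[]> record : records) {
> >>                     System.out.println(record.topic() + "/" + record.partition() + " @ " + record.offset());
> >>                 }
> >>             }
> >>         }
> >>     }
> >> }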
> >>
> >> Thanks
> >> Oleg
> >>
> >> On Mar 15, 2016, at 11:46 AM, Michael Dyer <michael.d...@trapezoid.com>
> >> wrote:
> >>
> >> Joe,
> >>
> >> I'm seeing a similar issue moving from 0.3.0 to 0.5.1 with
> >> KAFKA-0.8.2.0-1.kafka1.3.1.p0.9.
> >>
> >> I can see the tasks/time counter increment on the processor, but no flow
> >> data ever leaves the processor.  There are no errors shown in the bulletin
> >> board.  The app log output is shown below (repeating).
> >>
> >> The trick of renaming the 0.4.1 nar to 0.5.1 (and restarting NiFi) works,
> >> except that the 'batch size' value does not seem to be honored.  I have my
> >> batch size set to 10000, but I'm seeing files written continually (every
> >> few seconds) with much smaller sizes.  I suspect this has to do with the
> >> `auto.offset.reset` value, which defaults to `largest`.  From what I have
> >> read, `smallest` causes the client to start at the beginning, which sounds
> >> like I would end up retrieving duplicates.
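> >>
> >> For what it's worth, on the old 0.8-style high-level consumer the knobs in
> >> play look roughly like this -- an illustrative sketch only, not the
> >> GetKafka internals; the ZK quorum, group id, and timeout are placeholders:
> >>
> >> import java.util.Properties;
> >>
> >> import kafka.consumer.Consumer;
> >> import kafka.consumer.ConsumerConfig;
> >> import kafka.javaapi.consumer.ConsumerConnector;
> >>
> >> public class OldConsumerConfigSketch {
> >>     public static void main(String[] args) {
> >>         Properties props = new Properties();
> >>         props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181"); // placeholder quorum
> >>         props.put("group.id", "nifi-getkafka");                       // placeholder group id
> >>         // Only consulted when the group has no valid committed offset:
> >>         // "largest" (the default) starts from the end of the partition,
> >>         // "smallest" starts from the beginning -- hence the duplicate concern.
> >>         props.put("auto.offset.reset", "largest");
> >>         props.put("zookeeper.connection.timeout.ms", "30000");
> >>
> >>         ConsumerConnector connector =
> >>                 Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
> >>         connector.shutdown();
> >>     }
> >> }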
> >>
> >> Renaming the 0.3.0 nar to 0.5.1 (and restarting NiFi) restores the
> >> original behavior.
> >>
> >> ...ArrayBuffer([[netflow5,0], initOffset 297426 to broker BrokerEndPoint(176,n2.foo.bar.com,9092)] )
> >> 2016-03-15 07:45:17,237 WARN [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@11bd00f6. Possible cause: java.lang.IllegalArgumentException
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Verifying properties
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property client.id is overridden to NiFi-b6c67ee3-aa9e-419d-8a57-84ab5e76c017
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property metadata.broker.list is overridden to n3.foo.bar.com:9092,n2.foo.bar.com:9092,n4.foo.bar.com:9092
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.utils.VerifiableProperties Property request.timeout.ms is overridden to 30000
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.client.ClientUtils$ Fetching metadata from broker BrokerEndPoint(196,n4.foo.bar.com,9092) with correlation id 14 for 1 topic(s) Set(netflow5)
> >> 2016-03-15 07:45:17,443 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.producer.SyncProducer Connected to n4.foo.bar.com:9092 for producing
> >> 2016-03-15 07:45:17,444 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.producer.SyncProducer Disconnecting from n4.foo.bar.com:9092
> >> 2016-03-15 07:45:17,444 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Added fetcher for partitions ArrayBuffer([[netflow5,0], initOffset 297426 to broker BrokerEndPoint(176,n2.foo.bar.com,9092)] )
> >> 2016-03-15 07:45:17,449 WARN [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@3e69ef73. Possible cause: java.lang.IllegalArgumentException
> >> 2016-03-15 07:45:17,626 INFO [NiFi Web Server-259] o.a.n.c.s.TimerDrivenSchedulingAgent Stopped scheduling GetKafka[id=4943a24e-af5c-4392-bc45-7008f30674bb] to run
> >> 2016-03-15 07:45:17,626 INFO [Timer-Driven Process Thread-3] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], ZKConsumerConnector shutting down
> >> 2016-03-15 07:45:17,632 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Stopping leader finder thread
> >> 2016-03-15 07:45:17,633 INFO [Timer-Driven Process Thread-3] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Shutting down
> >> 2016-03-15 07:45:17,634 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Stopped
> >> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] k.c.ConsumerFetcherManager$LeaderFinderThread [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-leader-finder-thread], Shutdown completed
> >> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] Stopping all fetchers
> >> 2016-03-15 07:45:17,634 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Shutting down
> >> 2016-03-15 07:45:17,634 INFO [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Stopped
> >> 2016-03-15 07:45:17,635 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0-0-176], Shutdown completed
> >> 2016-03-15 07:45:17,635 INFO [Timer-Driven Process Thread-3] kafka.consumer.ConsumerFetcherManager [ConsumerFetcherManager-1458053114395] All connections stopped
> >> 2016-03-15 07:45:17,635 INFO [ZkClient-EventThread-302-192.168.1.1:2181] org.I0Itec.zkclient.ZkEventThread Terminate ZkClient event thread.
> >> 2016-03-15 07:45:17,638 INFO [Timer-Driven Process Thread-3] org.apache.zookeeper.ZooKeeper Session: 0x1535e2aa53b3f61 closed
> >> 2016-03-15 07:45:17,638 INFO [Timer-Driven Process Thread-3] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], ZKConsumerConnector shutdown completed in 11 ms
> >> 2016-03-15 07:45:17,639 INFO [Timer-Driven Process Thread-4-EventThread] org.apache.zookeeper.ClientCnxn EventThread shut down
> >> 2016-03-15 07:45:17,745 INFO [Flow Service Tasks Thread-2] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@45d22bd5 // Another save pending = false
> >> 2016-03-15 07:45:18,414 INFO [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0_watcher_executor] k.consumer.ZookeeperConsumerConnector [b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0], stopping watcher executor thread for consumer b6c67ee3-aa9e-419d-8a57-84ab5e76c017_nifi-1458053114390-d60f3cc0
> >>
> >> Hope this helps...
> >>
> >> Michael
> >>
> >> On Mon, Feb 22, 2016 at 4:12 PM, Joe Witt <joe.w...@gmail.com> wrote:
> >>>
> >>> Sorry, to clarify: it is working against all three of these at once:
> >>> - Kafka 0.8.1.1
> >>> - Kafka 0.8.2.1
> >>> - Kafka 0.9.0.1
> >>>
> >>> Thanks
> >>> Joe
> >>>
> >>> On Mon, Feb 22, 2016 at 4:12 PM, Joe Witt <joe.w...@gmail.com> wrote:
> >>> > All, just as a point of reference, we now have a live system running
> >>> > on NiFi 0.5.0 and feeding three versions of Kafka at once:
> >>> > - 0.8.1
> >>> > - 0.8.2.0
> >>> > - 0.9.0.1
> >>> >
> >>> > So perhaps there are some particular configurations that cause issues.
> >>> > Can you share more details about your configuration of Kafka/NiFi and
> >>> > what sort of security is enabled?
> >>> >
> >>> > Thanks
> >>> > Joe
> >>> >
> >>> > On Mon, Feb 22, 2016 at 1:01 PM, Kyle Burke
> >>> > <kyle.bu...@ignitionone.com> wrote:
> >>> >> I replaced my 0.5.0 Kafka nar with the 0.4.1 Kafka nar and it fixed my
> >>> >> Kafka issue. I renamed the 0.4.1 nar to be 0.5.0.nar, restarted NiFi,
> >>> >> and my Kafka processor started reading my 0.8.2.1 stream. Not elegant,
> >>> >> but glad it worked.
> >>> >>
> >>> >>
> >>> >> Respectfully,
> >>> >>
> >>> >> Kyle Burke | Data Science Engineer
> >>> >> IgnitionOne - Marketing Technology. Simplified.
> >>> >> Office: 1545 Peachtree St NE, Suite 500 | Atlanta, GA | 30309
> >>> >> Direct: 404.961.3918
> >>> >>
> >>> >>
> >>> >> From: Joe Witt
> >>> >> Reply-To: "users@nifi.apache.org"
> >>> >> Date: Sunday, February 21, 2016 at 5:23 PM
> >>> >> To: "users@nifi.apache.org"
> >>> >> Subject: Re: Nifi 0.50 and GetKafka Issues
> >>> >>
> >>> >> Yeah the intent is to support 0.8 and 0.9.  Will figure something out.
> >>> >>
> >>> >> Thanks
> >>> >> Joe
> >>> >>
> >>> >> On Feb 21, 2016 4:47 PM, "West, Joshua" <josh_w...@bose.com> wrote:
> >>> >>>
> >>> >>> Hi Oleg,
> >>> >>>
> >>> >>> Hmm -- from what I can tell, this isn't a Zookeeper communication issue.
> >>> >>> NiFi is able to connect to the Kafka brokers' Zookeeper cluster and
> >>> >>> retrieve the list of Kafka brokers to connect to.  From the logs, it
> >>> >>> seems to be a problem when attempting to consume from Kafka itself.
> >>> >>>
> >>> >>> I'm guessing that the Kafka 0.9.0 client libraries are just not
> >>> >>> compatible with Kafka 0.8.2.1, so in order to use NiFi 0.5.0 with
> >>> >>> Kafka, the Kafka version must be >= 0.9.0.
> >>> >>>
> >>> >>> Any chance NiFi could add backwards-compatible support for Kafka
> >>> >>> 0.8.2.1 too?  Let you choose which client library version to use when
> >>> >>> setting up the GetKafka processor?
> >>> >>>
> >>> >>> --
> >>> >>> Josh West <josh_w...@bose.com>
> >>> >>> Bose Corporation
> >>> >>>
> >>> >>>
> >>> >>> On Sun, 2016-02-21 at 15:02 +0000, Oleg Zhurakousky wrote:
> >>> >>>
> >>> >>> Josh
> >>> >>>
> >>> >>> Also, keep in mind that there are incompatible property names in Kafka
> >>> >>> between the 0.7 and 0.8 releases. One of the changes that went in was
> >>> >>> replacing “zk.connectiontimeout.ms” with
> >>> >>> “zookeeper.connection.timeout.ms”.
> >>> >>> Not sure if it’s related, but since 0.4.1 was relying on the old
> >>> >>> property name, its value was completely ignored by the 0.8 client
> >>> >>> libraries (you could actually see a WARN message to that effect),
> >>> >>> whereas now it is not ignored, so take a look and see whether tinkering
> >>> >>> with its value changes anything.
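> >>> >>>
> >>> >>> To make that concrete, the rename looks like this (an illustrative
> >>> >>> fragment only; the timeout value is a placeholder):
> >>> >>>
> >>> >>> java.util.Properties props = new java.util.Properties();
> >>> >>> // 0.7-era name -- the 0.8 client libraries ignore it and only log a WARN:
> >>> >>> // props.put("zk.connectiontimeout.ms", "6000");
> >>> >>> // 0.8+ name, which the upgraded processors now actually honor:
> >>> >>> props.put("zookeeper.connection.timeout.ms", "6000");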
> >>> >>>
> >>> >>> Cheers
> >>> >>> Oleg
> >>> >>>
> >>> >>> On Feb 20, 2016, at 6:47 PM, Oleg Zhurakousky
> >>> >>> <ozhurakou...@hortonworks.com> wrote:
> >>> >>>
> >>> >>> Josh
> >>> >>>
> >>> >>> The only change that went in that is relevant to your issue is that
> >>> >>> we’ve upgraded the client libraries to Kafka 0.9, and between 0.8 and
> >>> >>> 0.9 Kafka introduced wire protocol changes that break compatibility.
> >>> >>> I am still digging, so stay tuned.
> >>> >>>
> >>> >>> Oleg
> >>> >>>
> >>> >>> On Feb 20, 2016, at 4:10 PM, West, Joshua <josh_w...@bose.com> wrote:
> >>> >>>
> >>> >>> Hi Oleg and Joe,
> >>> >>>
> >>> >>> Kafka 0.8.2.1
> >>> >>>
> >>> >>> Attached is the app log with hostnames scrubbed.
> >>> >>>
> >>> >>> Thanks for your help.  Much appreciated.
> >>> >>>
> >>> >>> --
> >>> >>> Josh West <josh_w...@bose.com>
> >>> >>> Bose Corporation
> >>> >>>
> >>> >>>
> >>> >>> On Sat, 2016-02-20 at 15:46 -0500, Joe Witt wrote:
> >>> >>>
> >>> >>> And also what version of Kafka are you using?
> >>> >>>
> >>> >>> On Feb 20, 2016 3:37 PM, "Oleg Zhurakousky"
> >>> >>> <ozhurakou...@hortonworks.com>
> >>> >>> wrote:
> >>> >>>
> >>> >>> Josh
> >>> >>>
> >>> >>> Any chance to attache the app-log or relevant stack trace?
> >>> >>>
> >>> >>> Thanks
> >>> >>> Oleg
> >>> >>>
> >>> >>> On Feb 20, 2016, at 3:30 PM, West, Joshua <josh_w...@bose.com> wrote:
> >>> >>>
> >>> >>> Hi folks,
> >>> >>>
> >>> >>> I've upgraded from NiFi 0.4.1 to 0.5.0 and I am no longer able to use
> >>> >>> the GetKafka processor.  I'm seeing errors like so:
> >>> >>>
> >>> >>> 2016-02-20 20:10:14,953 WARN [ConsumerFetcherThread-NiFi-sldjflkdsjflksjf_**SCRUBBED**-1455999008728-5b8c7108-0-0] kafka.consumer.ConsumerFetcherThread [ConsumerFetcherThread-NiFi-sldjflkdsjflksjf_**SCRUBBED**-1455999008728-5b8c7108-0-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@7b49a642. Possible cause: java.lang.IllegalArgumentException
> >>> >>>
> >>> >>> ^ Note  the hostname of the server has been scrubbed.
> >>> >>>
> >>> >>> My configuration is pretty generic, except that with Zookeeper we use
> >>> >>> a different root path, so our Zookeeper connect string looks like so:
> >>> >>>
> >>> >>> zookeeper-node1:2181,zookeeper-node2:2181,zookeeper-node3:2181/kafka
> >>> >>>
> >>> >>> Is anybody else experiencing issues?
> >>> >>>
> >>> >>> Thanks.
> >>> >>>
> >>> >>> --
> >>> >>> Josh West <josh_w...@bose.com>
> >>> >>>
> >>> >>> Cloud Architect
> >>> >>> Bose Corporation
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>> <nifi-app.log.kafkaissues.bz2>
> >>> >>>
> >>> >>>
> >>> >>>
> >>> >>
> >>
> >>
> >>
> >
> >
>
