Sumo,

Have you tried the workaround? Did it work?

Cheers

On Thu, Oct 27, 2016 at 3:41 PM, Sumanth <xmlk...@gmail.com> wrote:

> I had the same issue with a secure MapR cluster + NiFi cluster with
> embedded zk + PutHDFS setup.
> I switched back to a non-clustered NiFi to avoid the conflict between the
> NiFi-enabled zk and MapR, and am waiting for a stable solution before
> moving back to a NiFi cluster.
> Thanks
> Sumo
>
>
>
>
> Sent from my iPad
>
> > On Oct 26, 2016, at 7:21 AM, Andre <andre-li...@fucs.org> wrote:
> >
> > Bryan,
> >
> > My apologies as the original email wasn't explicit about this:
> >
> > Your assumption is correct: my flow contains a processor (PutHDFS) with a
> > core-site.xml configured. The file contains the property you refer to (as
> > this is a cleaner way to force NiFi to connect to the secure MapR
> > cluster).
> >
> > Funnily enough, when used without zk, the processor works fine. In the
> > same way, zk works if correctly configured.
> >
> > However, in order for both to work at the same time, I had to use the
> > JAAS workaround.
> >
> >
> > As a side note, in case you wonder: MapR's JAAS contains both the Server
> > and Client stanzas required to run a secure zk. However, they are
> > designed to use MapR's security mechanism and their packaged version of
> > zookeeper. As a consequence, their stanzas require jars to be added to
> > the classpath and all sorts of other things that I preferred not to
> > introduce (since I am using the zk embedded within NiFi).
> >
> > Had that not been the case, I could point arg.15 to MapR's default JAAS
> > as described here: http://doc.mapr.com/display/MapR/mapr.login.conf and
> > here: http://doc.mapr.com/pages/viewpage.action?pageId=32506648
> >
> >
> > Cheers
> >
> >
> >
> >> On Thu, Oct 27, 2016 at 12:51 AM, Bryan Bende <bbe...@gmail.com> wrote:
> >>
> >> Meant to say.... the config instance somehow got
> >> "hadoop.security.authentication"
> >> set to "kerberos"
> >>
> >>> On Wed, Oct 26, 2016 at 9:50 AM, Bryan Bende <bbe...@gmail.com> wrote:
> >>>
> >>> Andre,
> >>>
> >>> This definitely seems weird that somehow using embedded ZooKeeper is
> >>> causing this.
> >>>
> >>> One thing I can say, though, is that in order to get into the code in
> >>> your stacktrace, it had to pass through
> >>> SecurityUtil.isSecurityEnabled(config), which does the following:
> >>>
> >>> public static boolean isSecurityEnabled(final Configuration config) {
> >>>     Validate.notNull(config);
> >>>     return "kerberos".equalsIgnoreCase(
> >>>             config.get("hadoop.security.authentication"));
> >>> }
> >>>
> >>> The Configuration instance passed in is created using the default
> >>> constructor (Configuration config = new Configuration();) and then any
> >>> files/paths entered into the processor's resource property are added
> >>> to the config.
> >>>
> >>> So in order for isSecurityEnabled to return true, it means the config
> >>> instance somehow got "hadoop.security.authentication" set to true,
> >>> which usually only happens if you put a core-site.xml on the classpath
> >>> with that value set.
> >>>
> >>> Is it possible some JAR from the MapR dependencies has a core-site.xml
> >>> embedded in it?
> >>>
> >>> -Bryan
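
A quick way to answer Bryan's question is to scan the jars on NiFi's classpath for an embedded core-site.xml. A minimal sketch, assuming unzip is available; the directory passed to scan_jars is an assumption and should point at NiFi's lib/ directory or the MapR client jars on your install:

```shell
# Sketch: report any jar in a directory that bundles a core-site.xml.
scan_jars() {
  for jar in "$1"/*.jar; do
    [ -e "$jar" ] || continue
    # unzip -l lists the archive contents without extracting
    if unzip -l "$jar" 2>/dev/null | grep -q 'core-site\.xml'; then
      echo "core-site.xml found in: $jar"
    fi
  done
}

# Hypothetical path -- adjust to your NiFi/MapR install.
scan_jars /opt/nifi-1.0.0/lib
```

Any jar this flags would silently feed its bundled core-site.xml into the default-constructed Configuration that Bryan describes above.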
> >>>
> >>>> On Wed, Oct 26, 2016 at 6:09 AM, Andre <andre-li...@fucs.org> wrote:
> >>>>
> >>>> Hi there,
> >>>>
> >>>> I've noticed some odd behavior when using embedded Zookeeper on a
> >>>> NiFi cluster with MapR-compatible processors:
> >>>>
> >>>> I noticed that every time I enable embedded zookeeper, NiFi's HDFS
> >>>> processors (e.g. PutHDFS) start complaining about Kerberos identities:
> >>>>
> >>>> 2016-10-26 20:07:22,376 ERROR [StandardProcessScheduler Thread-2]
> >>>> o.apache.nifi.processors.hadoop.PutHDFS
> >>>> java.io.IOException: Login failure for principal@REALM-NAME-GOES-HERE
> >>>> from keytab /path/to/keytab_file/nifi.keytab
> >>>>         at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytabAndReturnUGI(UserGroupInformation.java:1084) ~[hadoop-common-2.7.0-mapr-1602.jar:na]
> >>>>         at org.apache.nifi.hadoop.SecurityUtil.loginKerberos(SecurityUtil.java:52) ~[nifi-hadoop-utils-1.0.0.jar:1.0.0]
> >>>>         at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.resetHDFSResources(AbstractHadoopProcessor.java:285) ~[nifi-hdfs-processors-1.0.0.jar:1.0.0]
> >>>>         at org.apache.nifi.processors.hadoop.AbstractHadoopProcessor.abstractOnScheduled(AbstractHadoopProcessor.java:213) ~[nifi-hdfs-processors-1.0.0.jar:1.0.0]
> >>>>         at org.apache.nifi.processors.hadoop.PutHDFS.onScheduled(PutHDFS.java:181) [nifi-hdfs-processors-1.0.0.jar:1.0.0]
> >>>>
> >>>> So far so good; these errors are quite familiar to people using NiFi
> >>>> against secure MapR clusters, and are caused by issues around the
> >>>> custom JAAS settings required by Java applications relying on the
> >>>> MapR client to work.
> >>>>
> >>>> The normal workaround for this is pointing NiFi at the JAAS settings
> >>>> via bootstrap.conf [1]:
> >>>>
> >>>> $ grep jaas conf/bootstrap.conf
> >>>> java.arg.15=-Djava.security.auth.login.config=./conf/nifi-jaas.conf
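
One way to confirm the workaround actually reached the JVM is to check the running NiFi process's command line for the system property. A sketch, assuming a Linux /proc filesystem and that pgrep can find the NiFi process (the process pattern is an assumption; adjust it to your install):

```shell
# Sketch: check whether a java command line carries the JAAS system
# property set via bootstrap.conf.
has_jaas_arg() {
  printf '%s\n' "$1" | grep -q 'java\.security\.auth\.login\.config'
}

# Hypothetical usage on Linux: inspect the live NiFi process.
pid=$(pgrep -f 'org.apache.nifi.NiFi' | head -n 1)
if [ -n "$pid" ]; then
  if has_jaas_arg "$(tr '\0' ' ' < "/proc/$pid/cmdline")"; then
    echo "JAAS config is set"
  else
    echo "JAAS config is missing"
  fi
fi
```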
> >>>>
> >>>> The contents of nifi-jaas.conf are a copy of the relevant MapR JAAS
> >>>> stanza.
> >>>>
> >>>> While the workaround seems to work (I am still running tests), I ask:
> >>>> should setting
> >>>>
> >>>> nifi.state.management.embedded.zookeeper.start=true
> >>>>
> >>>> cause this behavior?
> >>>>
> >>>> Cheers
> >>>>
> >>>
> >>>
> >>
>
