[ https://issues.apache.org/jira/browse/NIFI-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15687755#comment-15687755 ]

Bryan Bende commented on NIFI-3068:
-----------------------------------

Also wanted to note that the issue described above only happens with the latest 1.1 
code and doesn't happen in 1.0. The reason is that in 1.1 we upgraded to the 
Hadoop 2.7.3 client libs, which caused PutHDFS to stop working against 
directories with TDE (transparent data encryption) enabled. That error is the following:

{code}
2016-11-21 21:32:13,306 ERROR [Timer-Driven Process Thread-5] o.apache.nifi.processors.hadoop.PutHDFS PutHDFS[id=01581004-7069-19ef-5ec2-87b728465117] Failed to write to HDFS due to org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=01581004-7069-19ef-5ec2-87b728465117]: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt): org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=01581004-7069-19ef-5ec2-87b728465117]: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
2016-11-21 21:32:13,312 ERROR [Timer-Driven Process Thread-5] o.apache.nifi.processors.hadoop.PutHDFS
org.apache.nifi.processor.exception.ProcessException: IOException thrown from PutHDFS[id=01581004-7069-19ef-5ec2-87b728465117]: java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2104) ~[na:na]
        at org.apache.nifi.controller.repository.StandardProcessSession.read(StandardProcessSession.java:2053) ~[na:na]
        at org.apache.nifi.processors.hadoop.PutHDFS.onTrigger(PutHDFS.java:294) ~[nifi-hdfs-processors-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
        at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) [nifi-api-1.1.0-SNAPSHOT.jar:1.1.0-SNAPSHOT]
{code}

To resolve that issue we had to wrap some of the code in PutHDFS in a 
ugi.doAs(...) block, which in turn caused the issue described above.
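
For reference, the shape of that change is roughly the following. This is only a 
minimal sketch assuming an already-resolved UserGroupInformation, FileSystem, and 
target Path; the names are placeholders, not the actual PutHDFS code:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsWriteSketch {

    /**
     * Copies the given content to HDFS inside ugi.doAs(...) so the write
     * executes with the processor's Kerberos credentials rather than the
     * JVM's current user.
     */
    public static void write(final UserGroupInformation ugi,
                             final FileSystem hdfs,
                             final InputStream content,
                             final Path target) throws IOException, InterruptedException {
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() throws Exception {
                try (FSDataOutputStream out = hdfs.create(target)) {
                    final byte[] buffer = new byte[4096];
                    int len;
                    while ((len = content.read(buffer)) != -1) {
                        out.write(buffer, 0, len);
                    }
                }
                return null;
            }
        });
    }
}
{code}

The point is simply that the FileSystem calls run inside the doAs block, so the 
action executes with the logged-in Kerberos principal instead of whatever identity 
the calling thread happens to have.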

> NiFi can not reliably support multiple HDFS clusters in the same flow
> ---------------------------------------------------------------------
>
>                 Key: NIFI-3068
>                 URL: https://issues.apache.org/jira/browse/NIFI-3068
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 1.0.0
>            Reporter: Sam Hjelmfelt
>            Assignee: Bryan Bende
>              Labels: HDFS
>         Attachments: NIFI-3068.patch
>
>
> The HDFS configurations in PutHDFS are not respected when two (or more) 
> PutHDFS processors exist with different configurations. The second processor 
> to run will use the configurations from the first processor. This can cause 
> data to be written to the wrong cluster.
> This appears to be caused by configuration caching in 
> AbstractHadoopProcessor, which would affect all HDFS processors.
> https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/AbstractHadoopProcessor.java#L144
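
To illustrate the kind of caching the description is pointing at, here is a rough 
sketch of the failure mode with made-up names; it is not the actual 
AbstractHadoopProcessor code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class SharedConfigSketch {

    // Shared across all instances -- this is the problematic part.
    private static Configuration cachedConfig;

    private final String configResources;

    public SharedConfigSketch(final String configResources) {
        this.configResources = configResources;
    }

    public Configuration getConfig() {
        synchronized (SharedConfigSketch.class) {
            if (cachedConfig == null) {
                cachedConfig = new Configuration();
                // Only the first caller's core-site.xml/hdfs-site.xml ever get added.
                cachedConfig.addResource(new Path(configResources));
            }
            return cachedConfig;
        }
    }

    public static void main(final String[] args) {
        final SharedConfigSketch clusterA = new SharedConfigSketch("/etc/clusterA/core-site.xml");
        final SharedConfigSketch clusterB = new SharedConfigSketch("/etc/clusterB/core-site.xml");
        // Both instances hand back the very same Configuration object; cluster B's
        // resources are never added, which is how data can end up on the wrong cluster.
        System.out.println(clusterA.getConfig() == clusterB.getConfig()); // true
    }
}
{code}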



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
