[ https://issues.apache.org/jira/browse/NIFI-1536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15155027#comment-15155027 ]
Oleg Zhurakousky commented on NIFI-1536:
----------------------------------------

Matt, I am having a hard time reproducing this. I have two PutHDFS processors, each pointing to a different file system, and what I see in the logs is exactly what I would expect:

{code}
17:36:18,193 INFO pool-26-thread-2 hadoop.PutHDFS:220 - PutHDFS[id=7ce7cfb8-8d2a-4eff-9507-39c864115950] Initialized a new HDFS File System with working dir: file:/Users/ozhurakousky/dev/nifi/nifi-assembly/target/nifi-0.6.0-SNAPSHOT-bin/nifi-0.6.0-SNAPSHOT default block size: 33554432 default replication: 1 config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, /Users/oleg/dev/remote/yarn-site.xml
. . . .
17:36:18,921 INFO pool-26-thread-1 hadoop.PutHDFS:220 - PutHDFS[id=672282c8-3298-4471-94dd-2b59eece34e0] Initialized a new HDFS File System with working dir: hdfs://localhost:55555/user/ozhurakousky default block size: 134217728 default replication: 3 config: Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml, /Users/oleg/remote-mini/yarn-site.xml
{code}

Could you share some more info?

> multiple putHDFS processors can result in put failures.
> -------------------------------------------------------
>
>                 Key: NIFI-1536
>                 URL: https://issues.apache.org/jira/browse/NIFI-1536
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework
>    Affects Versions: 0.5.0
>        Environment: Java 1.8.0_60
>                     OSX Yosemite 10.10.5
>           Reporter: Matthew Clarke
>           Assignee: Oleg Zhurakousky
>            Fix For: 0.5.1
>
>
> When multiple PutHDFS processors exist, the first to run loads some configuration that is then used by the other PutHDFS processors that are started afterward.
> I have two dataflows set up. One pushes data to a Kerberized HDFS cluster while the other pushes data to a totally different, non-Kerberized HDFS cluster.
> Each PutHDFS is configured to use its own core-site.xml. If I start the PutHDFS that sends to the Kerberized HDFS cluster first, the other PutHDFS will throw an error when it tries to send data to the non-Kerberized HDFS cluster:
>
> ERROR [Timer-Driven Process Thread-7] o.apache.nifi.processors.hadoop.PutHDFS
> java.io.IOException: Failed on local exception: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.; Host Details : local host is: "<client hostname>/<client IP>"; destination host is: "<hdfs hostname>":<hdfs port>
> ....
> Caused by: java.io.IOException: Server asks us to fall back to SIMPLE auth, but this client is configured to only allow secure connections.
>
> Even if I stop the PutHDFS that is sending to the Kerberized HDFS and/or restart the PutHDFS sending to the non-Kerberized HDFS, the above error persists.
> I need to restart NiFi to clear the condition. After a NiFi restart, if I run the PutHDFS that sends to the non-Kerberized HDFS first, the PutHDFS to the Kerberized HDFS will still work.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
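The symptom Matt describes (first processor to start wins, and only a full NiFi restart clears it) is the classic signature of JVM-wide static state, such as the security mode Hadoop clients hold via static configuration. The sketch below is illustrative only: the class and method names are hypothetical stand-ins, not NiFi or Hadoop code, and merely show why a setting established by the first-started component persists for later ones.

```java
// Hypothetical illustration of a JVM-wide static setting leaking between
// two independently configured components. Not actual NiFi/Hadoop classes.
final class StaticAuthConfig {
    // One value for the whole JVM, regardless of how many processors exist.
    private static String authMethod = "SIMPLE";

    static void set(String method) { authMethod = method; }
    static String current()        { return authMethod; }
}

final class FakeProcessor {
    private final String desiredAuth;
    private String authAtConnectTime;   // cached, like a cached FileSystem

    FakeProcessor(String desiredAuth) { this.desiredAuth = desiredAuth; }

    // Starting the processor sets the JVM-wide value and caches a
    // connection under whatever the static state says at that moment.
    void start() {
        StaticAuthConfig.set(desiredAuth);
        authAtConnectTime = StaticAuthConfig.current();
    }

    String effectiveAuth() { return authAtConnectTime; }
}

public class StaticLeakDemo {
    public static void main(String[] args) {
        FakeProcessor kerberized = new FakeProcessor("KERBEROS");
        FakeProcessor simple     = new FakeProcessor("SIMPLE");

        kerberized.start();  // started first: JVM-wide auth is now KERBEROS
        simple.start();      // resets it, but any state cached while
                             // KERBEROS was active stays poisoned

        System.out.println(kerberized.effectiveAuth()); // KERBEROS
        System.out.println(simple.effectiveAuth());     // SIMPLE
    }
}
```

Under this model, restarting one processor does not help because the leaked state lives in the JVM, not in the processor, which matches the report that only a NiFi restart clears the error.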