Github user jtstorck commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2971#discussion_r219380028
  
    --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java ---
    @@ -266,6 +271,16 @@ public Object run() {
                                 throw new IOException(configuredRootDirPath.toString() + " could not be created");
                             }
                             changeOwner(context, hdfs, configuredRootDirPath, flowFile);
    +                    } catch (IOException e) {
    +                      boolean tgtExpired = hasCause(e, GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor());
    +                      if (tgtExpired) {
    +                        getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flow file %s",
    --- End diff --
    
    The exception should be logged here, in addition to the flowfile UUID. It would be useful to have the stack trace and exception class available in the log, and we shouldn't suppress/omit the actual GSSException from the logging.
    
    It might also be a good idea to log this at the "warn" level, so that the user can choose not to have these show as bulletins on the processor in the UI. Since the flowfile is being rolled back, and hadoop-client will implicitly acquire a new ticket, I don't think this should show as an error. @mcgilman, @bbende, do either of you have a preference here?
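    
    For reference, a minimal sketch of what the warn-level logging could look like with NiFi's ComponentLog, which accepts a Throwable alongside the message arguments. This is not the PR's exact change; identifiers such as hasCause, flowFile, session, and context are assumed to come from the surrounding PutHDFS code:
    
        // Assumed imports: org.ietf.jgss.GSSException,
        //                  org.apache.nifi.flowfile.attributes.CoreAttributes
        } catch (IOException e) {
            // Assumed helper from the PR: walks the cause chain for a GSSException
            // whose major code indicates missing/expired Kerberos credentials.
            final boolean tgtExpired = hasCause(e, GSSException.class,
                    gsse -> GSSException.NO_CRED == gsse.getMajor());
            if (tgtExpired) {
                // Pass the IOException itself so its class and stack trace reach the
                // log instead of being suppressed, and use "warn" so it does not
                // surface as an error bulletin on the processor.
                getLogger().warn("An error occurred while connecting to HDFS. "
                                + "Rolling back session, and penalizing flow file {}",
                        new Object[]{flowFile.getAttribute(CoreAttributes.UUID.key())}, e);
                // One option: rollback(true) penalizes the flowfile as part of the rollback.
                session.rollback(true);
                context.yield();
            }
        }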

