Lehel44 commented on code in PR #8495: URL: https://github.com/apache/nifi/pull/8495#discussion_r1554480782
##########
nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/MoveHDFS.java
##########

```diff
@@ -352,95 +355,95 @@ protected void processBatchOfFiles(final List<Path> files, final ProcessContext
         for (final Path file : files) {
-            ugi.doAs(new PrivilegedAction<Object>() {
-                @Override
-                public Object run() {
-                    FlowFile flowFile = session.create(parentFlowFile);
-                    try {
-                        final String originalFilename = file.getName();
-                        final Path outputDirPath = getNormalizedPath(context, OUTPUT_DIRECTORY, parentFlowFile);
-                        final Path newFile = new Path(outputDirPath, originalFilename);
-                        final boolean destinationExists = hdfs.exists(newFile);
-                        // If destination file already exists, resolve that
-                        // based on processor configuration
-                        if (destinationExists) {
-                            switch (processorConfig.getConflictResolution()) {
-                                case REPLACE_RESOLUTION:
-                                    // Remove destination file (newFile) to replace
-                                    if (hdfs.delete(newFile, false)) {
-                                        getLogger().info("deleted {} in order to replace with the contents of {}",
-                                                new Object[]{newFile, flowFile});
-                                    }
-                                    break;
-                                case IGNORE_RESOLUTION:
-                                    session.transfer(flowFile, REL_SUCCESS);
-                                    getLogger().info(
-                                            "transferring {} to success because file with same name already exists",
-                                            new Object[]{flowFile});
-                                    return null;
-                                case FAIL_RESOLUTION:
-                                    session.transfer(session.penalize(flowFile), REL_FAILURE);
-                                    getLogger().warn(
-                                            "penalizing {} and routing to failure because file with same name already exists",
-                                            new Object[]{flowFile});
-                                    return null;
-                                default:
-                                    break;
-                            }
+            ugi.doAs((PrivilegedAction<Object>) () -> {
```

Review Comment:
I think the patch didn't apply correctly; the NullPointerException still occurs.
The catch block in the processBatchOfFiles method should look like this:

```java
catch (final Throwable t) {
    final Optional<GSSException> causeOptional = findCause(t, GSSException.class,
            gsse -> GSSException.NO_CRED == gsse.getMajor());
    if (causeOptional.isPresent()) {
        throw new UncheckedIOException(new IOException(causeOptional.get()));
    }
    getLogger().error("Failed to rename on HDFS due to {}", new Object[]{t});
    session.transfer(session.penalize(flowFile), REL_FAILURE);
    context.yield();
}
```

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org
For queries about this service, please contact Infrastructure at: us...@infra.apache.org
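The suggested catch relies on a `findCause` helper that searches the exception's cause chain for a throwable of a given type matching a predicate. As a minimal sketch of what such a helper might look like (the name and signature are inferred from the call site above, and NiFi's actual implementation may differ, e.g. by guarding against cyclic cause chains), demonstrated here with a plain exception chain so it runs without a `GSSException` or Kerberos setup:

```java
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical reconstruction of the findCause helper referenced in the review comment.
public class FindCauseSketch {

    // Walks t and its causes, returning the first throwable that is an
    // instance of `type` and satisfies `matcher`, or empty if none is found.
    static <T extends Throwable> Optional<T> findCause(final Throwable t,
                                                       final Class<T> type,
                                                       final Predicate<T> matcher) {
        Throwable cause = t;
        while (cause != null) {
            if (type.isInstance(cause) && matcher.test(type.cast(cause))) {
                return Optional.of(type.cast(cause));
            }
            cause = cause.getCause();
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Stand-in for the GSSException-wrapped-in-IOException case from the PR.
        final Throwable chain = new RuntimeException(new IllegalStateException("no credentials"));
        final Optional<IllegalStateException> found = findCause(chain,
                IllegalStateException.class, e -> e.getMessage().contains("credentials"));
        System.out.println(found.isPresent()); // prints "true"
    }
}
```

With this shape, the catch block can distinguish a missing-credential `GSSException` (rethrown so the session rolls back and the framework can retry after relogin) from other failures that should route the FlowFile to failure.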