[ https://issues.apache.org/jira/browse/YARN-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413916#comment-15413916 ]
Karthik Kambatla commented on YARN-5373:
----------------------------------------

Thanks for picking this up, [~templedf]. Quick high-level question: after chowning to the user who will run the container, can we setfacl to give access to user "yarn" as well?

Comments on the patch itself:
# container-executor.c
## The log messages for failure to open/read the directory are missing the word NOT.
## After readdir, I see the patch resets errno. What happens if the first call to readdir fails? Don't we lose the errno, and so fail to log it and return -1? Maybe reset before the readdir call? Or skip resetting altogether?
## For the (dir == NULL) check, can we invert the operands to (NULL == dir)?
# test-container-executor.c - typo: s/existant/existent

On the tests, do we need tests with {{linux-container-executor.nonsecure-mode.limit-users}} turned on/off?

> NPE listing wildcard directory in containerLaunch
> -------------------------------------------------
>
>                 Key: YARN-5373
>                 URL: https://issues.apache.org/jira/browse/YARN-5373
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: nodemanager
>    Affects Versions: 2.9.0
>            Reporter: Haibo Chen
>            Assignee: Daniel Templeton
>            Priority: Blocker
>        Attachments: YARN-5373.001.patch, YARN-5373.002.patch
>
> YARN-4958 added support for wildcards in file localization. It introduces an NPE at
> {code:java}
> for (File wildLink : directory.listFiles()) {
>   sb.symlink(new Path(wildLink.toString()), new Path(wildLink.getName()));
> }
> {code}
> When directory.listFiles returns null (which only happens in a secure cluster), the NPE causes the container launch to fail.
> Hive and Oozie jobs fail as a result.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org
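[Editor's note] For context on the NPE in the quoted snippet: java.io.File.listFiles() returns null (rather than an empty array) when the target is not a directory or cannot be read, e.g. due to permissions in a secure cluster. The fix under review amounts to guarding that result. A minimal standalone sketch of the guard, using plain java.io.File (the Hadoop-specific Path/symlink calls are omitted, and the class and method names here are illustrative, not from the patch):

```java
import java.io.File;

public class WildcardListing {
    // Sketch of the null guard under discussion: listFiles() returns null
    // when the directory cannot be read, so check before iterating.
    static String[] listWildcardEntries(File directory) {
        File[] entries = directory.listFiles();
        if (entries == null) {
            // Surface a clear error instead of an NPE; the actual patch
            // would log the failure and fail the container launch here.
            throw new IllegalStateException(
                "Could not list directory " + directory);
        }
        String[] names = new String[entries.length];
        for (int i = 0; i < entries.length; i++) {
            names[i] = entries[i].getName();
        }
        return names;
    }

    public static void main(String[] args) {
        try {
            listWildcardEntries(new File("/nonexistent-dir"));
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Without the guard, the enhanced for loop would dereference the null array and throw the NullPointerException described in the issue.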