[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14954513#comment-14954513 ]

nijel commented on HADOOP-11715:
--------------------------------

Hi [~brandonli], can you have a look at this change?

> azureFs::getFileStatus doesn't check the file system scheme and thus could
> throw a misleading exception.
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-11715
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11715
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.7.0
>            Reporter: Brandon Li
>            Assignee: nijel
>              Labels: BB2015-05-TBR
>             Fix For: 2.8.0
>
>         Attachments: HADOOP-11715.1.patch, HADOOP-11715.2.patch, HADOOP-11715.3.patch
>
>
> azureFs::getFileStatus doesn't check the file system scheme and thus could
> throw a misleading exception.
> For example, it complains filenotfound instead of wrong-fs for an hdfs path:
> Caused by: java.io.FileNotFoundException:
> hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split:
> No such file or directory.
> at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
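The kind of check the attached patches are discussing can be illustrated with a plain-Java sketch (class and method names here are hypothetical, not the actual patch): validate the path's URI scheme against the file system's own scheme before doing any lookup, so the caller gets a wrong-FS error instead of a misleading FileNotFoundException.

```java
import java.net.URI;

// Hypothetical illustration of a scheme check for a getFileStatus-style call.
public class SchemeCheckSketch {

    // The scheme this (imaginary) file system serves, e.g. "wasb" for Azure.
    private final String fsScheme;

    public SchemeCheckSketch(String fsScheme) {
        this.fsScheme = fsScheme;
    }

    /** Throws if the path belongs to a different file system. */
    public void checkPath(String path) {
        String scheme = URI.create(path).getScheme();
        if (scheme != null && !scheme.equalsIgnoreCase(fsScheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected scheme: " + fsScheme);
        }
    }
}
```

With this guard in place, an hdfs:// path handed to an Azure file system fails fast with a "Wrong FS" message rather than surfacing later as a file-not-found error.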
[jira] [Moved] (HADOOP-12442) Display help if the command option to "hdfs dfs " is not valid
[ https://issues.apache.org/jira/browse/HADOOP-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel moved HDFS-9125 to HADOOP-12442:
--------------------------------------
    Key: HADOOP-12442  (was: HDFS-9125)
Project: Hadoop Common  (was: Hadoop HDFS)

> Display help if the command option to "hdfs dfs " is not valid
> --------------------------------------------------------------
>
>          Key: HADOOP-12442
>          URL: https://issues.apache.org/jira/browse/HADOOP-12442
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: nijel
>     Assignee: nijel
>     Priority: Minor
>  Attachments: HDFS-9125_1.patch, HDFS-9125_2.patch, HDFS-9125_3.patch
>
>
> {noformat}
> master:/home/nijel/hadoop-3.0.0-SNAPSHOT/bin # ./hdfs dfs -mkdirs
> -mkdirs: Unknown command
> {noformat}
> Better to display the help info.
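The improvement asked for above amounts to printing the usage text when the option is not recognized, instead of only "Unknown command". A minimal stand-alone sketch (the command list and message format are illustrative, not the real FsShell code):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the proposed behaviour: on an unknown option,
// return the usage/help text along with the "Unknown command" line.
public class DfsShellSketch {

    // A few sample shell options; the real shell has many more.
    private static final List<String> COMMANDS =
        Arrays.asList("-ls", "-mkdir", "-put", "-get", "-rm");

    public static String run(String option) {
        if (!COMMANDS.contains(option)) {
            return option + ": Unknown command\n"
                + "Usage: hdfs dfs [generic options] " + COMMANDS;
        }
        return "running " + option;
    }
}
```

So `run("-mkdirs")` would report the unknown command and then list the valid options, which is the behaviour the patch proposes.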
[jira] [Updated] (HADOOP-12442) Display help if the command option to "hdfs dfs " is not valid
[ https://issues.apache.org/jira/browse/HADOOP-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-12442:
---------------------------
    Attachment: HDFS-9125_4.patch

Thanks [~vinayrpet] for the comments. Updated the patch.
[jira] [Updated] (HADOOP-12442) Display help if the command option to "hdfs dfs " is not valid
[ https://issues.apache.org/jira/browse/HADOOP-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-12442:
---------------------------
    Attachment: (was: HDFS-9125_4.patch)
[jira] [Updated] (HADOOP-12442) Display help if the command option to "hdfs dfs " is not valid
[ https://issues.apache.org/jira/browse/HADOOP-12442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-12442:
---------------------------
    Attachment: HADOOP-12442.patch
[jira] [Commented] (HADOOP-8862) remove deprecated properties used in default configurations
[ https://issues.apache.org/jira/browse/HADOOP-8862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534007#comment-14534007 ]

nijel commented on HADOOP-8862:
-------------------------------

LGTM. +1 for the change.

> remove deprecated properties used in default configurations
> ------------------------------------------------------------
>
>          Key: HADOOP-8862
>          URL: https://issues.apache.org/jira/browse/HADOOP-8862
>      Project: Hadoop Common
>   Issue Type: Improvement
>   Components: conf
> Affects Versions: 3.0.0
>     Reporter: Jianbin Wei
>     Assignee: Jianbin Wei
>       Labels: BB2015-05-TBR
>  Attachments: HADOOP-8862-01.patch, HADOOP-8862.patch
>
>
> We need to remove the deprecated properties included in the default
> configurations, such as core-default.xml and core-site.xml.
[jira] [Commented] (HADOOP-7308) Remove unused TaskLogAppender configurations from log4j.properties
[ https://issues.apache.org/jira/browse/HADOOP-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534106#comment-14534106 ]

nijel commented on HADOOP-7308:
-------------------------------

Thanks [~andreina] for the update. LGTM, +1.

> Remove unused TaskLogAppender configurations from log4j.properties
> ------------------------------------------------------------------
>
>          Key: HADOOP-7308
>          URL: https://issues.apache.org/jira/browse/HADOOP-7308
>      Project: Hadoop Common
>   Issue Type: Improvement
>   Components: conf
> Affects Versions: 0.22.0
>     Reporter: Todd Lipcon
>     Assignee: Todd Lipcon
>       Labels: BB2015-05-TBR
>  Attachments: HADOOP-7308.1.patch, hadoop-7308.txt
>
>
> MAPREDUCE-2372 improved TaskLogAppender to no longer need as much wiring
> in log4j.properties. There are also some old properties in there that are
> no longer used (e.g. logsRetainHours and noKeepSplits).
[jira] [Commented] (HADOOP-7891) KerberosName method typo and log warning when rules are set
[ https://issues.apache.org/jira/browse/HADOOP-7891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534108#comment-14534108 ]

nijel commented on HADOOP-7891:
-------------------------------

Thanks [~surendrasingh]. +1 for the change.

> KerberosName method typo and log warning when rules are set
> ------------------------------------------------------------
>
>          Key: HADOOP-7891
>          URL: https://issues.apache.org/jira/browse/HADOOP-7891
>      Project: Hadoop Common
>   Issue Type: Improvement
>   Components: security
> Affects Versions: 0.23.1
>     Reporter: Alejandro Abdelnur
>     Assignee: Alejandro Abdelnur
>     Priority: Minor
>       Labels: BB2015-05-TBR
>  Attachments: HADOOP-7891.patch, HADOOP-7891_1.patch
>
>
> The method hasRulesBeenSet() should be named haveRulesBeenSet().
> If the rules setting is skipped during UGI initialization because the rules
> have already been set, a warning should be logged, along the following lines:
> "Not setting kerberos name mappings defined in hadoop.security.auth_to_local
> because name mappings are already set"
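The behaviour described in the issue can be sketched in isolation (all names here are hypothetical stand-ins, not the real KerberosName/UGI code): a setter that refuses to overwrite already-set rules and records the warning instead.

```java
// Hypothetical sketch: skip re-applying auth_to_local rules when they are
// already set, logging a warning instead of silently overwriting them.
public class RulesSketch {
    private static String rules;
    public static String lastWarning;

    public static boolean haveRulesBeenSet() {
        return rules != null;
    }

    public static void setRulesIfAbsent(String newRules) {
        if (haveRulesBeenSet()) {
            // In real code this would go through the logging framework.
            lastWarning = "Not setting kerberos name mappings defined in "
                + "hadoop.security.auth_to_local because name mappings are already set";
            return;
        }
        rules = newRules;
    }

    public static String getRules() {
        return rules;
    }
}
```

The first call wins; a second call leaves the rules untouched and only emits the warning, which is exactly what the issue asks the initialization path to do.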
[jira] [Commented] (HADOOP-9905) remove dependency of zookeeper for hadoop-client
[ https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534192#comment-14534192 ]

nijel commented on HADOOP-9905:
-------------------------------

Thanks [~vinayrpet] for the patch. LGTM, +1.

> remove dependency of zookeeper for hadoop-client
> ------------------------------------------------
>
>          Key: HADOOP-9905
>          URL: https://issues.apache.org/jira/browse/HADOOP-9905
>      Project: Hadoop Common
>   Issue Type: Bug
> Affects Versions: 3.0.0, 2.1.0-beta, 2.0.6-alpha
>     Reporter: Vinayakumar B
>     Assignee: Vinayakumar B
>       Labels: BB2015-05-TBR
>  Attachments: HADOOP-9905.patch
>
>
> The zookeeper dependency was added for ZKFC, which will not be used by the
> client. Better to remove the dependency on the zookeeper jar from hadoop-client.
[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client
[ https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-9905:
--------------------------
    Labels: BB-2015-05-RFC  (was: BB-2015-05-rfc)
[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client
[ https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-9905:
--------------------------
    Labels: BB-2015-05-rfc  (was: BB2015-05-TBR)
[jira] [Updated] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect
[ https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-9729:
--------------------------
    Labels: BB-2015-05-RFC  (was: BB2015-05-TBR)

> The example code of org.apache.hadoop.util.Tool is incorrect
> ------------------------------------------------------------
>
>          Key: HADOOP-9729
>          URL: https://issues.apache.org/jira/browse/HADOOP-9729
>      Project: Hadoop Common
>   Issue Type: Bug
>   Components: util
> Affects Versions: 1.1.2
>     Reporter: hellojinjie
>       Labels: BB-2015-05-RFC
>  Attachments: HADOOP-9729.patch
>
> Original Estimate: 1h
> Remaining Estimate: 1h
>
> See http://hadoop.apache.org/docs/stable/api/org/apache/hadoop/util/Tool.html:
> the function public int run(String[] args) has no return value in the
> example code.
[jira] [Commented] (HADOOP-9729) The example code of org.apache.hadoop.util.Tool is incorrect
[ https://issues.apache.org/jira/browse/HADOOP-9729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534357#comment-14534357 ]

nijel commented on HADOOP-9729:
-------------------------------

Patch looks good. +1.
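What the corrected javadoc example boils down to is that run() must return an exit status. The sketch below mirrors the shape of org.apache.hadoop.util.Tool with a local stand-in interface so it compiles without the Hadoop dependency; the class names are illustrative only.

```java
// Stand-in for org.apache.hadoop.util.Tool (same run() signature).
interface ToolLike {
    int run(String[] args) throws Exception;
}

public class MyToolSketch implements ToolLike {
    @Override
    public int run(String[] args) throws Exception {
        // ... do the real work, e.g. configure and submit a job ...
        boolean success = true;
        // Returning the status was the piece missing from the old example:
        // 0 for success, non-zero for failure, as ToolRunner expects.
        return success ? 0 : 1;
    }
}
```

In real Hadoop code this return value is what ToolRunner.run(...) hands back to main() as the process exit code, which is why omitting it made the example both non-compiling and misleading.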
[jira] [Updated] (HADOOP-9905) remove dependency of zookeeper for hadoop-client
[ https://issues.apache.org/jira/browse/HADOOP-9905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-9905:
--------------------------
    Labels: BB2015-05-RFC  (was: BB-2015-05-RFC)
[jira] [Commented] (HADOOP-9723) Improve error message when hadoop archive output path already exists
[ https://issues.apache.org/jira/browse/HADOOP-9723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534346#comment-14534346 ]

nijel commented on HADOOP-9723:
-------------------------------

LGTM. +1 for the change.

> Improve error message when hadoop archive output path already exists
> --------------------------------------------------------------------
>
>          Key: HADOOP-9723
>          URL: https://issues.apache.org/jira/browse/HADOOP-9723
>      Project: Hadoop Common
>   Issue Type: Improvement
> Affects Versions: 3.0.0, 2.0.4-alpha
>     Reporter: Stephen Chu
>     Assignee: Akira AJISAKA
>     Priority: Trivial
>       Labels: BB2015-05-TBR
>  Attachments: HADOOP-9723.2.patch, HADOOP-9723.patch
>
>
> When creating a hadoop archive and specifying an output path of an already
> existing file, we get an "Invalid Output" error message.
> {code}
> [schu@hdfs-vanilla-1 ~]$ hadoop archive -archiveName foo.har -p /user/schu testDir1 /user/schu
> Invalid Output: /user/schu/foo.har
> {code}
> This error can be improved to tell users immediately that the output path
> already exists.
[jira] [Updated] (HADOOP-9723) Improve error message when hadoop archive output path already exists
[ https://issues.apache.org/jira/browse/HADOOP-9723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-9723:
--------------------------
    Labels: BB-2015-05-RFC  (was: BB2015-05-TBR)
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14519336#comment-14519336 ]

nijel commented on HADOOP-11715:
--------------------------------

The whitespace warnings look unrelated to this patch. Thanks.
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11715:
---------------------------
    Attachment: HADOOP-11715.3.patch

Removed lines with more than 80 characters.
[jira] [Updated] (HADOOP-11677) Missing secure session attributed for log and static contexts
[ https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11677:
---------------------------
    Attachment: HADOOP-11677-2.patch

Added the missing import. Could not find any test class for HTTPServer2;
I will try to add tests for this class.

> Missing secure session attributed for log and static contexts
> --------------------------------------------------------------
>
>          Key: HADOOP-11677
>          URL: https://issues.apache.org/jira/browse/HADOOP-11677
>      Project: Hadoop Common
>   Issue Type: Bug
>     Reporter: nijel
>     Assignee: nijel
>  Attachments: 001-HADOOP-11677.patch, HADOOP-11677-2.patch, HADOOP-11677.1.patch
>
>
> In HTTPServer2.java, for the default context the secure attributes are set:
> {code}
> SessionManager sm = webAppContext.getSessionHandler().getSessionManager();
> if (sm instanceof AbstractSessionManager) {
>   AbstractSessionManager asm = (AbstractSessionManager)sm;
>   asm.setHttpOnly(true);
>   asm.setSecureCookies(true);
> }
> {code}
> But when the contexts for /logs and /static are created, new contexts are
> created and the session handler is assigned as null. The secure attributes
> need to be set here as well. Was this left out intentionally? Please share
> your thoughts.
> Background: trying to add a login action for the HTTP pages. After that,
> when a security test tool is used, it reports errors for these 2 URLs
> (/logs and /static).
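The shape of the fix being discussed can be sketched without a Jetty dependency (SessionSettings below is a stand-in for the session manager, and all names are hypothetical): the point is simply that the same cookie hardening must be applied to every context, not only the default one.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of applying the same secure-cookie flags to all contexts
// (default, /logs, /static) rather than just the default webAppContext.
public class SecureContextsSketch {

    // Stand-in for the per-context session configuration.
    static class SessionSettings {
        boolean httpOnly;
        boolean secureCookies;
    }

    public static void harden(List<SessionSettings> allContexts) {
        for (SessionSettings s : allContexts) {
            s.httpOnly = true;       // cookie not readable from JavaScript
            s.secureCookies = true;  // cookie only sent over HTTPS
        }
    }
}
```

This mirrors the issue's complaint: the existing code hardens one context's session manager, while the /logs and /static contexts need the same treatment.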
[jira] [Updated] (HADOOP-11677) Missing secure session attributed for log and static contexts
[ https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11677:
---------------------------
    Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-11677) Missing secure session attributed for log and static contexts
[ https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11677:
---------------------------
    Attachment: HADOOP-11677.1.patch

Please review the patch.
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14516978#comment-14516978 ]

nijel commented on HADOOP-11715:
--------------------------------

Updated the patch to fix the whitespace issue and test failures.
I did not see any details of the checkstyle comments. Can anyone guide me?
Thanks in advance.
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11715:
---------------------------
    Attachment: HADOOP-11715.2.patch
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11715:
---------------------------
    Attachment: HADOOP-11715.1.patch

Attaching the patch. Please review.
[jira] [Updated] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11715:
---------------------------
    Fix Version/s: 2.8.0
           Status: Patch Available  (was: Open)
[jira] [Moved] (HADOOP-11839) when try to roll one key which not exist in kms ,will have nullpointer Exception
[ https://issues.apache.org/jira/browse/HADOOP-11839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel moved HDFS-8158 to HADOOP-11839:
--------------------------------------
      Component/s: (was: encryption)
                   kms
Affects Version/s: (was: 2.6.0)
                   2.6.0
              Key: HADOOP-11839  (was: HDFS-8158)
          Project: Hadoop Common  (was: Hadoop HDFS)

> when try to roll one key which not exist in kms ,will have nullpointer Exception
> --------------------------------------------------------------------------------
>
>          Key: HADOOP-11839
>          URL: https://issues.apache.org/jira/browse/HADOOP-11839
>      Project: Hadoop Common
>   Issue Type: Bug
>   Components: kms
> Affects Versions: 2.6.0
>     Reporter: huangyitian
>     Assignee: J.Andreina
>     Priority: Minor
>
> Test step:
> 1. Try to roll one key which does not exist in KMS: ./hadoop key roll hyt
> Test result: a NullPointerException appears on the Linux console:
> {noformat}
> vm-204:/opt/OpenSource/install/hadoop/namenode/bin # ./hadoop key roll hyt
> 15/04/16 11:58:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Rolling key version from KeyProvider: KMSClientProvider[http://9.91.8.204:16000/kms/v1/] for key name: hyt
> java.lang.NullPointerException
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:485)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:443)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.rollNewVersionInternal(KMSClientProvider.java:649)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.rollNewVersion(KMSClientProvider.java:660)
>         at org.apache.hadoop.crypto.key.KeyShell$RollCommand.execute(KeyShell.java:347)
>         at org.apache.hadoop.crypto.key.KeyShell.run(KeyShell.java:79)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.crypto.key.KeyShell.main(KeyShell.java:515)
> {noformat}
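The kind of guard the fix needs can be sketched in isolation (the class and its in-memory key map below are hypothetical, not the real KMSClientProvider): check that the key exists before rolling it, and fail with a clear message instead of letting a null propagate into a NullPointerException.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: validate key existence before rolling a version.
public class KeyRollSketch {

    // Stand-in for the provider's key store: name -> current version.
    private final Map<String, Integer> keyVersions = new HashMap<>();

    public void createKey(String name) {
        keyVersions.put(name, 0);
    }

    /** Rolls the key and returns the new version number. */
    public int rollKey(String name) {
        Integer version = keyVersions.get(name);
        if (version == null) {
            // Clear, user-facing failure instead of a downstream NPE.
            throw new IllegalArgumentException(
                "Key '" + name + "' does not exist; cannot roll it");
        }
        keyVersions.put(name, version + 1);
        return version + 1;
    }
}
```

With this shape, `hadoop key roll hyt` for a missing key would report that the key does not exist rather than printing the stack trace shown above.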
[jira] [Commented] (HADOOP-11669) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
[ https://issues.apache.org/jira/browse/HADOOP-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381234#comment-14381234 ]

nijel commented on HADOOP-11669:
--------------------------------

Code refactoring only; tests not applicable. Please review.

> Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
> ------------------------------------------------------------------------------
>
>          Key: HADOOP-11669
>          URL: https://issues.apache.org/jira/browse/HADOOP-11669
>      Project: Hadoop Common
>   Issue Type: Improvement
>     Reporter: nijel
>     Assignee: nijel
>     Priority: Minor
>  Attachments: 0001-HDFS-7883.patch, 001-HADOOP-11669.patch
>
>
> These 2 configurations in HttpServer2.java are hadoop configurations:
> {code}
> static final String FILTER_INITIALIZER_PROPERTY = "hadoop.http.filter.initializers";
> public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
> {code}
> It is better to keep them inside CommonConfigurationKeys.
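The refactoring amounts to hoisting the string constants out of the server class into a shared keys class. A minimal sketch (the class name is a stand-in, not the real CommonConfigurationKeys):

```java
// Sketch: configuration key names collected in one shared constants class,
// so HttpServer2-style code references them instead of inlining the strings.
public class ConfKeysSketch {
    public static final String FILTER_INITIALIZER_PROPERTY =
        "hadoop.http.filter.initializers";
    public static final String HTTP_MAX_THREADS =
        "hadoop.http.max.threads";
}
```

Centralizing the key strings keeps every consumer (server setup, docs, tests) referring to one definition, which is the point of moving them out of HttpServer2.java.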
[jira] [Updated] (HADOOP-11732) Make the KMS related log file names consistent with other hadoop processes
[ https://issues.apache.org/jira/browse/HADOOP-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11732:
---------------------------
    Status: Patch Available  (was: Open)

> Make the KMS related log file names consistent with other hadoop processes
> ---------------------------------------------------------------------------
>
>          Key: HADOOP-11732
>          URL: https://issues.apache.org/jira/browse/HADOOP-11732
>      Project: Hadoop Common
>   Issue Type: Bug
>   Components: kms
>     Reporter: nijel
>     Assignee: nijel
>     Priority: Minor
>  Attachments: 0001-HADOOP-11732.patch
>
>
> Now the kms log file names are kms.log and kms-audit.log.
> Preferably KMS also can use the same log file name pattern as other processes:
> hadoop-user-kms-host.log
> hadoop-user-kms-host-audit.log
[jira] [Created] (HADOOP-11732) Make the KMS related log file name consistent with other hadoop processes
nijel created HADOOP-11732:
---------------------------

Summary: Make the KMS related log file name consistent with other hadoop processes
    Key: HADOOP-11732
    URL: https://issues.apache.org/jira/browse/HADOOP-11732
Project: Hadoop Common
Issue Type: Bug
Components: kms
Reporter: nijel
Assignee: nijel
Priority: Minor

Now the kms log file names are kms.log and kms-audit.log.
Preferably KMS also can use the same log file name pattern as other processes:
hadoop-user-kms-host.log
hadoop-user-kms-host-audit.log
[jira] [Updated] (HADOOP-11732) Make the KMS related log file name consistent with other hadoop processes
[ https://issues.apache.org/jira/browse/HADOOP-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11732:
---------------------------
    Attachment: 0001-HADOOP-11732.patch

Attached an initial patch. Please have a look.
[jira] [Updated] (HADOOP-11732) Make the KMS related log file names consistent with other hadoop processes
[ https://issues.apache.org/jira/browse/HADOOP-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11732:
---------------------------
    Summary: Make the KMS related log file names consistent with other hadoop processes  (was: Make the KMS related log file name consistent with other hadoop processes)

> Make the KMS related log file names consistent with other hadoop processes
> --------------------------------------------------------------------------
>
> Key: HADOOP-11732
> URL: https://issues.apache.org/jira/browse/HADOOP-11732
> Project: Hadoop Common
> Issue Type: Bug
> Components: kms
> Reporter: nijel
> Assignee: nijel
> Priority: Minor
> Attachments: 0001-HADOOP-11732.patch
>
> Now the KMS log file names are kms.log and kms-audit.log.
> Preferably KMS can also use the same log file name pattern as other processes:
> hadoop-user-kms-host.log
> hadoop-user-kms-host-audit.log

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11734) Support rotation of kms.out file.
nijel created HADOOP-11734:
---------------------------
Summary: Support rotation of kms.out file.
Key: HADOOP-11734
URL: https://issues.apache.org/jira/browse/HADOOP-11734
Project: Hadoop Common
Issue Type: Bug
Components: kms
Reporter: nijel
Assignee: nijel
Priority: Minor

The kms.out file always keeps appending. Better to support file rolling for the last 5 restarts.
A potential issue is that in a deployment the log file can grow if there is some error on startup. In my case the OM system kept restarting, and it failed due to some misconfiguration.
Please give your opinion.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
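The rolling described above could be done in the shell wrapper before the process redirects its output again. A minimal sketch, assuming the script knows the kms.out path; the function name and the 5-file limit are illustrative, not taken from any attached patch:

```shell
#!/bin/sh
# Hypothetical sketch for the KMS startup script: before stdout is
# redirected to kms.out again, shift the previous copies so the output
# of the last 5 restarts is kept as kms.out.1 .. kms.out.5.
rotate_out() {
  out="$1"
  i=4
  while [ "$i" -ge 1 ]; do
    if [ -f "$out.$i" ]; then
      mv "$out.$i" "$out.$((i+1))"
    fi
    i=$((i-1))
  done
  if [ -f "$out" ]; then
    mv "$out" "$out.1"
  fi
  return 0
}
```

The oldest file (kms.out.5) is silently overwritten on the next shift, which bounds disk usage even when a misconfigured daemon restarts in a loop.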
[jira] [Commented] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363081#comment-14363081 ]

nijel commented on HADOOP-11715:
--------------------------------
I would like to work on this. Please feel free to reassign if work has already started.

> azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
> --------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-11715
> URL: https://issues.apache.org/jira/browse/HADOOP-11715
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.7.0
> Reporter: Brandon Li
>
> azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
> For example, it complains filenotfound instead of wrong-fs for an hdfs path:
> Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory.
> at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-11715) azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
[ https://issues.apache.org/jira/browse/HADOOP-11715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel reassigned HADOOP-11715:
------------------------------
    Assignee: nijel

> azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
> --------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-11715
> URL: https://issues.apache.org/jira/browse/HADOOP-11715
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.7.0
> Reporter: Brandon Li
> Assignee: nijel
>
> azureFs::getFileStatus doesn't check the file system scheme and thus could throw a misleading exception.
> For example, it complains filenotfound instead of wrong-fs for an hdfs path:
> Caused by: java.io.FileNotFoundException: hdfs://headnode0:9000/hive/scratch/hadoopqa/a7d34a22-57eb-4678-84b4-43d84027d45f/hive_2015-03-02_23-13-04_713_5722627238053417441-1/hadoopqa/_tez_scratch_dir/_tez_scratch_dir/split_Map_1/job.split: No such file or directory.
> at org.apache.hadoop.fs.azure.NativeAzureFileSystem.getFileStatus(NativeAzureFileSystem.java:1625)

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
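The misleading FileNotFoundException suggests a missing wrong-FS check before the path is used. A hedged sketch in plain JDK types of the kind of scheme validation a fix could add; the class and method names here are illustrative, not taken from the attached patches:

```java
import java.net.URI;

public class SchemeCheck {
    // Hypothetical helper: reject paths whose scheme does not match the
    // file system's own scheme, instead of failing later with a confusing
    // FileNotFoundException against a path the FS cannot serve at all.
    static void checkScheme(String fsScheme, URI path) {
        String s = path.getScheme();
        if (s != null && !s.equalsIgnoreCase(fsScheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected scheme: " + fsScheme);
        }
    }

    public static void main(String[] args) {
        // A wasb:// path passes; an hdfs:// path fails fast with a clear error.
        checkScheme("wasb", URI.create("wasb://container@account/file"));
        try {
            checkScheme("wasb", URI.create("hdfs://headnode0:9000/job.split"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a check like this, the hdfs:// path in the reported stack trace would be rejected as "Wrong FS" up front rather than surfacing as a spurious "No such file or directory".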
[jira] [Commented] (HADOOP-9874) hadoop.security.logger output goes to both logs
[ https://issues.apache.org/jira/browse/HADOOP-9874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14354245#comment-14354245 ]

nijel commented on HADOOP-9874:
-------------------------------
Hi, I faced the same issue and solved it by adding:
{code}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit = false
log4j.additivity.org.apache.hadoop.hdfs.server.common.HadoopAuditLogger.audit = false
{code}
Can this be added to the default log4j file?

> hadoop.security.logger output goes to both logs
> -----------------------------------------------
>
> Key: HADOOP-9874
> URL: https://issues.apache.org/jira/browse/HADOOP-9874
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 2.1.0-beta
> Reporter: Allen Wittenauer
>
> Setting hadoop.security.logger (for SecurityLogger messages) to non-null sends authentication information to the other log as specified. However, that logging information also goes to the main log. It should only go to one log, not both.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
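For context, those additivity flags only make sense together with a dedicated audit appender: without additivity=false, log4j forwards audit events up to the root logger as well, which is exactly the double-logging reported here. A hedged log4j.properties sketch in the spirit of Hadoop's stock configuration; the RFAAUDIT appender name follows Hadoop convention, but the file name and pattern below are assumptions:

```properties
# Send HDFS audit events to their own rolling file appender...
hdfs.audit.logger=INFO,RFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
# ...and stop them from ALSO propagating to the root logger's appender.
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false

log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
```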
[jira] [Updated] (HADOOP-11677) Missing secure session attributes for log and static contexts
[ https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11677:
---------------------------
    Attachment: 001-HADOOP-11677.patch

Attaching the patch with the change. Please review whether the change makes sense.

> Missing secure session attributes for log and static contexts
> --------------------------------------------------------------
>
> Key: HADOOP-11677
> URL: https://issues.apache.org/jira/browse/HADOOP-11677
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: nijel
> Assignee: nijel
> Attachments: 001-HADOOP-11677.patch
>
> In HttpServer2.java the secure attributes are set for the default context:
> {code}
> SessionManager sm = webAppContext.getSessionHandler().getSessionManager();
> if (sm instanceof AbstractSessionManager) {
>   AbstractSessionManager asm = (AbstractSessionManager)sm;
>   asm.setHttpOnly(true);
>   asm.setSecureCookies(true);
> }
> {code}
> But when the contexts for /logs and /static are created, new contexts are created and the session handler is assigned as null. The secure attributes need to be set here as well.
> Was this left out intentionally? Please give your thoughts.
> Background: trying to add a login action for HTTP pages. After this, when a security test tool is used, it reports errors for these 2 URLs (/logs and /static).

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-11677) Missing secure session attributes for log and static contexts
[ https://issues.apache.org/jira/browse/HADOOP-11677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel reassigned HADOOP-11677:
------------------------------
    Assignee: nijel

> Missing secure session attributes for log and static contexts
> --------------------------------------------------------------
>
> Key: HADOOP-11677
> URL: https://issues.apache.org/jira/browse/HADOOP-11677
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: nijel
> Assignee: nijel
>
> In HttpServer2.java the secure attributes are set for the default context:
> {code}
> SessionManager sm = webAppContext.getSessionHandler().getSessionManager();
> if (sm instanceof AbstractSessionManager) {
>   AbstractSessionManager asm = (AbstractSessionManager)sm;
>   asm.setHttpOnly(true);
>   asm.setSecureCookies(true);
> }
> {code}
> But when the contexts for /logs and /static are created, new contexts are created and the session handler is assigned as null. The secure attributes need to be set here as well.
> Was this left out intentionally? Please give your thoughts.
> Background: trying to add a login action for HTTP pages. After this, when a security test tool is used, it reports errors for these 2 URLs (/logs and /static).

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HADOOP-11669) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
[ https://issues.apache.org/jira/browse/HADOOP-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel moved HDFS-7883 to HADOOP-11669:
--------------------------------------
    Key: HADOOP-11669  (was: HDFS-7883)
    Project: Hadoop Common  (was: Hadoop HDFS)

> Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
> ------------------------------------------------------------------------------
>
> Key: HADOOP-11669
> URL: https://issues.apache.org/jira/browse/HADOOP-11669
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: nijel
> Assignee: nijel
> Priority: Minor
> Attachments: 0001-HDFS-7883.patch
>
> These 2 configurations in HttpServer2.java are Hadoop configurations:
> {code}
> static final String FILTER_INITIALIZER_PROPERTY = "hadoop.http.filter.initializers";
> public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
> {code}
> It is better to keep them inside CommonConfigurationKeys.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11669) Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
[ https://issues.apache.org/jira/browse/HADOOP-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

nijel updated HADOOP-11669:
---------------------------
    Attachment: 001-HADOOP-11669.patch

Missed the test and impacted files in the patch. Updated the patch.

> Move the Hadoop constants in HTTPServer.java to CommonConfigurationKeys class
> ------------------------------------------------------------------------------
>
> Key: HADOOP-11669
> URL: https://issues.apache.org/jira/browse/HADOOP-11669
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: nijel
> Assignee: nijel
> Priority: Minor
> Attachments: 0001-HDFS-7883.patch, 001-HADOOP-11669.patch
>
> These 2 configurations in HttpServer2.java are Hadoop configurations:
> {code}
> static final String FILTER_INITIALIZER_PROPERTY = "hadoop.http.filter.initializers";
> public static final String HTTP_MAX_THREADS = "hadoop.http.max.threads";
> {code}
> It is better to keep them inside CommonConfigurationKeys.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-10136) Custom JMX server to avoid random port usage by default JMX Server
[ https://issues.apache.org/jira/browse/HADOOP-10136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881732#comment-13881732 ]

nijel commented on HADOOP-10136:
--------------------------------
bq. Unfortunately, some Hadoop daemons are using ports in the ephemeral range as if they were fixed ports.
In this case we can change the default port, right? In the case of JMX, even if we want to configure the extra port, it is not possible. So I think it is better to keep this JMX server as an option.

> Custom JMX server to avoid random port usage by default JMX Server
> ------------------------------------------------------------------
>
> Key: HADOOP-10136
> URL: https://issues.apache.org/jira/browse/HADOOP-10136
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Vinay
> Assignee: Vinay
> Attachments: HADOOP-10136.patch
>
> If any Java process wants to enable the JMX MBean server, the following VM arguments need to be passed:
> {code}
> -Dcom.sun.management.jmxremote
> -Dcom.sun.management.jmxremote.port=14005
> -Dcom.sun.management.jmxremote.local.only=false
> -Dcom.sun.management.jmxremote.authenticate=false
> -Dcom.sun.management.jmxremote.ssl=false
> {code}
> But the issue here is that this will use one more random port other than 14005 while starting JMX. This can be a problem if that random port is already used by some other service. So support a custom JMX server through which the random port can be avoided.

--
This message was sent by Atlassian JIRA (v6.1.5#6160)
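The "one more random port" comes from the RMI exporter: the fixed port only pins the RMI registry, while the RMI server objects are exported on an ephemeral port. A hedged, JDK-only sketch of the kind of custom server the issue proposes, where a JMXServiceURL pins both ports; the class name and port numbers are illustrative assumptions, and the attached patch may differ:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class FixedPortJmxServer {
    // Start a JMX connector that fixes BOTH ports: registryPort for the
    // RMI registry lookup and serverPort for the exported RMI server
    // objects, so no ephemeral port is picked at random.
    public static JMXConnectorServer start(int registryPort, int serverPort)
            throws Exception {
        LocateRegistry.createRegistry(registryPort);
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi://localhost:" + serverPort
            + "/jndi/rmi://localhost:" + registryPort + "/jmxrmi");
        return JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
    }

    // Start and stop the connector once, reporting whether it came up.
    public static boolean demo() throws Exception {
        JMXConnectorServer server = start(39850, 39851);
        server.start();
        boolean active = server.isActive();
        server.stop();
        return active;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("JMX connector active: " + demo());
    }
}
```

Note this sketch omits authentication and SSL entirely; a real deployment would pass an environment map to newJMXConnectorServer instead of null.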
[jira] [Commented] (HADOOP-8476) Remove duplicate VM arguments for hadoop daemon
[ https://issues.apache.org/jira/browse/HADOOP-8476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13881733#comment-13881733 ]

nijel commented on HADOOP-8476:
-------------------------------
I also went wrong with duplicate values for -Xmx! Vinay, can you update the patch?

> Remove duplicate VM arguments for hadoop daemon
> -----------------------------------------------
>
> Key: HADOOP-8476
> URL: https://issues.apache.org/jira/browse/HADOOP-8476
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Affects Versions: 2.0.0-alpha, 3.0.0
> Reporter: Vinay
> Assignee: Vinay
> Priority: Minor
> Attachments: HADOOP-8476.patch, HADOOP-8476.patch
>
> Remove duplicate VM arguments passed to the hadoop daemon. The following VM arguments are currently duplicated:
> {noformat}
> -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Xmx128m -Xmx128m -Dhadoop.log.dir=/home/nn2/logs -Dhadoop.log.file=hadoop-root-namenode-HOST-xx-xx-xx-105.log -Dhadoop.home.dir=/home/nn2/ -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS
> {noformat}
> In the above VM arguments, -Xmx1000m will be overridden by -Xmx128m. BTW, the other duplicate arguments won't harm.

--
This message was sent by Atlassian JIRA (v6.1.5#6160)