[jira] [Commented] (HADOOP-8551) fs -mkdir creates parent directories without the -p option
[ https://issues.apache.org/jira/browse/HADOOP-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423034#comment-13423034 ]

Hudson commented on HADOOP-8551:
--------------------------------

Integrated in Hadoop-Hdfs-0.23-Build #325 (See [https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/325/])
svn merge -c 1365588 FIXES: HADOOP-8551. fs -mkdir creates parent directories without the -p option (John George via bobby) (Revision 1365590)

Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365590
Files :
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml

fs -mkdir creates parent directories without the -p option
----------------------------------------------------------

Key: HADOOP-8551
URL: https://issues.apache.org/jira/browse/HADOOP-8551
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: John George
Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
Attachments: HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch

hadoop fs -mkdir foo/bar will succeed even when the parent directory foo is not present. Missing parents should only be created when -p is given; without it the command should fail.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
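The -p semantics this fix enforces can be illustrated with java.nio.file. This is a stand-alone analogy, not the actual Mkdir.java shell command: Files.createDirectory refuses to invent missing parents (like fs -mkdir after the fix), while Files.createDirectories creates them on demand (like fs -mkdir -p).

```java
import java.io.IOException;
import java.nio.file.*;

public class MkdirSemantics {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("mkdir-demo");
        Path nested = base.resolve("foo").resolve("bar");

        // Without -p semantics: creating foo/bar must fail when foo is absent.
        boolean failed = false;
        try {
            Files.createDirectory(nested);   // analogous to fs -mkdir
        } catch (NoSuchFileException e) {
            failed = true;                   // parent foo does not exist
        }
        System.out.println("plain mkdir failed: " + failed);

        // With -p semantics: missing parents are created on demand.
        Files.createDirectories(nested);     // analogous to fs -mkdir -p
        System.out.println("mkdir -p created: " + Files.isDirectory(nested));
    }
}
```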
[jira] [Commented] (HADOOP-8624) ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
[ https://issues.apache.org/jira/browse/HADOOP-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13422937#comment-13422937 ]

Aaron T. Myers commented on HADOOP-8624:
----------------------------------------

+1, the patch looks good to me. I'm confident that the test failure is unrelated. The javadoc warning is a little curious, though. Any explanation for that?

ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
-----------------------------------------------------------------

Key: HADOOP-8624
URL: https://issues.apache.org/jira/browse/HADOOP-8624
Project: Hadoop Common
Issue Type: Improvement
Components: ipc
Affects Versions: 3.0.0, 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
Attachments: hadoop-8624.txt

Since all RPC requests/responses are now ProtoBufs, it's easy to add TRACE-level logging output for ProtobufRpcEngine that shows the full content of all calls. This is very handy when writing/debugging unit tests, but might also be useful to enable at runtime for short periods of time to debug certain production issues.
[jira] [Commented] (HADOOP-8625) Use GzipCodec to decompress data in ResetableGzipOutputStream test
[ https://issues.apache.org/jira/browse/HADOOP-8625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13422905#comment-13422905 ]

Mike Percy commented on HADOOP-8625:
------------------------------------

Please see the following JIRA issue comment for context: https://issues.apache.org/jira/browse/HADOOP-8522?focusedCommentId=13398854&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13398854

Use GzipCodec to decompress data in ResetableGzipOutputStream test
------------------------------------------------------------------

Key: HADOOP-8625
URL: https://issues.apache.org/jira/browse/HADOOP-8625
Project: Hadoop Common
Issue Type: Bug
Reporter: Mike Percy
Fix For: 2.0.1-alpha

Use GzipCodec to decompress data in the ResetableGzipOutputStream test.
[jira] [Commented] (HADOOP-7750) DataNode: Cannot start secure cluster without privileged resources | tags/release-0.20.205.0-rc2
[ https://issues.apache.org/jira/browse/HADOOP-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13422879#comment-13422879 ]

fujie commented on HADOOP-7750:
-------------------------------

I found this problem in the 1.0.3 release.

DataNode: Cannot start secure cluster without privileged resources | tags/release-0.20.205.0-rc2
------------------------------------------------------------------------------------------------

Key: HADOOP-7750
URL: https://issues.apache.org/jira/browse/HADOOP-7750
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 1.0.0
Environment: Linux RHEL5 64bit, Sun Java 1.6.0r14, http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.205.0-rc2/branch-0.20-security-205/src/hdfs/org/apache/hadoop/hdfs/server/datanode/DataNode.java
Reporter: Trevor Powell
Labels: hdfs

This tag compiles just fine. But after configuring it, the datanode fails on startup with the below error:

STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = hd3w94m7/10.152.94.111
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.205.1
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.205.0-rc2 -r 1179942; compiled by 'tpowell1' on Wed Oct 12 11:14:46 PDT 2011
2011-10-14 15:24:56,028 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2011-10-14 15:24:56,043 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2011-10-14 15:24:56,044 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2011-10-14 15:24:56,044 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2011-10-14 15:24:56,192 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2011-10-14 15:24:56,421 INFO org.apache.hadoop.security.UserGroupInformation: Asked the TGT renewer thread to terminate
2011-10-14 15:24:57,241 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/hd3w94m7@XXX using keytab file /home/tpowell1/hadoop.tags.release-0.20.205.0-rc2/conf/hdfs.keytab
2011-10-14 15:24:57,242 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.RuntimeException: Cannot start secure cluster without privileged resources.
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:306)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:281)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1545)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1484)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1502)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1628)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1645)
2011-10-14 15:24:57,243 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
SHUTDOWN_MSG: Shutting down DataNode at hd3w94m7.XXX/10.152.94.111

Checking the DataNode.java code, it is started with a null SecureResources:

public static void main(String args[]) {
    secureMain(args, null);
}

This null resource gets passed all the way down to startDataNode(), where there is a null check which in turn throws the error we see:

void startDataNode(Configuration conf,
                   AbstractList<File> dataDirs,
                   SecureResources resources) throws IOException {
    if (UserGroupInformation.isSecurityEnabled() && resources == null)
        throw new RuntimeException("Cannot start secure cluster without "
            + "privileged resources.");
[jira] [Updated] (HADOOP-8621) FileUtil.symLink fails if spaces in path
[ https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Fuller updated HADOOP-8621:
----------------------------------

Attachment: hadoop-8621.txt

Adding patch with unit test and fix.

FileUtil.symLink fails if spaces in path
----------------------------------------

Key: HADOOP-8621
URL: https://issues.apache.org/jira/browse/HADOOP-8621
Project: Hadoop Common
Issue Type: Bug
Reporter: Robert Fuller
Priority: Minor
Attachments: hadoop-8621.txt, patch.txt

The 'ln -s' command fails in the current implementation if there is a space in the path for the target or linkname. A small change resolves the issue:

String cmd = "ln -s " + target + " " + linkname;
// Process p = Runtime.getRuntime().exec(cmd, null);  // broken
Process p = Runtime.getRuntime().exec(new String[]{"ln", "-s", target, linkname}, null);
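Why the single-string form breaks can be shown without actually running ln: the one-argument Runtime.exec tokenizes the command on whitespace, so a path containing a space is split into several argv entries. The paths below are made up for the demonstration.

```java
public class SymlinkArgs {
    public static void main(String[] args) {
        // Hypothetical paths containing spaces.
        String target = "/data/my files/target";
        String linkname = "/data/my link";

        // Broken approach: a single command string gets split on whitespace,
        // so each space inside a path produces an extra argument.
        String cmd = "ln -s " + target + " " + linkname;
        String[] naive = cmd.split("\\s+");
        System.out.println("naive argv length: " + naive.length);

        // Fixed approach: each argument is its own array element,
        // so spaces inside paths are preserved intact.
        String[] argv = new String[] {"ln", "-s", target, linkname};
        System.out.println("array argv length: " + argv.length);
    }
}
```

With the array form, Runtime.exec passes the four elements straight through as argv, which is exactly what the patch does.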
[jira] [Commented] (HADOOP-8621) FileUtil.symLink fails if spaces in path
[ https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423037#comment-13423037 ]

Hudson commented on HADOOP-8621:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1116 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1116/])
Amend previous commit of HDFS-3626: accidentally included a hunk from HADOOP-8621 in svn commit. Reverting that hunk (Revision 1365817)

Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365817
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

FileUtil.symLink fails if spaces in path
----------------------------------------

Key: HADOOP-8621
URL: https://issues.apache.org/jira/browse/HADOOP-8621
Project: Hadoop Common
Issue Type: Bug
Reporter: Robert Fuller
Priority: Minor
Attachments: hadoop-8621.txt, patch.txt

The 'ln -s' command fails in the current implementation if there is a space in the path for the target or linkname. A small change resolves the issue:

String cmd = "ln -s " + target + " " + linkname;
// Process p = Runtime.getRuntime().exec(cmd, null);  // broken
Process p = Runtime.getRuntime().exec(new String[]{"ln", "-s", target, linkname}, null);
[jira] [Commented] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used
[ https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13422904#comment-13422904 ]

Mike Percy commented on HADOOP-8522:
------------------------------------

Sincere apologies, but I just don't see myself finding the time to improve this unit test very soon. I have a lot going on over in Flume land. Would it be alright if we file another JIRA to improve the unit test and move forward with committing this patch? Just in case that is OK, I have filed HADOOP-8625 for that.

ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used
--------------------------------------------------------------------------------------------

Key: HADOOP-8522
URL: https://issues.apache.org/jira/browse/HADOOP-8522
Project: Hadoop Common
Issue Type: Bug
Components: io
Affects Versions: 1.0.3, 2.0.0-alpha
Reporter: Mike Percy
Assignee: Mike Percy
Attachments: HADOOP-8522-2a.patch

ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used. The issue is that finish() flushes the compressor buffer and writes the gzip CRC32 + data length trailer. After that, resetState() does not repeat the gzip header, but simply starts writing more deflate-compressed data. The resultant files are not readable by the Linux gunzip tool. ResetableGzipOutputStream should write valid multi-member gzip files. The gzip format is specified in [RFC 1952|https://tools.ietf.org/html/rfc1952].
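What a valid multi-member gzip stream looks like can be sketched with the JDK's own java.util.zip classes (this is an illustration of the RFC 1952 format the issue describes, not Hadoop's ResetableGzipOutputStream): each member must carry its own header and CRC32/length trailer, and a conforming reader decompresses the concatenation of all members, just as gunzip does.

```java
import java.io.*;
import java.util.zip.*;

public class MultiMemberGzip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();

        // Write two complete gzip members back to back. Constructing a
        // GZIPOutputStream writes a fresh header; finish() flushes the
        // deflate stream and writes the CRC32/length trailer (RFC 1952).
        for (String part : new String[] {"first member ", "second member"}) {
            GZIPOutputStream gz = new GZIPOutputStream(bytes);
            gz.write(part.getBytes("UTF-8"));
            gz.finish();
        }

        // GZIPInputStream reads concatenated members, yielding the
        // concatenation of both payloads.
        GZIPInputStream in = new GZIPInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);

        System.out.println(out.toString("UTF-8"));
    }
}
```

The bug in the issue is precisely the missing "fresh header per member" step: resetState() resumed raw deflate output after a trailer, which no gzip reader accepts.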
[jira] [Created] (HADOOP-8625) Use GzipCodec to decompress data in ResetableGzipOutputStream test
Mike Percy created HADOOP-8625:
-------------------------------

Summary: Use GzipCodec to decompress data in ResetableGzipOutputStream test
Key: HADOOP-8625
URL: https://issues.apache.org/jira/browse/HADOOP-8625
Project: Hadoop Common
Issue Type: Bug
Reporter: Mike Percy
Fix For: 2.0.1-alpha

Use GzipCodec to decompress data in the ResetableGzipOutputStream test.
[jira] [Commented] (HADOOP-8551) fs -mkdir creates parent directories without the -p option
[ https://issues.apache.org/jira/browse/HADOOP-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423039#comment-13423039 ]

Hudson commented on HADOOP-8551:
--------------------------------

Integrated in Hadoop-Hdfs-trunk #1116 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1116/])
HADOOP-8551. fs -mkdir creates parent directories without the -p option (John George via bobby) (Revision 1365588)

Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365588
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml

fs -mkdir creates parent directories without the -p option
----------------------------------------------------------

Key: HADOOP-8551
URL: https://issues.apache.org/jira/browse/HADOOP-8551
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: John George
Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
Attachments: HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch

hadoop fs -mkdir foo/bar will succeed even when the parent directory foo is not present. Missing parents should only be created when -p is given; without it the command should fail.
[jira] [Commented] (HADOOP-8623) hadoop jar command should respect HADOOP_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423074#comment-13423074 ]

Daryn Sharp commented on HADOOP-8623:
-------------------------------------

+1 ditto

hadoop jar command should respect HADOOP_OPTS
---------------------------------------------

Key: HADOOP-8623
URL: https://issues.apache.org/jira/browse/HADOOP-8623
Project: Hadoop Common
Issue Type: Bug
Components: scripts
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Steven Willis
Attachments: HADOOP-8623.patch

The jar command of the hadoop script should use any set HADOOP_OPTS and HADOOP_CLIENT_OPTS environment variables, like all the other commands.
[jira] [Commented] (HADOOP-8613) AbstractDelegationTokenIdentifier#getUser() should set token auth type
[ https://issues.apache.org/jira/browse/HADOOP-8613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423076#comment-13423076 ]

Daryn Sharp commented on HADOOP-8613:
-------------------------------------

Test failure is unrelated.

AbstractDelegationTokenIdentifier#getUser() should set token auth type
----------------------------------------------------------------------

Key: HADOOP-8613
URL: https://issues.apache.org/jira/browse/HADOOP-8613
Project: Hadoop Common
Issue Type: Bug
Affects Versions: 1.0.0, 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
Attachments: HADOOP-8613-2.branch-1.patch, HADOOP-8613-2.patch, HADOOP-8613.branch-1.patch, HADOOP-8613.patch

{{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated with a token. The UGI's auth type will either be SIMPLE for non-proxy tokens, or PROXY (effective user) and SIMPLE (real user). Instead of SIMPLE, it needs to be TOKEN.
[jira] [Commented] (HADOOP-8621) FileUtil.symLink fails if spaces in path
[ https://issues.apache.org/jira/browse/HADOOP-8621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423086#comment-13423086 ]

Hudson commented on HADOOP-8621:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1148 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1148/])
Amend previous commit of HDFS-3626: accidentally included a hunk from HADOOP-8621 in svn commit. Reverting that hunk (Revision 1365817)

Result = FAILURE
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365817
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java

FileUtil.symLink fails if spaces in path
----------------------------------------

Key: HADOOP-8621
URL: https://issues.apache.org/jira/browse/HADOOP-8621
Project: Hadoop Common
Issue Type: Bug
Reporter: Robert Fuller
Priority: Minor
Attachments: hadoop-8621.txt, patch.txt

The 'ln -s' command fails in the current implementation if there is a space in the path for the target or linkname. A small change resolves the issue:

String cmd = "ln -s " + target + " " + linkname;
// Process p = Runtime.getRuntime().exec(cmd, null);  // broken
Process p = Runtime.getRuntime().exec(new String[]{"ln", "-s", target, linkname}, null);
[jira] [Commented] (HADOOP-8551) fs -mkdir creates parent directories without the -p option
[ https://issues.apache.org/jira/browse/HADOOP-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423088#comment-13423088 ]

Hudson commented on HADOOP-8551:
--------------------------------

Integrated in Hadoop-Mapreduce-trunk #1148 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1148/])
HADOOP-8551. fs -mkdir creates parent directories without the -p option (John George via bobby) (Revision 1365588)

Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1365588
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Mkdir.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testHDFSConf.xml

fs -mkdir creates parent directories without the -p option
----------------------------------------------------------

Key: HADOOP-8551
URL: https://issues.apache.org/jira/browse/HADOOP-8551
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 0.23.3, 2.1.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: John George
Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
Attachments: HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch, HADOOP-8551.patch

hadoop fs -mkdir foo/bar will succeed even when the parent directory foo is not present. Missing parents should only be created when -p is given; without it the command should fail.
[jira] [Commented] (HADOOP-8599) Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file
[ https://issues.apache.org/jira/browse/HADOOP-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423158#comment-13423158 ]

Mariappan Asokan commented on HADOOP-8599:
------------------------------------------

Can someone take a look at MAPREDUCE-4470 and propose a proper fix, or retract HADOOP-8599 from the trunk? This has been causing the trunk build to fail for more than a week. I think returning 0 splits for an empty input is not a good idea. The original behavior of 1 split with size 0 was good. Can others jump in and comment on this? Thanks.

Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file
----------------------------------------------------------------------------------------------------

Key: HADOOP-8599
URL: https://issues.apache.org/jira/browse/HADOOP-8599
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
Attachments: HADOOP-8859-branch-0.23.patch

When FileSystem.getFileBlockLocations(file, start, len) is called with the start argument equal to the file size, the response is not empty. There is a test, TestGetFileBlockLocations.testGetFileBlockLocations2, which uses randomly generated start and len arguments when calling FileSystem.getFileBlockLocations, and the test fails randomly (when the generated start value equals the file size).
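The bounds check the fix calls for can be sketched in isolation. The helper below is hypothetical (not Hadoop's actual getFileBlockLocations implementation); it just shows the invariant under discussion: a request whose start offset is at or beyond the end of the file should yield no block locations.

```java
public class BlockLocationBounds {
    // Hypothetical sketch: compute the starting offsets of the blocks that
    // overlap the byte range [start, start+len) of a file of fileLen bytes.
    static long[] getBlockOffsets(long fileLen, long blockSize,
                                  long start, long len) {
        if (start >= fileLen || len <= 0) {
            return new long[0];              // nothing to locate at/past EOF
        }
        long end = Math.min(start + len, fileLen);
        int first = (int) (start / blockSize);
        int last = (int) ((end - 1) / blockSize);
        long[] offsets = new long[last - first + 1];
        for (int i = 0; i < offsets.length; i++) {
            offsets[i] = (long) (first + i) * blockSize;
        }
        return offsets;
    }

    public static void main(String[] args) {
        // 1000-byte file, 256-byte blocks: start == fileLen must be empty,
        // while the whole file spans four blocks (0, 256, 512, 768).
        System.out.println("past EOF: "
                + getBlockOffsets(1000, 256, 1000, 100).length);
        System.out.println("whole file: "
                + getBlockOffsets(1000, 256, 0, 1000).length);
    }
}
```

The randomly failing test in the issue hit exactly the first case: a randomly generated start equal to the file size, for which the unfixed code still returned a location.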
[jira] [Commented] (HADOOP-3438) NPE if job tracker started and system property hadoop.log.dir is not set
[ https://issues.apache.org/jira/browse/HADOOP-3438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423183#comment-13423183 ]

Steve Loughran commented on HADOOP-3438:
----------------------------------------

Hey Junping, it's one of those very-low-priority issues that depends on someone hitting it and getting frustrated enough to fix it. If you want to patch it... it has been around a very long time; it's time to fix it.

NPE if job tracker started and system property hadoop.log.dir is not set
------------------------------------------------------------------------

Key: HADOOP-3438
URL: https://issues.apache.org/jira/browse/HADOOP-3438
Project: Hadoop Common
Issue Type: Bug
Components: metrics
Affects Versions: 0.18.0
Environment: amd64 ubuntu, jrockit 1.6
Reporter: Steve Loughran

This is a regression. If the system property hadoop.log.dir is not set, the job tracker NPEs rather than starts up.
[jira] [Commented] (HADOOP-8365) Add flag to disable durable sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423257#comment-13423257 ]

Suresh Srinivas commented on HADOOP-8365:
-----------------------------------------

Look at HDFS-3731. 2.x upgrades do not handle this functionality well. By turning the feature on by default, we are causing unnecessary upgrade issues for people who did not need durable sync.

Add flag to disable durable sync
--------------------------------

Key: HADOOP-8365
URL: https://issues.apache.org/jira/browse/HADOOP-8365
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 1.1.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Blocker
Fix For: 1.1.0
Attachments: hadoop-8365.txt, hadoop-8365.txt

Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation, as that was the behavior before HADOOP-8230. This config flag should default to false, as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.
[jira] [Commented] (HADOOP-8365) Add flag to disable durable sync
[ https://issues.apache.org/jira/browse/HADOOP-8365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423262#comment-13423262 ]

Todd Lipcon commented on HADOOP-8365:
-------------------------------------

I think you're misinterpreting HDFS-3731: the bug there is that data which was in-progress during the upgrade will be lost in the upgraded cluster. But without durable sync, all the blocks being written would have been lost anyway. That is to say, the bug makes durable sync just regress back to the old non-durable behavior. Anyone who is not using the feature would have lost the same amount of data. Or am I the one misinterpreting?

Add flag to disable durable sync
--------------------------------

Key: HADOOP-8365
URL: https://issues.apache.org/jira/browse/HADOOP-8365
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 1.1.0
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Blocker
Fix For: 1.1.0
Attachments: hadoop-8365.txt, hadoop-8365.txt

Per HADOOP-8230 there's a request for a flag to disable the sync code paths that dfs.support.append used to enable. The sync method itself will still be available and have a broken implementation, as that was the behavior before HADOOP-8230. This config flag should default to false, as the primary motivation for HADOOP-8230 is so HBase works out-of-the-box with Hadoop 1.1.
[jira] [Updated] (HADOOP-8623) hadoop jar command should respect HADOOP_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8623:
------------------------------------

Resolution: Fixed
Fix Version/s: 3.0.0
Hadoop Flags: Reviewed
Status: Resolved (was: Patch Available)

Thanks for contributing the patch, Steven. I have added you as a contributor to Hadoop Common; you can now assign jiras to yourself. I committed the patch to trunk and will merge it into 2.x next.

hadoop jar command should respect HADOOP_OPTS
---------------------------------------------

Key: HADOOP-8623
URL: https://issues.apache.org/jira/browse/HADOOP-8623
Project: Hadoop Common
Issue Type: Bug
Components: scripts
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Steven Willis
Fix For: 3.0.0
Attachments: HADOOP-8623.patch

The jar command of the hadoop script should use any set HADOOP_OPTS and HADOOP_CLIENT_OPTS environment variables, like all the other commands.
[jira] [Commented] (HADOOP-8624) ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
[ https://issues.apache.org/jira/browse/HADOOP-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423321#comment-13423321 ]

Todd Lipcon commented on HADOOP-8624:
-------------------------------------

The javadoc output must be unrelated: it says I fixed 2 warnings, but I didn't modify any javadoc in this patch at all. I'll commit this momentarily.

ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
-----------------------------------------------------------------

Key: HADOOP-8624
URL: https://issues.apache.org/jira/browse/HADOOP-8624
Project: Hadoop Common
Issue Type: Improvement
Components: ipc
Affects Versions: 3.0.0, 2.2.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
Attachments: hadoop-8624.txt

Since all RPC requests/responses are now ProtoBufs, it's easy to add TRACE-level logging output for ProtobufRpcEngine that shows the full content of all calls. This is very handy when writing/debugging unit tests, but might also be useful to enable at runtime for short periods of time to debug certain production issues.
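The pattern behind this kind of trace logging is worth spelling out: rendering every RPC's full content is expensive, so the message is only built when the trace level is actually enabled. The sketch below uses java.util.logging as a stand-in (Hadoop itself uses commons-logging, and this is not the ProtobufRpcEngine patch); the call-describing helper is hypothetical.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class TraceRpcLog {
    private static final Logger LOG = Logger.getLogger("rpc.trace");

    // Hypothetical stand-in for rendering a decoded protobuf request.
    static String describeCall(String method, String param) {
        return "call " + method + "(" + param + ")";
    }

    static void handleCall(String method, String param) {
        // Guarding with isLoggable skips building the (potentially large)
        // message string unless trace-level output is actually enabled.
        if (LOG.isLoggable(Level.FINEST)) {
            LOG.finest(describeCall(method, param));
        }
        // ... dispatch the call ...
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.FINEST);   // enable trace-level output
        handleCall("mkdirs", "/foo/bar");
        System.out.println("trace enabled: " + LOG.isLoggable(Level.FINEST));
    }
}
```

The guard is what makes it safe to leave such logging compiled in and switch it on briefly in production, which is exactly the use case the issue describes.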
[jira] [Updated] (HADOOP-8623) hadoop jar command should respect HADOOP_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-8623:
------------------------------------

Priority: Minor (was: Major)
Fix Version/s: 2.1.0-alpha
Assignee: Steven Willis
Issue Type: Improvement (was: Bug)

hadoop jar command should respect HADOOP_OPTS
---------------------------------------------

Key: HADOOP-8623
URL: https://issues.apache.org/jira/browse/HADOOP-8623
Project: Hadoop Common
Issue Type: Improvement
Components: scripts
Affects Versions: 0.23.1, 2.0.0-alpha
Reporter: Steven Willis
Assignee: Steven Willis
Priority: Minor
Fix For: 2.1.0-alpha, 3.0.0
Attachments: HADOOP-8623.patch

The jar command of the hadoop script should use any set HADOOP_OPTS and HADOOP_CLIENT_OPTS environment variables, like all the other commands.
[jira] [Created] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
Jonathan Natkins created HADOOP-8626:
-------------------------------------

Summary: Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
Key: HADOOP-8626
URL: https://issues.apache.org/jira/browse/HADOOP-8626
Project: Hadoop Common
Issue Type: Bug
Reporter: Jonathan Natkins

(&(objectClass=user)(sAMAccountName={0}) should have a trailing parenthesis at the end.
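The typo is a one-character parenthesis imbalance, which a trivial scan exposes. The checker below is an illustration written for this digest, not Hadoop or LDAP code; it counts how many closing parentheses the filter string is missing.

```java
public class FilterParens {
    // Returns the nesting depth left over after scanning the filter:
    // 0 means balanced; a positive value means that many ')' are missing.
    static int unclosed(String filter) {
        int depth = 0;
        for (char c : filter.toCharArray()) {
            if (c == '(') depth++;
            else if (c == ')') depth--;
        }
        return depth;
    }

    public static void main(String[] args) {
        // The broken default from core-default.xml, and the corrected value.
        String broken = "(&(objectClass=user)(sAMAccountName={0})";
        String fixed  = "(&(objectClass=user)(sAMAccountName={0}))";
        System.out.println("broken missing: " + unclosed(broken));
        System.out.println("fixed missing: " + unclosed(fixed));
    }
}
```

An LDAP server rejects the unbalanced filter outright, which is why the bad default breaks group mapping until the closing parenthesis is added.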
[jira] [Updated] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
[ https://issues.apache.org/jira/browse/HADOOP-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Natkins updated HADOOP-8626:
-------------------------------------

Attachment: HADOOP-8626.patch

Updates core-default.xml with a valid default setting.

Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
---------------------------------------------------------------------------------

Key: HADOOP-8626
URL: https://issues.apache.org/jira/browse/HADOOP-8626
Project: Hadoop Common
Issue Type: Bug
Components: security
Affects Versions: 2.0.0-alpha
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
Attachments: HADOOP-8626.patch

(&(objectClass=user)(sAMAccountName={0}) should have a trailing parenthesis at the end.
[jira] [Updated] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
[ https://issues.apache.org/jira/browse/HADOOP-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HADOOP-8626: --- Component/s: security Target Version/s: 2.2.0-alpha Affects Version/s: 2.0.0-alpha
[jira] [Updated] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
[ https://issues.apache.org/jira/browse/HADOOP-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HADOOP-8626: --- Status: Patch Available (was: Open) Marking PA for Natty. The patch looks good to me. +1 pending Jenkins.
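The reported typo is easy to verify mechanically: the shipped default filter opens three parentheses but closes only two. A minimal sketch of such a balance check (the class name, helper, and the corrected string are illustrative assumptions, not taken from HADOOP-8626.patch):

```java
public class LdapFilterCheck {
    // The shipped default, missing its trailing ')' as reported:
    static final String BROKEN = "(&(objectClass=user)(sAMAccountName={0})";
    // The corrected value (assumed here) with the trailing parenthesis added:
    static final String FIXED  = "(&(objectClass=user)(sAMAccountName={0}))";

    // Returns true iff every '(' has a matching ')' and none closes early.
    static boolean balanced(String filter) {
        int depth = 0;
        for (char c : filter.toCharArray()) {
            if (c == '(') depth++;
            else if (c == ')') depth--;
            if (depth < 0) return false; // ')' before its '('
        }
        return depth == 0;
    }

    public static void main(String[] args) {
        System.out.println(balanced(BROKEN)); // false
        System.out.println(balanced(FIXED));  // true
    }
}
```

Note that `{0}` is a placeholder substituted with the user name at query time, so the parenthesis count of the template is what matters.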
[jira] [Commented] (HADOOP-8619) WritableComparator must implement no-arg constructor
[ https://issues.apache.org/jira/browse/HADOOP-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423527#comment-13423527 ] Brandon Li commented on HADOOP-8619: Thanks for finding the issue. The patch should fix the problem. In the Hadoop core components, instead of Java serialization, the Writable interface is widely used for in-memory (or on-disk) serialization, and Protobuf is used for remote communication. WritableComparator must implement no-arg constructor Key: HADOOP-8619 URL: https://issues.apache.org/jira/browse/HADOOP-8619 Project: Hadoop Common Issue Type: Improvement Components: io Affects Versions: 3.0.0 Reporter: Radim Kolar Fix For: 0.23.0, 2.0.0-alpha, 3.0.0 Attachments: writable-comparator.txt For the reasons listed here: http://findbugs.sourceforge.net/bugDescriptions.html#SE_COMPARATOR_SHOULD_BE_SERIALIZABLE comparators should be serializable. To make deserialization work, all superclasses must have a no-arg constructor: http://findbugs.sourceforge.net/bugDescriptions.html#SE_NO_SUITABLE_CONSTRUCTOR Simply add a no-arg constructor to WritableComparator.
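The FindBugs rationale above can be illustrated with a small sketch (the classes below are hypothetical stand-ins, not the actual patch): Java deserialization constructs a `Serializable` object by invoking the no-arg constructor of its first non-serializable superclass, so a serializable comparator extending a base like `WritableComparator` only round-trips if that base exposes one.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Comparator;

// Hypothetical stand-in for WritableComparator: a non-serializable base class.
// Without the no-arg constructor below, deserializing any Serializable
// subclass fails (FindBugs SE_NO_SUITABLE_CONSTRUCTOR).
class BaseComparator {
    public BaseComparator() { } // the constructor the patch effectively adds
}

public class LengthComparator extends BaseComparator
        implements Comparator<String>, Serializable {
    private static final long serialVersionUID = 1L;

    @Override
    public int compare(String a, String b) {
        return Integer.compare(a.length(), b.length());
    }

    // Serialize the comparator and read it back.
    static LengthComparator roundTrip(LengthComparator c) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(c);
        oos.flush();
        ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (LengthComparator) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        LengthComparator restored = roundTrip(new LengthComparator());
        System.out.println(restored.compare("a", "bb") < 0); // true
    }
}
```

Removing `BaseComparator`'s no-arg constructor (e.g. by giving it only an int-arg one) makes `readObject` throw `InvalidClassException`, which is the failure mode the patch guards against.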
[jira] [Commented] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user
[ https://issues.apache.org/jira/browse/HADOOP-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13423615#comment-13423615 ] Hadoop QA commented on HADOOP-8626: --- -1 overall. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12538085/HADOOP-8626.patch against trunk revision . +1 @author. The patch does not contain any @author tags. -1 tests included. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. +1 javac. The applied patch does not increase the total number of javac compiler warnings. +1 javadoc. The javadoc tool did not generate any warning messages. +1 eclipse:eclipse. The patch built with eclipse:eclipse. +1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings. +1 release audit. The applied patch does not increase the total number of release audit warnings. -1 core tests. The patch failed these unit tests in hadoop-common-project/hadoop-common: org.apache.hadoop.ha.TestZKFailoverController +1 contrib tests. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/1222//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/1222//console This message is automatically generated.