[jira] [Assigned] (HADOOP-11133) failed to read correct content of keystore password file in hadoop-kms
[ https://issues.apache.org/jira/browse/HADOOP-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu reassigned HADOOP-11133: --- Assignee: Yi Liu > failed to read correct content of keystore password file in hadoop-kms > -- > > Key: HADOOP-11133 > URL: https://issues.apache.org/jira/browse/HADOOP-11133 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.0.0, 2.6.0 >Reporter: zhubin >Assignee: Yi Liu > > When setting up the KMS service to enable the CFS feature, I created the keystore password > file and entered the password content (e.g. '123456'). But the KMS service failed to start > with the error below: > java.io.IOException: Keystore was tampered with, or password was incorrect > Debugging showed the failure was caused by invalid password content: the > code read extra invisible characters (such as a trailing end-of-line character). The right > behavior is to filter out the invisible characters while reading the password > file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11133) failed to read correct content of keystore password file in hadoop-kms
[ https://issues.apache.org/jira/browse/HADOOP-11133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhubin updated HADOOP-11133: Description: When setting up the KMS service to enable the CFS feature, I created the keystore password file and entered the password content (e.g. '123456'). But the KMS service failed to start with the error below: java.io.IOException: Keystore was tampered with, or password was incorrect Debugging showed the failure was caused by invalid password content: the code read extra invisible characters (such as a trailing end-of-line character). The right behavior is to filter out the invisible characters while reading the password file. was: When setting up the KMS service to enable the CFS feature, I created the keystore password file and entered the password content (e.g. '123456'). But the KMS service failed to start with the error below: java.io.IOException: Keystore was tampered with, or password was incorrect Debugging showed the failure was caused by invalid password content: the code read extra invisible characters (such as a trailing end-of-line character). The right behavior is to filter out the invisible characters while reading password file. > failed to read correct content of keystore password file in hadoop-kms > -- > > Key: HADOOP-11133 > URL: https://issues.apache.org/jira/browse/HADOOP-11133 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 3.0.0, 2.6.0 >Reporter: zhubin > > When setting up the KMS service to enable the CFS feature, I created the keystore password > file and entered the password content (e.g. '123456'). But the KMS service failed to start > with the error below: > java.io.IOException: Keystore was tampered with, or password was incorrect > Debugging showed the failure was caused by invalid password content: the > code read extra invisible characters (such as a trailing end-of-line character). The right > behavior is to filter out the invisible characters while reading the password > file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11133) failed to read correct content of keystore password file in hadoop-kms
zhubin created HADOOP-11133: --- Summary: failed to read correct content of keystore password file in hadoop-kms Key: HADOOP-11133 URL: https://issues.apache.org/jira/browse/HADOOP-11133 Project: Hadoop Common Issue Type: Bug Components: security Affects Versions: 3.0.0, 2.6.0 Reporter: zhubin When setting up the KMS service to enable the CFS feature, I created the keystore password file and entered the password content (e.g. '123456'). But the KMS service failed to start with the error below: java.io.IOException: Keystore was tampered with, or password was incorrect Debugging showed the failure was caused by invalid password content: the code read extra invisible characters (such as a trailing end-of-line character). The right behavior is to filter out the invisible characters while reading the password file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
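The fix zhubin describes can be sketched as follows. This is an illustrative standalone helper, not the actual KMS code; the class and method names are assumed, and it assumes a small UTF-8 password file:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class KeystorePasswordReader {
    /**
     * Read a keystore password file, stripping the trailing newline and any
     * other leading/trailing whitespace that editors commonly append.
     */
    public static char[] readPassword(Path passwordFile) throws IOException {
        String raw = new String(Files.readAllBytes(passwordFile), StandardCharsets.UTF_8);
        // trim() removes the invisible characters (\n, \r, spaces) that would
        // otherwise make the keystore password comparison fail.
        return raw.trim().toCharArray();
    }
}
```

With a file containing "123456\n", readPassword returns exactly the six characters "123456".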
[jira] [Created] (HADOOP-11132) checkHadoopHome still uses HADOOP_HOME
Allen Wittenauer created HADOOP-11132: - Summary: checkHadoopHome still uses HADOOP_HOME Key: HADOOP-11132 URL: https://issues.apache.org/jira/browse/HADOOP-11132 Project: Hadoop Common Issue Type: Bug Reporter: Allen Wittenauer It should be using HADOOP_PREFIX. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11131) getUsersForNetgroupCommand doesn't work for OS X
Allen Wittenauer created HADOOP-11131: - Summary: getUsersForNetgroupCommand doesn't work for OS X Key: HADOOP-11131 URL: https://issues.apache.org/jira/browse/HADOOP-11131 Project: Hadoop Common Issue Type: Bug Reporter: Allen Wittenauer Apple doesn't ship getent, which this command assumes. We should use dscl instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11130) NFS updateMaps OS check is reversed
[ https://issues.apache.org/jira/browse/HADOOP-11130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147337#comment-14147337 ] Allen Wittenauer commented on HADOOP-11130: --- The code looks like this: {code} /** Shell commands to get users and groups */ static final String LINUX_GET_ALL_USERS_CMD = "getent passwd | cut -d: -f1,3"; static final String LINUX_GET_ALL_GROUPS_CMD = "getent group | cut -d: -f1,3"; static final String MAC_GET_ALL_USERS_CMD = "dscl . -list /Users UniqueID"; static final String MAC_GET_ALL_GROUPS_CMD = "dscl . -list /Groups PrimaryGroupID"; ... if (OS.startsWith("Linux")) { updateMapInternal(uMap, "user", LINUX_GET_ALL_USERS_CMD, ":", staticMapping.uidMapping); updateMapInternal(gMap, "group", LINUX_GET_ALL_GROUPS_CMD, ":", staticMapping.gidMapping); } else { // Mac updateMapInternal(uMap, "user", MAC_GET_ALL_USERS_CMD, "\\s+", staticMapping.uidMapping); updateMapInternal(gMap, "group", MAC_GET_ALL_GROUPS_CMD, "\\s+", staticMapping.gidMapping); } {code} dscl is *only* supported on OS X. getent is supported on Linux, Solaris, FreeBSD, ... Ideally, 'LINUX_GET_ALL_USERS_CMD' would be renamed to something less Linux-specific, and we'd check whether the OS is Mac, etc., to be much more compatible with other OSes. > NFS updateMaps OS check is reversed > --- > > Key: HADOOP-11130 > URL: https://issues.apache.org/jira/browse/HADOOP-11130 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > > getent is fairly standard, dscl is not. Yet the code logic prefers dscl for > non-Linux platforms. This code should check for OS X and use dscl and, if not, then > use getent. See comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
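The corrected check can be sketched like this; a minimal illustration only (the class and method names are hypothetical, not the actual NFS code): test explicitly for OS X, where dscl is available, and fall back to getent everywhere else, since getent is the portable tool.

```java
public class UpdateMapsOsCheck {
    // Commands taken from the snippet above.
    static final String GETENT_ALL_USERS_CMD = "getent passwd | cut -d: -f1,3";
    static final String DSCL_ALL_USERS_CMD = "dscl . -list /Users UniqueID";

    /** Choose the user-listing command for the given os.name value. */
    public static String allUsersCommand(String osName) {
        if (osName.startsWith("Mac")) {
            // dscl is *only* supported on OS X.
            return DSCL_ALL_USERS_CMD;
        }
        // getent works on Linux, Solaris, FreeBSD, ...
        return GETENT_ALL_USERS_CMD;
    }
}
```

The point of the inversion is that the special case is the Mac, not "everything that isn't Linux".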
[jira] [Created] (HADOOP-11130) NFS updateMaps OS check is reversed
Allen Wittenauer created HADOOP-11130: - Summary: NFS updateMaps OS check is reversed Key: HADOOP-11130 URL: https://issues.apache.org/jira/browse/HADOOP-11130 Project: Hadoop Common Issue Type: Bug Reporter: Allen Wittenauer getent is fairly standard, dscl is not. Yet the code logic prefers dscl for non-Linux platforms. This code should check for OS X and use dscl and, if not, then use getent. See comments. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
[ https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147318#comment-14147318 ] Allen Wittenauer commented on HADOOP-11127: --- At least it'd be my own pieces instead of other people's for once. > Improve versioning and compatibility support in native library for downstream > hadoop-common users. > -- > > Key: HADOOP-11127 > URL: https://issues.apache.org/jira/browse/HADOOP-11127 > Project: Hadoop Common > Issue Type: Bug > Components: native >Reporter: Chris Nauroth > > There is no compatibility policy enforced on the JNI function signatures > implemented in the native library. This library typically is deployed to all > nodes in a cluster, built from a specific source code version. However, > downstream applications that want to run in that cluster might choose to > bundle a hadoop-common jar at a different version. Since there is no > compatibility policy, this can cause link errors at runtime when the native > function signatures expected by hadoop-common.jar do not exist in > libhadoop.so/hadoop.dll. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147315#comment-14147315 ] Allen Wittenauer commented on HADOOP-7984: -- It's probably worth noting that if this is to go into branch-2, you'll need to make two separate patches anyway since branch-2 still has the incredibly crappy shell code. So an incompatible change for trunk is totally ok. :) > Add hadoop command verbose and debug options > > > Key: HADOOP-7984 > URL: https://issues.apache.org/jira/browse/HADOOP-7984 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Eli Collins >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: HADOOP-7984.patch, HADOOP-7984.patch > > > It would be helpful if bin/hadoop had verbose and debug flags. Currently > users need to set an env variable or prefix the command (eg > "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user > friendly. We currently log INFO by default. How about we only log ERROR and > WARN by default, then -verbose triggers INFO and -debug triggers DEBUG? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk
[ https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147303#comment-14147303 ] Yi Liu commented on HADOOP-11125: - {{TestOsSecureRandom#testOsSecureRandomSetConf}} doesn't assert anything and is not necessary, we can simply remove this test. > TestOsSecureRandom sometimes fails in trunk > --- > > Key: HADOOP-11125 > URL: https://issues.apache.org/jira/browse/HADOOP-11125 > Project: Hadoop Common > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console : > {code} > Running org.apache.hadoop.crypto.random.TestOsSecureRandom > Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec > <<< FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom > testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) > Time elapsed: 120.013 sec <<< ERROR! > java.lang.Exception: test timed out after 12 milliseconds > at java.io.FileInputStream.readBytes(Native Method) > at java.io.FileInputStream.read(FileInputStream.java:220) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) > at java.io.BufferedInputStream.read(BufferedInputStream.java:317) > at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264) > at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) > at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) > at java.io.InputStreamReader.read(InputStreamReader.java:167) > at java.io.BufferedReader.fill(BufferedReader.java:136) > at java.io.BufferedReader.read1(BufferedReader.java:187) > at java.io.BufferedReader.read(BufferedReader.java:261) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715) > at org.apache.hadoop.util.Shell.runCommand(Shell.java:524) > at org.apache.hadoop.util.Shell.run(Shell.java:455) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702) > at > org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147288#comment-14147288 ] Akira AJISAKA commented on HADOOP-7984: --- Thanks Allen for the comments. bq. So please drop the HADOOP_MAPRED_LOGLEVEL and YARN_LOGLEVEL bits and just use HADOOP_LOGLEVEL everywhere. I'll drop them in the next patch. bq. I saw that you changed the log level for the security audit log, but I don't think it has anything but INFO coded. Agree, I'll drop the change. bq. Also, one of the stated goals of this JIRA was to change the default from INFO up to WARN. Is this something we still want to do? I just want to add a "--loglevel" option because adding a new option is compatible. I'm thinking it would be better to create a separate jira for changing the default log level (i.e. an incompatible change). > Add hadoop command verbose and debug options > > > Key: HADOOP-7984 > URL: https://issues.apache.org/jira/browse/HADOOP-7984 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Eli Collins >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: HADOOP-7984.patch, HADOOP-7984.patch > > > It would be helpful if bin/hadoop had verbose and debug flags. Currently > users need to set an env variable or prefix the command (eg > "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user > friendly. We currently log INFO by default. How about we only log ERROR and > WARN by default, then -verbose triggers INFO and -debug triggers DEBUG? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
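A "--loglevel" option of the kind discussed could be parsed along these lines; this is a hedged sketch only (the function name and default are assumed, not the actual bin/hadoop code), mapping the flag onto the existing HADOOP_ROOT_LOGGER mechanism:

```shell
# Parse an optional --loglevel flag ahead of the subcommand and emit the
# value HADOOP_ROOT_LOGGER should be set to, defaulting to INFO.
parse_loglevel() {
  loglevel=INFO
  while [ $# -gt 0 ]; do
    case "$1" in
      --loglevel)
        shift
        loglevel="$1"
        shift
        ;;
      *)
        # First non-option argument is the subcommand; stop parsing.
        break
        ;;
    esac
  done
  echo "${loglevel},console"
}
```

For example, `HADOOP_ROOT_LOGGER="$(parse_loglevel --loglevel DEBUG distcp)"` yields `DEBUG,console`, while omitting the flag keeps the current `INFO,console` default.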
[jira] [Commented] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017
[ https://issues.apache.org/jira/browse/HADOOP-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147262#comment-14147262 ] Yi Liu commented on HADOOP-11129: - Should be HADOOP-11122. > Fix findbug issue introduced by HADOOP-11017 > > > Key: HADOOP-11129 > URL: https://issues.apache.org/jira/browse/HADOOP-11129 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Yi Liu >Assignee: Yi Liu > > This JIRA is to fix findbug issue introduced by HADOOP-11017 > {quote} > Inconsistent synchronization of > org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017
[ https://issues.apache.org/jira/browse/HADOOP-11129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yi Liu resolved HADOOP-11129. - Resolution: Duplicate The findbugs issue was already reported; resolving this as a duplicate of HADOOP-11122. > Fix findbug issue introduced by HADOOP-11017 > > > Key: HADOOP-11129 > URL: https://issues.apache.org/jira/browse/HADOOP-11129 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.6.0 >Reporter: Yi Liu >Assignee: Yi Liu > > This JIRA is to fix findbug issue introduced by HADOOP-11017 > {quote} > Inconsistent synchronization of > org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11101) How about inputstream close statement from catch block to finally block in FileContext#copy() ?
[ https://issues.apache.org/jira/browse/HADOOP-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147235#comment-14147235 ] Hadoop QA commented on HADOOP-11101: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12671103/HADOOP-11101_002.patch against trunk revision c86674a. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4805//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4805//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4805//console This message is automatically generated. > How about inputstream close statement from catch block to finally block in > FileContext#copy() ? > --- > > Key: HADOOP-11101 > URL: https://issues.apache.org/jira/browse/HADOOP-11101 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.5.1 >Reporter: skrho >Priority: Minor > Attachments: HADOOP-11101_001.patch, HADOOP-11101_002.patch > > > If an IOException happens, it is caught by the catch block. > But if another exception happens, it is not caught, and the > stream objects are never closed. > try { > in = open(qSrc); > EnumSet<CreateFlag> createFlag = overwrite ? EnumSet.of( > CreateFlag.CREATE, CreateFlag.OVERWRITE) : > EnumSet.of(CreateFlag.CREATE); > out = create(qDst, createFlag); > IOUtils.copyBytes(in, out, conf, true); > } catch (IOException e) { > IOUtils.closeStream(out); > IOUtils.closeStream(in); > throw e; > } -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11129) Fix findbug issue introduced by HADOOP-11017
Yi Liu created HADOOP-11129: --- Summary: Fix findbug issue introduced by HADOOP-11017 Key: HADOOP-11129 URL: https://issues.apache.org/jira/browse/HADOOP-11129 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.6.0 Reporter: Yi Liu Assignee: Yi Liu This JIRA is to fix findbug issue introduced by HADOOP-11017 {quote} Inconsistent synchronization of org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
[ https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147205#comment-14147205 ] Colin Patrick McCabe commented on HADOOP-11127: --- [~aw]: If you break it, you get to keep the pieces. > Improve versioning and compatibility support in native library for downstream > hadoop-common users. > -- > > Key: HADOOP-11127 > URL: https://issues.apache.org/jira/browse/HADOOP-11127 > Project: Hadoop Common > Issue Type: Bug > Components: native >Reporter: Chris Nauroth > > There is no compatibility policy enforced on the JNI function signatures > implemented in the native library. This library typically is deployed to all > nodes in a cluster, built from a specific source code version. However, > downstream applications that want to run in that cluster might choose to > bundle a hadoop-common jar at a different version. Since there is no > compatibility policy, this can cause link errors at runtime when the native > function signatures expected by hadoop-common.jar do not exist in > libhadoop.so/hadoop.dll. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11128) abstracting out the scale tests for FileSystem Contract tests
Juan Yu created HADOOP-11128: Summary: abstracting out the scale tests for FileSystem Contract tests Key: HADOOP-11128 URL: https://issues.apache.org/jira/browse/HADOOP-11128 Project: Hadoop Common Issue Type: Improvement Reporter: Juan Yu Currently we have some scale tests for openstack and s3a. For now we'll just trust HDFS to handle files >5GB and to delete thousands of files in a directory properly. We should abstract out the scale tests so they can be applied to all FileSystems. A few things to consider for scale tests: scale tests rely on the tester having good/stable upload bandwidth and might need large disk space, so they need to be configurable or optional. Scale tests might take a long time to finish, so consider making the test timeout configurable if possible. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11101) How about inputstream close statement from catch block to finally block in FileContext#copy() ?
[ https://issues.apache.org/jira/browse/HADOOP-11101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] skrho updated HADOOP-11101: --- Attachment: HADOOP-11101_002.patch Here is my patch. I removed the exception block ^^ > How about inputstream close statement from catch block to finally block in > FileContext#copy() ? > --- > > Key: HADOOP-11101 > URL: https://issues.apache.org/jira/browse/HADOOP-11101 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.5.1 >Reporter: skrho >Priority: Minor > Attachments: HADOOP-11101_001.patch, HADOOP-11101_002.patch > > > If an IOException happens, it is caught by the catch block. > But if another exception happens, it is not caught, and the > stream objects are never closed. > try { > in = open(qSrc); > EnumSet<CreateFlag> createFlag = overwrite ? EnumSet.of( > CreateFlag.CREATE, CreateFlag.OVERWRITE) : > EnumSet.of(CreateFlag.CREATE); > out = create(qDst, createFlag); > IOUtils.copyBytes(in, out, conf, true); > } catch (IOException e) { > IOUtils.closeStream(out); > IOUtils.closeStream(in); > throw e; > } -- This message was sent by Atlassian JIRA (v6.3.4#6332)
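The change the patch aims for can be sketched like this; an illustrative standalone helper, not the actual FileContext#copy code: moving the closes into a finally block guarantees cleanup for any exception type, not only IOException.

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyWithFinally {
    /** Copy all bytes from in to out, always closing both streams. */
    public static void copy(InputStream in, OutputStream out) throws IOException {
        try {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        } finally {
            // A finally block runs for every Throwable, so the streams cannot
            // leak even when something other than an IOException is thrown.
            closeQuietly(out);
            closeQuietly(in);
        }
    }

    private static void closeQuietly(Closeable c) {
        if (c != null) {
            try {
                c.close();
            } catch (IOException ignored) {
                // best-effort close
            }
        }
    }
}
```

This is the same idea as IOUtils.closeStream in the original snippet, but invoked from finally rather than only from the IOException catch block.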
[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
[ https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147170#comment-14147170 ] Allen Wittenauer commented on HADOOP-11127: --- Should part of this discussion be breaking libhadoop.so apart? It's really gotten way too fat over the years. > Improve versioning and compatibility support in native library for downstream > hadoop-common users. > -- > > Key: HADOOP-11127 > URL: https://issues.apache.org/jira/browse/HADOOP-11127 > Project: Hadoop Common > Issue Type: Bug > Components: native >Reporter: Chris Nauroth > > There is no compatibility policy enforced on the JNI function signatures > implemented in the native library. This library typically is deployed to all > nodes in a cluster, built from a specific source code version. However, > downstream applications that want to run in that cluster might choose to > bundle a hadoop-common jar at a different version. Since there is no > compatibility policy, this can cause link errors at runtime when the native > function signatures expected by hadoop-common.jar do not exist in > libhadoop.so/hadoop.dll. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice
[ https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147168#comment-14147168 ] Allen Wittenauer commented on HADOOP-11120: --- Yup. At least until we can start changing all of the commands to be sane. > hadoop fs -rmr gives wrong advice > - > > Key: HADOOP-11120 > URL: https://issues.apache.org/jira/browse/HADOOP-11120 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > Attachments: Screen Shot 2014-09-24 at 3.02.21 PM.png > > > Typing bin/hadoop fs -rmr /a? > gives the output: > rmr: DEPRECATED: Please use 'rm -r' instead. > Typing bin/hadoop fs rm -r /a? > gives the output: > rm: Unknown command -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-6964) Allow compact property description in xml
[ https://issues.apache.org/jira/browse/HADOOP-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147142#comment-14147142 ] Kengo Seki commented on HADOOP-6964: Thank you [~aw]. Unit test failure also seems unrelated. See [HADOOP-11125]. > Allow compact property description in xml > - > > Key: HADOOP-6964 > URL: https://issues.apache.org/jira/browse/HADOOP-6964 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Reporter: Owen O'Malley > Labels: newbie > Attachments: HADOOP-6964.patch > > > We should allow users to use the more compact form of xml elements. For > example, we could allow: > {noformat} > > {noformat} > The old format would also be supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice
[ https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147123#comment-14147123 ] Chris Douglas commented on HADOOP-11120: This doesn't seem too egregious... I've never been a fan of the dash prefix, but both error messages are correct. The proposed fix is for the deprecation message to include the dash prefix, so a user gets the correct command in one hop instead of two? > hadoop fs -rmr gives wrong advice > - > > Key: HADOOP-11120 > URL: https://issues.apache.org/jira/browse/HADOOP-11120 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > Attachments: Screen Shot 2014-09-24 at 3.02.21 PM.png > > > Typing bin/hadoop fs -rmr /a? > gives the output: > rmr: DEPRECATED: Please use 'rm -r' instead. > Typing bin/hadoop fs rm -r /a? > gives the output: > rm: Unknown command -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147071#comment-14147071 ] Chris Nauroth commented on HADOOP-11064: I filed HADOOP-11127 for the follow-up. I already added anyone who commented on the discussion here as a watcher on the new issue. If you've been silently watching, then you might want to add yourself over there. > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 > method changes > - > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 2.6.0 > > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisifed > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop- 2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
[ https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147059#comment-14147059 ] Chris Nauroth commented on HADOOP-11127: This initially came up during discussion of HADOOP-11064. It would be good for anyone interested to catch up on the full discussion there. This is a summary of where we got in the discussion before switching gears in that jira to focus on a quick interim fix. After further discussion, the following is a summary of potential solutions to this problem: 1. Freeze the libhadoop.so API forever. 2. Library versioning plus maintaining a library on the servers for each supported release. 3. Bundle the .dll or .so file inside a jar somehow so that YARN / Slider can distribute it. #1. Advantages: * ??? #1. Disadvantages: * We're unable to improve libhadoop.so in the future. * There will be puzzling interactions when mixing and matching versions. "New" bugs in libhadoop.so will show up with old hadoop releases, causing confusion in bug trackers. We don't have any way of enforcing C API stability. Jenkins doesn't check for it, most Java programmers don't know how to achieve it. * There is still no ability for applications using new Hadoop versions to make use of old libhadoop.so versions, unless we adopt an even worse compatibility policy that nothing new can be added to libhadoop.so. * Given all of the above, this option seems to be off the table. #2. Advantages: * Simple to implement. * There's already a patch that implements it. * We want libhadoop.so library versioning anyway, even if we later adopt another solution in addition to this #2. Disadvantages: * Admins using Slider / YARN will need to ensure that the appropriate versions of libhadoop are present on the server. #3. Advantages: * "Cleanest" solution, since it allows us to reuse YARN's existing distribution mechanisms. #3. Disadvantages: * There are technical challenges to bundling a library in a jar that we haven't yet tackled. > Improve versioning and compatibility support in native library for downstream > hadoop-common users. > -- > > Key: HADOOP-11127 > URL: https://issues.apache.org/jira/browse/HADOOP-11127 > Project: Hadoop Common > Issue Type: Bug > Components: native >Reporter: Chris Nauroth > > There is no compatibility policy enforced on the JNI function signatures > implemented in the native library. This library typically is deployed to all > nodes in a cluster, built from a specific source code version. However, > downstream applications that want to run in that cluster might choose to > bundle a hadoop-common jar at a different version. Since there is no > compatibility policy, this can cause link errors at runtime when the native > function signatures expected by hadoop-common.jar do not exist in > libhadoop.so/hadoop.dll. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.
Chris Nauroth created HADOOP-11127: -- Summary: Improve versioning and compatibility support in native library for downstream hadoop-common users. Key: HADOOP-11127 URL: https://issues.apache.org/jira/browse/HADOOP-11127 Project: Hadoop Common Issue Type: Bug Components: native Reporter: Chris Nauroth There is no compatibility policy enforced on the JNI function signatures implemented in the native library. This library typically is deployed to all nodes in a cluster, built from a specific source code version. However, downstream applications that want to run in that cluster might choose to bundle a hadoop-common jar at a different version. Since there is no compatibility policy, this can cause link errors at runtime when the native function signatures expected by hadoop-common.jar do not exist in libhadoop.so/hadoop.dll. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11009: -- Resolution: Fixed Fix Version/s: 2.6.0 Status: Resolved (was: Patch Available) > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Fix For: 2.6.0 > > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
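The "preserve access and modification times on copy" behavior described in the issue above can be illustrated with a plain-JDK sketch. This is not the DistCp implementation itself (DistCp works through Hadoop's FileSystem API); it only shows the same idea: capture the source's times before copying, then re-apply them to the destination.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributeView;
import java.nio.file.attribute.BasicFileAttributes;

public class PreserveTimesCopy {
    public static void copyPreservingTimes(Path src, Path dst) throws IOException {
        // Capture atime/mtime before the copy touches anything.
        BasicFileAttributes attrs = Files.readAttributes(src, BasicFileAttributes.class);
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        // Re-apply the captured times to the destination (null leaves
        // creation time unchanged).
        BasicFileAttributeView view =
            Files.getFileAttributeView(dst, BasicFileAttributeView.class);
        view.setTimes(attrs.lastModifiedTime(), attrs.lastAccessTime(), null);
    }
}
```

Without the second step, the destination's times reflect the moment of the copy — which is exactly the behavior the patch set out to make optional.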
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146999#comment-14146999 ] Gary Steelman commented on HADOOP-11009: Awesome. Thanks [~aw]! > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-11064: --- Resolution: Fixed Fix Version/s: 2.6.0 Assignee: Chris Nauroth (was: Colin Patrick McCabe) Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I committed this to trunk and branch-2. Thanks for reviewing the patch, Colin. Thank you to everyone for the discussion. I'll comment back here after I file the follow-up jira, so that everyone is aware. > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 > method changes > - > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Chris Nauroth >Priority: Blocker > Fix For: 2.6.0 > > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
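The {{NativeCrc32}} methods at issue accelerate CRC32/CRC32C checksumming. As context for what those methods compute, here is the CRC32 variant using the JDK's own {{java.util.zip.CRC32}} — a pure-Java illustration, not the Hadoop native code path that the renamed JNI symbols implement.

```java
import java.util.zip.CRC32;

public class Crc32Demo {
    // Compute the standard CRC-32 checksum (the polynomial used by
    // zip/gzip) over a byte array.
    public static long crc32(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }
}
```

For reference, the well-known check value for this CRC variant is {{crc32("123456789") == 0xCBF43926}}; when the native symbols fail to link, it is checksum computations like this that hadoop-common can no longer perform.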
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146995#comment-14146995 ] Allen Wittenauer commented on HADOOP-11009: --- Hooray! +1 lgtm. Will commit to trunk and branch-2. Thanks! > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146986#comment-14146986 ] Hadoop QA commented on HADOOP-11009: {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12671059/HADOOP-11009.5.patch against trunk revision 9fa5a89. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 3 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-tools/hadoop-distcp. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4804//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4804//console This message is automatically generated. > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. 
This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-11064: --- Summary: UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 method changes (was: UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes) > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due to NativeCRC32 > method changes > - > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Colin Patrick McCabe >Priority: Blocker > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice
[ https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146973#comment-14146973 ] Allen Wittenauer commented on HADOOP-11120: --- Oh, you meant with the hadoop fs rm. Yes, I did get the condescending 'Perhaps' message. > hadoop fs -rmr gives wrong advice > - > > Key: HADOOP-11120 > URL: https://issues.apache.org/jira/browse/HADOOP-11120 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > Attachments: Screen Shot 2014-09-24 at 3.02.21 PM.png > > > Typing bin/hadoop fs -rmr /a? > gives the output: > rmr: DEPRECATED: Please use 'rm -r' instead. > Typing bin/hadoop fs rm -r /a? > gives the output: > rm: Unknown command -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146970#comment-14146970 ] Hadoop QA commented on HADOOP-11064: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12671048/HADOOP-11064.006.patch against trunk revision 9fa5a89. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common: org.apache.hadoop.crypto.random.TestOsSecureRandom {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4803//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4803//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4803//console This message is automatically generated. 
> UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 > method changes > -- > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Colin Patrick McCabe >Priority: Blocker > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11009: -- Status: Open (was: Patch Available) > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11009: -- Status: Patch Available (was: Open) > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11120) hadoop fs -rmr gives wrong advice
[ https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11120: -- Attachment: Screen Shot 2014-09-24 at 3.02.21 PM.png > hadoop fs -rmr gives wrong advice > - > > Key: HADOOP-11120 > URL: https://issues.apache.org/jira/browse/HADOOP-11120 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > Attachments: Screen Shot 2014-09-24 at 3.02.21 PM.png > > > Typing bin/hadoop fs -rmr /a? > gives the output: > rmr: DEPRECATED: Please use 'rm -r' instead. > Typing bin/hadoop fs rm -r /a? > gives the output: > rm: Unknown command -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11126) Findbugs link in Jenkins needs to be fixed
Ray Chiang created HADOOP-11126: --- Summary: Findbugs link in Jenkins needs to be fixed Key: HADOOP-11126 URL: https://issues.apache.org/jira/browse/HADOOP-11126 Project: Hadoop Common Issue Type: Bug Reporter: Ray Chiang For YARN-2284, the latest Jenkins notification points to the following URL for the Findbugs report: https://builds.apache.org/job/PreCommit-YARN-Build/5103//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html The real URL I found manually is: https://builds.apache.org/job/PreCommit-YARN-Build/5103/artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html It would be good to get this URL correct for future notifications. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice
[ https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146954#comment-14146954 ] Allen Wittenauer commented on HADOOP-11120: --- bq. I believe you might have got the full error message as below Nope. See this screenshot. (which, as an added bonus, also features HADOOP-9!) We clearly have two code paths in play, which makes this even worse. bq. And to use any commands with fsshell we should prefix them with '-'. ...which is why we should be explicit to users about what we want them to do. Hoping they correctly guess our intentions is terrible design. > hadoop fs -rmr gives wrong advice > - > > Key: HADOOP-11120 > URL: https://issues.apache.org/jira/browse/HADOOP-11120 > Project: Hadoop Common > Issue Type: Bug >Reporter: Allen Wittenauer > Attachments: Screen Shot 2014-09-24 at 3.02.21 PM.png > > > Typing bin/hadoop fs -rmr /a? > gives the output: > rmr: DEPRECATED: Please use 'rm -r' instead. > Typing bin/hadoop fs rm -r /a? > gives the output: > rm: Unknown command -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-10987) Provide an iterator-based listing API for FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146944#comment-14146944 ] Kihwal Lee commented on HADOOP-10987: - The findbugs warning is not caused by this. {panel} Inconsistent synchronization of org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.delegationTokenSequenceNumber; locked 71% of time {panel} The unit test failures are also unrelated and failures were seen in other precommit builds. > Provide an iterator-based listing API for FileSystem > > > Key: HADOOP-10987 > URL: https://issues.apache.org/jira/browse/HADOOP-10987 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee > Attachments: HADOOP-10987.patch, HADOOP-10987.v2.patch, > HADOOP-10987.v3.patch > > > Iterator based listing methods already exist in {{FileContext}} for both > simple listing and listing with locations. However, {{FileSystem}} lacks the > former. From what I understand, it wasn't added to {{FileSystem}} because it > was believed to be phased out soon. Since {{FileSystem}} is very well alive > today and new features are getting added frequently, I propose adding an > iterator based {{listStatus}} method. As for the name of the new method, we > can use the same name used in {{FileContext}} : {{listStatusIterator()}}. > It will be particularly useful when listing giant directories. Without this, > the client has to build up a huge data structure and hold it in memory. We've > seen client JVMs running out of memory because of this. > Once this change is made, we can modify FsShell, etc. in followup jiras. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
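The memory argument in the HADOOP-10987 description above — streaming entries one at a time instead of materializing a giant array — can be illustrated with the JDK's own iterator-style listing API. This is a plain-JDK analogy ({{DirectoryStream}}), not the proposed {{FileSystem.listStatusIterator()}} itself.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class IteratorListing {
    // Walk a directory via an iterator: only one entry is held in
    // memory at a time, so even a giant directory never forces the
    // client to build the full listing as a single data structure
    // (the failure mode the issue describes for array-returning APIs).
    public static int countEntries(Path dir) throws IOException {
        int n = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                n++;   // process each entry, then let it be collected
            }
        }
        return n;
    }
}
```

The proposed {{listStatusIterator()}} would give {{FileSystem}} callers the same incremental shape that {{FileContext}} already offers.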
[jira] [Commented] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk
[ https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146943#comment-14146943 ] Kihwal Lee commented on HADOOP-11125: - Moved from mapreduce to common. > TestOsSecureRandom sometimes fails in trunk > --- > > Key: HADOOP-11125 > URL: https://issues.apache.org/jira/browse/HADOOP-11125 > Project: Hadoop Common > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console : > {code} > Running org.apache.hadoop.crypto.random.TestOsSecureRandom > Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec > <<< FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom > testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) > Time elapsed: 120.013 sec <<< ERROR! > java.lang.Exception: test timed out after 120000 milliseconds > at java.io.FileInputStream.readBytes(Native Method) > at java.io.FileInputStream.read(FileInputStream.java:220) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) > at java.io.BufferedInputStream.read(BufferedInputStream.java:317) > at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264) > at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) > at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) > at java.io.InputStreamReader.read(InputStreamReader.java:167) > at java.io.BufferedReader.fill(BufferedReader.java:136) > at java.io.BufferedReader.read1(BufferedReader.java:187) > at java.io.BufferedReader.read(BufferedReader.java:261) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715) > at org.apache.hadoop.util.Shell.runCommand(Shell.java:524) > at org.apache.hadoop.util.Shell.run(Shell.java:455) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702) > at > org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149) 
> {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Moved] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk
[ https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee moved MAPREDUCE-6089 to HADOOP-11125: Key: HADOOP-11125 (was: MAPREDUCE-6089) Project: Hadoop Common (was: Hadoop Map/Reduce) > TestOsSecureRandom sometimes fails in trunk > --- > > Key: HADOOP-11125 > URL: https://issues.apache.org/jira/browse/HADOOP-11125 > Project: Hadoop Common > Issue Type: Test >Reporter: Ted Yu >Priority: Minor > > From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console : > {code} > Running org.apache.hadoop.crypto.random.TestOsSecureRandom > Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec > <<< FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom > testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) > Time elapsed: 120.013 sec <<< ERROR! > java.lang.Exception: test timed out after 120000 milliseconds > at java.io.FileInputStream.readBytes(Native Method) > at java.io.FileInputStream.read(FileInputStream.java:220) > at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) > at java.io.BufferedInputStream.read(BufferedInputStream.java:317) > at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264) > at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306) > at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158) > at java.io.InputStreamReader.read(InputStreamReader.java:167) > at java.io.BufferedReader.fill(BufferedReader.java:136) > at java.io.BufferedReader.read1(BufferedReader.java:187) > at java.io.BufferedReader.read(BufferedReader.java:261) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715) > at org.apache.hadoop.util.Shell.runCommand(Shell.java:524) > at org.apache.hadoop.util.Shell.run(Shell.java:455) > at > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702) > at > 
org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Steelman updated HADOOP-11009: --- Attachment: HADOOP-11009.5.patch > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch, HADOOP-11009.5.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146935#comment-14146935 ] Gary Steelman commented on HADOOP-11009: Hard-coded values in test cases make me sad. Tests for TestDistCpUtils and TestOptionsParser now pass locally with patch v5. > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HADOOP-11113) Namenode not able to reconnect to KMS after KMS restart
[ https://issues.apache.org/jira/browse/HADOOP-11113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Charles Lamb reassigned HADOOP-11113: - Assignee: Charles Lamb (was: Arun Suresh) > Namenode not able to reconnect to KMS after KMS restart > --- > > Key: HADOOP-11113 > URL: https://issues.apache.org/jira/browse/HADOOP-11113 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Arun Suresh >Assignee: Charles Lamb > > It is observed that if KMS is restarted without the Namenode being restarted, > NN will not be able to reconnect with KMS. > It seems that the KMS auth cookie goes stale and it does not get flushed, so > the KMSClient in the NN cannot reconnect with the new KMS. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146923#comment-14146923 ] Allen Wittenauer commented on HADOOP-11009: --- TestUniformSizeInputFormat appears unrelated, but TestOptionsParser clearly needs to get updated since it is using distcp for its tests. (insert wah-wah noise here) > Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146913#comment-14146913 ] Chris Nauroth commented on HADOOP-11064: Thanks for the review, Colin. After I resolve this, I'll file a separate jira for the follow-ups. We can resume discussion there. > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 > method changes > -- > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Colin Patrick McCabe >Priority: Blocker > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146909#comment-14146909 ] Colin Patrick McCabe commented on HADOOP-11064: --- Thank you for taking the lead on this, Chris. I still find this an unpleasant solution, but it looks like what we're going to go with for 2.6. Sorry that I have not had time to call a meeting due to schedule constraints. +1 for the patch pending jenkins > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 > method changes > -- > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Colin Patrick McCabe >Priority: Blocker > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-11064: --- Attachment: HADOOP-11064.006.patch Here is patch v6 with 2 lines added to suppress deprecation warnings in the test. The findbugs warning is unrelated. > UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 > method changes > -- > > Key: HADOOP-11064 > URL: https://issues.apache.org/jira/browse/HADOOP-11064 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.6.0 > Environment: Hadoop 2.6 cluster, trying to run code containing hadoop > 2.4 JARs >Reporter: Steve Loughran >Assignee: Colin Patrick McCabe >Priority: Blocker > Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, > HADOOP-11064.003.patch, HADOOP-11064.004.patch, HADOOP-11064.005.patch, > HADOOP-11064.006.patch > > > The private native method names and signatures in {{NativeCrc32}} were > changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied > link errors when they try to perform checksums. > This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless > rebuilt and repackaged with the hadoop-2.6 JARs -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146896#comment-14146896 ] Hadoop QA commented on HADOOP-11009: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12671035/HADOOP-11009.4.patch against trunk revision 9fa5a89. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-tools/hadoop-distcp: org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat org.apache.hadoop.tools.TestOptionsParser {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4802//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4802//console This message is automatically generated. 
> Add Timestamp Preservation to DistCp > > > Key: HADOOP-11009 > URL: https://issues.apache.org/jira/browse/HADOOP-11009 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.4.0 >Reporter: Gary Steelman >Assignee: Gary Steelman > Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, > HADOOP-11009.3.patch, HADOOP-11009.4.patch > > > Currently access and modification times are not preserved on files copied > using DistCp. This patch adds an option to DistCp for timestamp preservation. > The patch is ready, but I understand there is a Contributor form I need to sign > before I can upload it. Can someone point me in the right direction for this > form? Thanks! -- This message was sent by Atlassian JIRA (v6.3.4#6332)
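Per file, the preservation the patch proposes amounts to copying the bytes and then restoring the source's modification time on the destination. The sketch below shows that idea with plain java.nio.file calls against the local filesystem; it is an illustration only, not DistCp's actual code path (in Hadoop the times would be restored through the FileSystem API after the copy).

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.FileTime;

public class PreserveTimes {
    // Copy src to dst, then carry the source's modification time over:
    // the per-file essence of a "preserve timestamps" copy option.
    // Local-filesystem sketch, not DistCp's real implementation.
    static void copyPreservingTimes(Path src, Path dst) throws IOException {
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        Files.setLastModifiedTime(dst, Files.getLastModifiedTime(src));
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".txt");
        Files.write(src, "hello".getBytes("UTF-8"));
        // Whole-second timestamp so filesystem granularity cannot bite.
        Files.setLastModifiedTime(src, FileTime.fromMillis(1_000_000_000_000L));
        Path dst = Files.createTempFile("dst", ".txt");
        copyPreservingTimes(src, dst);
        // Destination now reports the source's modification time.
        System.out.println(Files.getLastModifiedTime(dst)
                .equals(Files.getLastModifiedTime(src)));
    }
}
```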
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11009: -- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11009: -- Status: Open (was: Patch Available)
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146866#comment-14146866 ] Allen Wittenauer commented on HADOOP-11009: --- You didn't have to change the enum. :) Thanks for the update [~gsteelman]! I'll start banging on it and see if I find anything else. If not, we'll get this committed.
[jira] [Commented] (HADOOP-6964) Allow compact property description in xml
[ https://issues.apache.org/jira/browse/HADOOP-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146850#comment-14146850 ] Allen Wittenauer commented on HADOOP-6964: -- Findbugs failure is unrelated. See HADOOP-11122 . > Allow compact property description in xml > - > > Key: HADOOP-6964 > URL: https://issues.apache.org/jira/browse/HADOOP-6964 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Reporter: Owen O'Malley > Labels: newbie > Attachments: HADOOP-6964.patch > > > We should allow users to use the more compact form of xml elements. For > example, we could allow: > {noformat} > > {noformat} > The old format would also be supported. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
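The two shapes under discussion are the classic expanded element (`<property><name>…</name><value>…</value></property>`) and a compact attribute form such as `<property name="io.sort.mb" value="100"/>`. Below is a hedged sketch of accepting both with a plain DOM parse; the property name is just an example, and Hadoop's real Configuration loader is considerably more involved than this.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

public class CompactProps {
    // Parse a single <property> element in either the expanded form
    //   <property><name>n</name><value>v</value></property>
    // or the compact attribute form
    //   <property name="n" value="v"/>
    // and return {name, value}. Illustrative sketch of the HADOOP-6964
    // proposal, not Hadoop's actual Configuration parsing code.
    static String[] parseProperty(String xml) throws Exception {
        Element prop = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")))
                .getDocumentElement();
        if (prop.hasAttribute("name")) {   // compact attribute form
            return new String[] { prop.getAttribute("name"),
                                  prop.getAttribute("value") };
        }
        // Classic expanded form with nested <name>/<value> children.
        String name = prop.getElementsByTagName("name").item(0).getTextContent();
        String value = prop.getElementsByTagName("value").item(0).getTextContent();
        return new String[] { name, value };
    }

    public static void main(String[] args) throws Exception {
        String[] expanded = parseProperty(
            "<property><name>io.sort.mb</name><value>100</value></property>");
        String[] compact = parseProperty(
            "<property name=\"io.sort.mb\" value=\"100\"/>");
        // Both forms yield the same (name, value) pair.
        System.out.println(expanded[0].equals(compact[0])
                && expanded[1].equals(compact[1]));
    }
}
```

Supporting both forms this way keeps old configuration files valid while letting new ones be far terser.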
[jira] [Updated] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gary Steelman updated HADOOP-11009: --- Attachment: HADOOP-11009.4.patch
[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp
[ https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146845#comment-14146845 ] Gary Steelman commented on HADOOP-11009: Thanks for catching that [~aw]. I missed adding t to the default options! I've uploaded patch v4 now, which should also have a test case for the default values.
[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146841#comment-14146841 ] Hadoop QA commented on HADOOP-11064: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12671029/HADOOP-11064.005.patch against trunk revision 9fa5a89. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:red}-1 javac{color}. The applied patch generated 1265 javac compiler warnings (more than the trunk's current 1263 warnings). {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4801//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4801//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html Javac warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4801//artifact/PreCommit-HADOOP-Build-patchprocess/diffJavacWarnings.txt Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4801//console This message is automatically generated. 
[jira] [Commented] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146837#comment-14146837 ] Allen Wittenauer commented on HADOOP-7984: -- Hopefully [~cnauroth] won't mind me adding him to this JIRA so that he can look over the batch code. ;) It looks good, with only one issue that I saw. We should avoid adding more of the crazy project split variables. So please drop the HADOOP_MAPRED_LOGLEVEL and YARN_LOGLEVEL bits and just use HADOOP_LOGLEVEL everywhere. Those per-project vars are a source of continual problems out in the real world, and we definitely don't want to propagate more of them! (Easy example: 'hadoop jar' and 'yarn jar' work differently depending upon what is in the various *-env.sh, despite doing essentially the same thing...) I saw that you changed the log level for the security audit log, but I don't think it has anything but INFO coded. It's a coin toss whether or not we should actually change that. Also, one of the stated goals of this JIRA was to change the default from INFO up to WARN. Is this something we still want to do? Comments from the crowd would be good... :) Thanks! > Add hadoop command verbose and debug options > > > Key: HADOOP-7984 > URL: https://issues.apache.org/jira/browse/HADOOP-7984 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Eli Collins >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: HADOOP-7984.patch, HADOOP-7984.patch > > > It would be helpful if bin/hadoop had verbose and debug flags. Currently > users need to set an env variable or prefix the command (eg > "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user > friendly. We currently log INFO by default. How about we only log ERROR and > WARN by default, then -verbose triggers INFO and -debug triggers DEBUG? -- This message was sent by Atlassian JIRA (v6.3.4#6332)
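The "one variable everywhere" suggestion reduces to a single lookup with one fallback default, rather than per-project variables that can drift apart. A hypothetical helper showing that shape; Hadoop actually does this in its shell scripts, and nothing below is the project's real code.

```java
import java.util.HashMap;
import java.util.Map;

public class LogLevel {
    // Resolve the effective log level from one shared variable
    // (HADOOP_LOGLEVEL), as suggested above, instead of consulting
    // per-project copies like HADOOP_MAPRED_LOGLEVEL or YARN_LOGLEVEL.
    // The env map is passed in to keep the helper testable.
    static String effectiveLevel(Map<String, String> env) {
        String level = env.get("HADOOP_LOGLEVEL");
        return (level == null || level.isEmpty()) ? "INFO" : level;
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        System.out.println(effectiveLevel(env));        // default when unset
        env.put("HADOOP_LOGLEVEL", "DEBUG");
        System.out.println(effectiveLevel(env));        // explicit override
    }
}
```

A single resolution point also makes the proposed default change (INFO to WARN) a one-line edit instead of several.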
[jira] [Updated] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes
[ https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-11064: --- Attachment: HADOOP-11064.005.patch I went ahead and prepared a patch that restores {{nativeVerifyChunkedSums}}, marked as deprecated. I'm uploading this as v5. I also added a new test suite, {{TestNativeCrc32}}. The main purpose of this test suite was to exercise the old {{nativeVerifyChunkedSums}} code to help protect us from removing it again. While I was in here, I decided to make it a comprehensive test suite that covers all of the {{NativeCrc32}} methods.
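The compatibility pattern behind patch v5: when a method that older JARs link against has been renamed, restore the old entry point as a deprecated shim that delegates to the new one rather than deleting it. Everything below is an illustrative stand-in; the real {{NativeCrc32}} methods are private native JNI entry points with different signatures, and the arithmetic here is a toy placeholder for the CRC work.

```java
public class ChecksumNatives {
    // New-style method: the name callers compiled against 2.6 will use.
    // The byte-summing body is a toy stand-in for the checksum work.
    static int verifyChunkedSumsByteArray(byte[] data) {
        int sum = 0;
        for (byte b : data) {
            sum += b & 0xff;
        }
        return sum;
    }

    /**
     * Old-style name retained so callers that linked against the previous
     * release still resolve it; it simply delegates to the renamed method.
     */
    @Deprecated
    static int nativeVerifyChunkedSums(byte[] data) {
        return verifyChunkedSumsByteArray(data);
    }

    public static void main(String[] args) {
        byte[] d = {1, 2, 3};
        // Old and new entry points agree, so both generations of callers work.
        System.out.println(nativeVerifyChunkedSums(d)
                == verifyChunkedSumsByteArray(d));
    }
}
```

Keeping the shim costs almost nothing and, paired with a test suite like the {{TestNativeCrc32}} Chris describes, guards against the method being removed again.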
[jira] [Commented] (HADOOP-10987) Provide an iterator-based listing API for FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146572#comment-14146572 ] Hadoop QA commented on HADOOP-10987: {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12670966/HADOOP-10987.v3.patch against trunk revision ef784a2. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: org.apache.hadoop.crypto.random.TestOsSecureRandom org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4799//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4799//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4799//console This message is automatically generated. 
> Provide an iterator-based listing API for FileSystem > > > Key: HADOOP-10987 > URL: https://issues.apache.org/jira/browse/HADOOP-10987 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee > Attachments: HADOOP-10987.patch, HADOOP-10987.v2.patch, > HADOOP-10987.v3.patch > > > Iterator based listing methods already exist in {{FileContext}} for both > simple listing and listing with locations. However, {{FileSystem}} lacks the > former. From what I understand, it wasn't added to {{FileSystem}} because it > was believed to be phased out soon. Since {{FileSystem}} is alive and well > today and new features are getting added frequently, I propose adding an > iterator based {{listStatus}} method. As for the name of the new method, we > can use the same name used in {{FileContext}} : {{listStatusIterator()}}. > It will be particularly useful when listing giant directories. Without this, > the client has to build up a huge data structure and hold it in memory. We've > seen client JVMs running out of memory because of this. > Once this change is made, we can modify FsShell, etc. in followup jiras. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
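The memory argument above can be seen with plain java.nio.file: an eager listing materializes every entry up front, while a DirectoryStream yields one entry at a time, so a giant directory never has to fit in client memory at once. Local-filesystem sketch only; {{listStatusIterator()}} on {{FileContext}}/{{FileSystem}} is the Hadoop API this issue concerns.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class LazyListing {
    // Eager listing: the whole directory is collected into one list,
    // which is what blows up client heaps on giant directories.
    static List<Path> eagerList(Path dir) throws IOException {
        List<Path> all = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                all.add(p);                 // entire listing held in memory
            }
        }
        return all;
    }

    // Iterator-style consumption: only one entry is live at a time,
    // the shape an iterator-based listStatus gives callers.
    static long countLazily(Path dir) throws IOException {
        long n = 0;
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path ignored : ds) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("lst");
        for (int i = 0; i < 5; i++) {
            Files.createFile(dir.resolve("f" + i));
        }
        // Both strategies see the same entries; only peak memory differs.
        System.out.println(eagerList(dir).size() == countLazily(dir));
    }
}
```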
[jira] [Commented] (HADOOP-11092) hadoop shell commands should print usage if not given a class
[ https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146510#comment-14146510 ] Hudson commented on HADOOP-11092: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #1906 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1906/]) HADOOP-11092. hadoop shell commands should print usage if not given a class (aw) (aw: rev 3dc28e2052dd3a8e4cd5888fc4f9e7e37f8bc062) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh * hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs * hadoop-common-project/hadoop-common/src/main/bin/hadoop * hadoop-yarn-project/hadoop-yarn/bin/yarn * hadoop-mapreduce-project/bin/mapred > hadoop shell commands should print usage if not given a class > - > > Key: HADOOP-11092 > URL: https://issues.apache.org/jira/browse/HADOOP-11092 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Reporter: Bruno Mahé >Assignee: Allen Wittenauer > Fix For: 3.0.0 > > Attachments: HADOOP-11092.patch, HDFS-2565.patch, HDFS-2565.patch > > > [root@bigtop-fedora-15 ~]# hdfs foobar > Exception in thread "main" java.lang.NoClassDefFoundError: foobar > Caused by: java.lang.ClassNotFoundException: foobar > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > at java.lang.ClassLoader.loadClass(ClassLoader.java:321) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > at java.lang.ClassLoader.loadClass(ClassLoader.java:266) > Could not find the main class: foobar. Program will exit. > Instead of loading any class, it would be nice to explain the command is not > valid and to call print_usage() -- This message was sent by Atlassian JIRA (v6.3.4#6332)
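The shape of the fix: validate the subcommand against the known set and print usage, instead of handing an arbitrary token to the JVM as a class name and surfacing the opaque ClassNotFoundException shown in the report. The command set and messages below are made up for illustration; the real change lives in the shell scripts listed in the commit.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SubcommandDispatch {
    // Hypothetical known-subcommand set; the real scripts enumerate
    // many more (dfs, fsck, namenode, ...).
    static final Set<String> COMMANDS =
            new HashSet<>(Arrays.asList("dfs", "fsck", "version"));

    // Return what the launcher would do for a given token: run a known
    // subcommand, or print usage for anything unrecognized instead of
    // letting the classloader fail on it.
    static String dispatch(String cmd) {
        if (!COMMANDS.contains(cmd)) {
            return "Usage: hdfs [dfs|fsck|version]";
        }
        return "running " + cmd;
    }

    public static void main(String[] args) {
        System.out.println(dispatch("foobar"));   // unknown: usage, not a crash
        System.out.println(dispatch("fsck"));     // known: dispatched normally
    }
}
```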
[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store
[ https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146513#comment-14146513 ] Hudson commented on HADOOP-11017: - FAILURE: Integrated in Hadoop-Mapreduce-trunk #1906 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1906/]) HADOOP-11017. Addendum to fix RM HA. KMS delegation token secret manager should be able to use zookeeper as store. (Arun Suresh via kasha) (kasha: rev ef784a2e08c2452026a85ae382a956ff7deecbd0) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java > KMS delegation token secret manager should be able to use zookeeper as store > > > Key: HADOOP-11017 > URL: https://issues.apache.org/jira/browse/HADOOP-11017 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.6.0 >Reporter: Alejandro Abdelnur >Assignee: Arun Suresh > Fix For: 2.6.0 > > Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, > HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, > HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, > HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, > HADOOP-11017.9.patch, HADOOP-11017.WIP.patch > > > This will allow supporting multiple KMS instances behind a load balancer. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HADOOP-6964) Allow compact property description in xml
[ https://issues.apache.org/jira/browse/HADOOP-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146459#comment-14146459 ] Hadoop QA commented on HADOOP-6964: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12670973/HADOOP-6964.patch against trunk revision ef784a2. {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. There were no new javadoc warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 2.0.3) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:red}-1 core tests{color}. The patch failed these unit tests in hadoop-common-project/hadoop-common: org.apache.hadoop.crypto.random.TestOsSecureRandom {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/4800//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/4800//artifact/PreCommit-HADOOP-Build-patchprocess/newPatchFindbugsWarningshadoop-common.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4800//console This message is automatically generated. 
[jira] [Updated] (HADOOP-6964) Allow compact property description in xml
[ https://issues.apache.org/jira/browse/HADOOP-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kengo Seki updated HADOOP-6964: --- Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-6964) Allow compact property description in xml
[ https://issues.apache.org/jira/browse/HADOOP-6964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kengo Seki updated HADOOP-6964: --- Attachment: HADOOP-6964.patch Attaching a draft patch.
[jira] [Commented] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146354#comment-14146354 ] Akira AJISAKA commented on HADOOP-7984: --- The patch command cannot apply the patch because the .cmd files have CR+LF line endings but the patch, created by git diff, does not. Reviewers can use "git apply -p0 /path/to/patch" to apply the patch locally.
[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store
[ https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146352#comment-14146352 ] Hudson commented on HADOOP-11017: - SUCCESS: Integrated in Hadoop-Hdfs-trunk #1881 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1881/]) HADOOP-11017. Addendum to fix RM HA. KMS delegation token secret manager should be able to use zookeeper as store. (Arun Suresh via kasha) (kasha: rev ef784a2e08c2452026a85ae382a956ff7deecbd0) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java
[jira] [Commented] (HADOOP-11092) hadoop shell commands should print usage if not given a class
[ https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146349#comment-14146349 ] Hudson commented on HADOOP-11092: - SUCCESS: Integrated in Hadoop-Hdfs-trunk #1881 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1881/]) HADOOP-11092. hadoop shell commands should print usage if not given a class (aw) (aw: rev 3dc28e2052dd3a8e4cd5888fc4f9e7e37f8bc062) * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh * hadoop-common-project/hadoop-common/src/main/bin/hadoop * hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs * hadoop-mapreduce-project/bin/mapred * hadoop-yarn-project/hadoop-yarn/bin/yarn
[jira] [Updated] (HADOOP-10987) Provide an iterator-based listing API for FileSystem
[ https://issues.apache.org/jira/browse/HADOOP-10987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HADOOP-10987: Attachment: HADOOP-10987.v3.patch fixed javac warnings
[jira] [Commented] (HADOOP-11123) Uber-JIRA: Hadoop on Java 9
[ https://issues.apache.org/jira/browse/HADOOP-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146228#comment-14146228 ] Steve Loughran commented on HADOOP-11123: - clearly depends on HADOOP-11090 and Java 8 support > Uber-JIRA: Hadoop on Java 9 > --- > > Key: HADOOP-11123 > URL: https://issues.apache.org/jira/browse/HADOOP-11123 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0 > Environment: Java 9 >Reporter: Steve Loughran > > JIRA to cover/track issues related to Hadoop on Java 9. > Java 9 will have some significant changes, one of which is the removal of > various {{com.sun}} classes. These removals need to be handled or Hadoop will > not be able to run on a Java 9 JVM.
[jira] [Commented] (HADOOP-11092) hadoop shell commands should print usage if not given a class
[ https://issues.apache.org/jira/browse/HADOOP-11092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146210#comment-14146210 ] Hudson commented on HADOOP-11092: - FAILURE: Integrated in Hadoop-Yarn-trunk #690 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/690/]) HADOOP-11092. hadoop shell commands should print usage if not given a class (aw) (aw: rev 3dc28e2052dd3a8e4cd5888fc4f9e7e37f8bc062) * hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs * hadoop-common-project/hadoop-common/CHANGES.txt * hadoop-mapreduce-project/bin/mapred * hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh * hadoop-common-project/hadoop-common/src/main/bin/hadoop * hadoop-yarn-project/hadoop-yarn/bin/yarn > hadoop shell commands should print usage if not given a class > - > > Key: HADOOP-11092 > URL: https://issues.apache.org/jira/browse/HADOOP-11092 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Reporter: Bruno Mahé >Assignee: Allen Wittenauer > Fix For: 3.0.0 > > Attachments: HADOOP-11092.patch, HDFS-2565.patch, HDFS-2565.patch > > > [root@bigtop-fedora-15 ~]# hdfs foobar > Exception in thread "main" java.lang.NoClassDefFoundError: foobar > Caused by: java.lang.ClassNotFoundException: foobar > at java.net.URLClassLoader$1.run(URLClassLoader.java:217) > at java.security.AccessController.doPrivileged(Native Method) > at java.net.URLClassLoader.findClass(URLClassLoader.java:205) > at java.lang.ClassLoader.loadClass(ClassLoader.java:321) > at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) > at java.lang.ClassLoader.loadClass(ClassLoader.java:266) > Could not find the main class: foobar. Program will exit. > Instead of loading any class, it would be nice to explain the command is not > valid and to call print_usage()
[jira] [Commented] (HADOOP-11017) KMS delegation token secret manager should be able to use zookeeper as store
[ https://issues.apache.org/jira/browse/HADOOP-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146213#comment-14146213 ] Hudson commented on HADOOP-11017: - FAILURE: Integrated in Hadoop-Yarn-trunk #690 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/690/]) HADOOP-11017. Addendum to fix RM HA. KMS delegation token secret manager should be able to use zookeeper as store. (Arun Suresh via kasha) (kasha: rev ef784a2e08c2452026a85ae382a956ff7deecbd0) * hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java > KMS delegation token secret manager should be able to use zookeeper as store > > > Key: HADOOP-11017 > URL: https://issues.apache.org/jira/browse/HADOOP-11017 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.6.0 >Reporter: Alejandro Abdelnur >Assignee: Arun Suresh > Fix For: 2.6.0 > > Attachments: HADOOP-11017.1.patch, HADOOP-11017.10.patch, > HADOOP-11017.11.patch, HADOOP-11017.12.patch, HADOOP-11017.2.patch, > HADOOP-11017.3.patch, HADOOP-11017.4.patch, HADOOP-11017.5.patch, > HADOOP-11017.6.patch, HADOOP-11017.7.patch, HADOOP-11017.8.patch, > HADOOP-11017.9.patch, HADOOP-11017.WIP.patch > > > This will allow supporting multiple KMS instances behind a load balancer.
[jira] [Updated] (HADOOP-11124) Java 9 removes/hides Java internal classes
[ https://issues.apache.org/jira/browse/HADOOP-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-11124: Attachment: JDK Internal API Usage Report for hadoop-2.5.1.html Generated report for Java 9 incompatibilities > Java 9 removes/hides Java internal classes > -- > > Key: HADOOP-11124 > URL: https://issues.apache.org/jira/browse/HADOOP-11124 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: JDK Internal API Usage Report for hadoop-2.5.1.html > > > Java 9 removes various internal classes; adapt the code to this. > It should be possible to switch to code that works on Java7+, yet which > adapts to the changes.
[jira] [Created] (HADOOP-11124) Java 9 removes/hides Java internal classes
Steve Loughran created HADOOP-11124: --- Summary: Java 9 removes/hides Java internal classes Key: HADOOP-11124 URL: https://issues.apache.org/jira/browse/HADOOP-11124 Project: Hadoop Common Issue Type: Sub-task Affects Versions: 3.0.0 Reporter: Steve Loughran Java 9 removes various internal classes; adapt the code to this. It should be possible to switch to code that works on Java7+, yet which adapts to the changes.
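One common way to get code that "works on Java7+, yet which adapts to the changes" is to probe for an internal class reflectively instead of compiling against it, and fall back when it is absent. A minimal sketch — the class name below is just an example of a {{com.sun}} type; the attached usage report lists the actual classes Hadoop touches:

```java
// Sketch: probe for a JDK-internal class at runtime rather than linking
// against it at compile time, so the same jar loads on JDKs where the
// class has been removed or hidden.
public class InternalProbe {
    public static void main(String[] args) {
        boolean available;
        try {
            // Example probe target only; substitute the internal class
            // the calling code actually depends on.
            Class.forName("com.sun.management.UnixOperatingSystemMXBean");
            available = true;
        } catch (ClassNotFoundException e) {
            // Class gone or inaccessible: take the portable fallback path.
            available = false;
        }
        System.out.println("internal API available: " + available);
    }
}
```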
[jira] [Created] (HADOOP-11123) Uber-JIRA: Hadoop on Java 9
Steve Loughran created HADOOP-11123: --- Summary: Uber-JIRA: Hadoop on Java 9 Key: HADOOP-11123 URL: https://issues.apache.org/jira/browse/HADOOP-11123 Project: Hadoop Common Issue Type: Task Affects Versions: 3.0.0 Environment: Java 9 Reporter: Steve Loughran JIRA to cover/track issues related to Hadoop on Java 9. Java 9 will have some significant changes, one of which is the removal of various {{com.sun}} classes. These removals need to be handled or Hadoop will not be able to run on a Java 9 JVM.
[jira] [Commented] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146106#comment-14146106 ] Hadoop QA commented on HADOOP-7984: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12670934/HADOOP-7984.patch against trunk revision ef784a2. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4798//console This message is automatically generated. > Add hadoop command verbose and debug options > > > Key: HADOOP-7984 > URL: https://issues.apache.org/jira/browse/HADOOP-7984 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Eli Collins >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: HADOOP-7984.patch, HADOOP-7984.patch > > > It would be helpful if bin/hadoop had verbose and debug flags. Currently > users need to set an env variable or prefix the command (eg > "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user > friendly. We currently log INFO by default. How about we only log ERROR and > WARN by default, then -verbose triggers INFO and -debug triggers DEBUG?
[jira] [Updated] (HADOOP-7984) Add hadoop command verbose and debug options
[ https://issues.apache.org/jira/browse/HADOOP-7984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira AJISAKA updated HADOOP-7984: -- Attachment: HADOOP-7984.patch Attaching a patch to add a "--loglevel" option. I verified it worked on Linux. However, I have not tried it on Windows yet since I don't have a Windows environment. > Add hadoop command verbose and debug options > > > Key: HADOOP-7984 > URL: https://issues.apache.org/jira/browse/HADOOP-7984 > Project: Hadoop Common > Issue Type: New Feature > Components: scripts >Reporter: Eli Collins >Assignee: Akira AJISAKA >Priority: Minor > Labels: newbie > Attachments: HADOOP-7984.patch, HADOOP-7984.patch > > > It would be helpful if bin/hadoop had verbose and debug flags. Currently > users need to set an env variable or prefix the command (eg > "HADOOP_ROOT_LOGGER=DEBUG,console hadoop distcp") which isn't very user > friendly. We currently log INFO by default. How about we only log ERROR and > WARN by default, then -verbose triggers INFO and -debug triggers DEBUG?
[jira] [Commented] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job
[ https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14146021#comment-14146021 ] Hadoop QA commented on HADOOP-9992: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12629331/hadoop-9992-v3.patch against trunk revision ef784a2. {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/4797//console This message is automatically generated. > Modify the NN loadGenerator to optionally run as a MapReduce job > > > Key: HADOOP-9992 > URL: https://issues.apache.org/jira/browse/HADOOP-9992 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akshay Radia >Assignee: Akshay Radia > Attachments: hadoop-9992-v2.patch, hadoop-9992-v3.patch, > hadoop-9992.patch > >
[jira] [Updated] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job
[ https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sanjay Radia updated HADOOP-9992: - Status: Patch Available (was: Open) > Modify the NN loadGenerator to optionally run as a MapReduce job > > > Key: HADOOP-9992 > URL: https://issues.apache.org/jira/browse/HADOOP-9992 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akshay Radia >Assignee: Akshay Radia > Attachments: hadoop-9992-v2.patch, hadoop-9992-v3.patch, > hadoop-9992.patch > >
[jira] [Updated] (HADOOP-9992) Modify the NN loadGenerator to optionally run as a MapReduce job
[ https://issues.apache.org/jira/browse/HADOOP-9992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sanjay Radia updated HADOOP-9992: - Status: Open (was: Patch Available) > Modify the NN loadGenerator to optionally run as a MapReduce job > > > Key: HADOOP-9992 > URL: https://issues.apache.org/jira/browse/HADOOP-9992 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akshay Radia >Assignee: Akshay Radia > Attachments: hadoop-9992-v2.patch, hadoop-9992-v3.patch, > hadoop-9992.patch > >