[jira] [Updated] (HADOOP-9637) TestAggregatedLogFormat fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chuan Liu updated HADOOP-9637:
--
    Attachment: HADOOP-9637-trunk.patch

Posting a patch. In this patch, I implemented {{NativeIO.POSIX.getFstat()}} for Windows. There is some minimal refactoring in winutils to make this work, including:
1. Moved the symlink and directory check from the '{{Ls()}}' method to '{{FindFileOwnerAndPermission()}}'.
2. Added an additional mode flag equivalent to S_IFREG on Linux.
3. Added a '{{FindFileOwnerAndPermissionByHandle()}}' method -- a lightweight method that calls '{{FindFileOwnerAndPermission()}}'. The Java native method {{fstat()}} on Windows directly calls this new method to get the file stat.
4. Enabled two TestNativeIO test cases, {{testFstat}} and {{testFstatClosedFd}}, that were previously disabled on Windows.
5. In {{NativeIO.POSIX.getFstat()}}, I used a pattern similar to {{NativeIO.POSIX.chmod()}} to map Windows error 6 (ERROR_INVALID_HANDLE: the handle is invalid) to the Linux errno Errno.EBADF. Analogously, {{chmod()}} maps Windows error code 3 (ERROR_PATH_NOT_FOUND) to the Linux Errno.ENOENT.

> TestAggregatedLogFormat fails on Windows
>
> Key: HADOOP-9637
> URL: https://issues.apache.org/jira/browse/HADOOP-9637
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.1.0-beta
> Reporter: Chuan Liu
> Assignee: Chuan Liu
> Attachments: HADOOP-9637-trunk.patch
>
> TestAggregatedLogFormat.testContainerLogsFileAccess fails on Windows. The test case tries to simulate a situation where the first log file is owned by a different user (probably a symlink) and the second one by the user itself. In this situation, the attempt to aggregate the logs should fail with the error message "Owner ... for path ... did not match expected owner ...". The check on the file owner happens in the {{AggregatedLogFormat.write()}} method. The method calls {{SecureIOUtils.openForRead()}} to read the log files before writing them out to the OutputStream. {{SecureIOUtils.openForRead()}} uses {{NativeIO.POSIX.getFstat()}} to get the file owner and group. We don't have a {{NativeIO.POSIX.getFstat()}} implementation on Windows; hence the failure.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira
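As a rough illustration of the error-translation pattern described in item 5 above, the mapping can be sketched in C. This is an editorial sketch, not the actual winutils/NativeIO code; the function name and the EIO fallback are assumptions, though the numeric values match the real ERROR_INVALID_HANDLE and ERROR_PATH_NOT_FOUND codes:

```c
#include <assert.h>
#include <errno.h>

/* Windows system error codes referenced in the patch discussion. */
enum { WIN_ERROR_PATH_NOT_FOUND = 3, WIN_ERROR_INVALID_HANDLE = 6 };

/* Sketch of mapping a Windows error code to a POSIX errno value,
 * mirroring how getFstat() maps error 6 to EBADF and chmod() maps
 * error 3 to ENOENT.  Falling back to EIO for unknown codes is an
 * assumption of this sketch, not documented Hadoop behavior. */
static int map_windows_error_to_errno(int win_error) {
    switch (win_error) {
    case WIN_ERROR_INVALID_HANDLE:  return EBADF;  /* bad handle   */
    case WIN_ERROR_PATH_NOT_FOUND:  return ENOENT; /* missing path */
    default:                        return EIO;    /* generic I/O  */
    }
}
```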
[jira] [Commented] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c
[ https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680217#comment-13680217 ]

V. Karthik Kumar commented on HADOOP-9635:
--

Hi [~cmccabe], Thank you for the review. I have incorporated that feedback into the patch, which is now attached.

> Potential Stack Overflow in DomainSocket.c
> --
>
> Key: HADOOP-9635
> URL: https://issues.apache.org/jira/browse/HADOOP-9635
> Project: Hadoop Common
> Issue Type: Bug
> Components: native
> Affects Versions: 2.3.0
> Environment: OSX 10.8
> Reporter: V. Karthik Kumar
> Labels: patch, security
> Fix For: 2.3.0
>
> Attachments: 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch
>
> When I was running on OSX, the DataNode was segfaulting. On investigation, it was tracked down to this code. A potential stack overflow was also identified.
> {code}
>    utfLength = (*env)->GetStringUTFLength(env, jstr);
>    if (utfLength > sizeof(path)) {
>      jthr = newIOException(env, "path is too long! We expected a path "
>          "no longer than %zd UTF-8 bytes.", sizeof(path));
>      goto done;
>    }
>    // GetStringUTFRegion does not pad with NUL
>    (*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path);
>    ...
>    // strtok_r can set the rest pointer to NULL when no tokens are found,
>    // causing the JVM to crash in rest[0]
>    for (check[0] = '/', check[1] = '\0', rest = path, token = "";
>        token && rest[0];
>        token = strtok_r(rest, "/", &rest)) {
> {code}
[jira] [Updated] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c
[ https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

V. Karthik Kumar updated HADOOP-9635:
--
    Fix Version/s: 2.3.0
    Labels: patch security (was: )
    Target Version/s: 2.3.0
    Status: Patch Available (was: Open)

> Potential Stack Overflow in DomainSocket.c
[jira] [Updated] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c
[ https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

V. Karthik Kumar updated HADOOP-9635:
--
    Attachment: 0001-HADOOP-9635-Fix-Potential-Stack-Overflow-in-DomainSo.patch

Incorporated the changes for the exception. This is also the first properly formatted patch.

> Potential Stack Overflow in DomainSocket.c
[jira] [Updated] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c
[ https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

V. Karthik Kumar updated HADOOP-9635:
--
    Attachment: (was: DomainSocket.diff)

> Potential Stack Overflow in DomainSocket.c
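The crash mode described in this issue -- strtok_r leaving the save pointer unusable once the input is exhausted, after which the loop condition dereferences it -- can be avoided with a loop that re-checks both the token and the save pointer on every iteration. The following is a standalone sketch of that guard, not the code from DomainSocket.c or its patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Count the '/'-separated components of a path with strtok_r, guarding
 * against both a NULL token (no tokens left) and a NULL/empty save
 * pointer (some strtok_r implementations store NULL once the input is
 * exhausted).  strtok_r modifies its input, so callers must pass a
 * writable buffer, never a string literal. */
static int count_path_components(char *path) {
    int n = 0;
    char *rest = path;
    char *token;
    /* Checking rest before touching rest[0] avoids the dereference
     * that crashed the JVM in the original loop condition. */
    while (rest != NULL && rest[0] != '\0' &&
           (token = strtok_r(rest, "/", &rest)) != NULL) {
        n++;
    }
    return n;
}
```

With this shape, a path consisting only of separators (e.g. "///") yields zero tokens and the loop exits cleanly instead of reading through a NULL pointer.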
[jira] [Commented] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680182#comment-13680182 ]

Hudson commented on HADOOP-9630:
--

Integrated in Hadoop-trunk-Commit #3894 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3894/])
HADOOP-9630. [RPC v9] Remove IpcSerializationType. (Junping Du via llu) (Revision 1491682)

Result = SUCCESS
llu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491682
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java

> Remove IpcSerializationType
> ---
>
> Key: HADOOP-9630
> URL: https://issues.apache.org/jira/browse/HADOOP-9630
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Luke Lu
> Assignee: Junping Du
> Labels: rpc
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9630.patch
>
> IpcSerializationType is assumed to be protobuf for the foreseeable future. Not to be confused with RpcKind, which still supports different RpcEngines. Let's remove the dead code, which can be confusing to maintain.
[jira] [Updated] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Luke Lu updated HADOOP-9630:
--
    Resolution: Fixed
    Fix Version/s: 2.1.0-beta
    Hadoop Flags: Incompatible change, Reviewed (was: Incompatible change)
    Status: Resolved (was: Patch Available)

Committed to trunk, branch-2 and 2.1-beta. Thanks Junping!

> Remove IpcSerializationType
[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[ https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680180#comment-13680180 ]

Xi Fang commented on HADOOP-9624:
--

Thanks Aaron!

> TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
> ---
>
> Key: HADOOP-9624
> URL: https://issues.apache.org/jira/browse/HADOOP-9624
> Project: Hadoop Common
> Issue Type: Test
> Components: test
> Affects Versions: 1-win
> Environment: Windows
> Reporter: Xi Fang
> Assignee: Xi Fang
> Priority: Minor
> Labels: test
> Attachments: HADOOP-9624.patch
>
> TestFSMainOperationsLocalFileSystem extends the class FSMainOperationsBaseTest. The PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks whether a path has "x" in its final name component or "X" anywhere in its full path string.
> {code}
> final private static PathFilter TEST_X_FILTER = new PathFilter() {
>   public boolean accept(Path file) {
>     if (file.getName().contains("x") || file.toString().contains("X"))
>       return true;
>     else
>       return false;
>   }
> };
> {code}
> Some of the test cases construct a path by combining the path "TEST_ROOT_DIR" with a customized partial path.
> The problem is that once the enlistment root path has "X" in its name, "TEST_ROOT_DIR" will also have "X" in its name. The path check will then pass even if the customized partial path doesn't have "X". However, in this case the path filter is supposed to reject the path.
> An easy fix is to change "file.toString().contains("X")" to "file.getName().contains("X")". Note that org.apache.hadoop.fs.Path.getName() only returns the final component of the path.
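The essence of the proposed fix -- matching only the final path component rather than the whole path string -- can be sketched outside of Hadoop. The helper names below are hypothetical; the real fix is simply swapping toString() for getName() in the Java filter:

```c
#include <assert.h>
#include <string.h>

/* Return the final component of a '/'-separated path, analogous to
 * org.apache.hadoop.fs.Path.getName(). */
static const char *final_component(const char *path) {
    const char *slash = strrchr(path, '/');
    return slash != NULL ? slash + 1 : path;
}

/* Filter in the spirit of TEST_X_FILTER, but checking both 'x' and 'X'
 * against the final component only, so an "X" higher up in the test
 * root directory cannot cause a false match. */
static int accept_x_filter(const char *path) {
    const char *name = final_component(path);
    return strchr(name, 'x') != NULL || strchr(name, 'X') != NULL;
}
```

With this version, a file under an enlistment root like "/enlistmentX/" is rejected unless its own name contains an 'x' or 'X'.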
[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server
[ https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680174#comment-13680174 ]

Kevin Minder commented on HADOOP-9533:
--

Although meetup.com was recommended to me as a mechanism to schedule a discussion, that doesn't really seem like it will work, since this needs to be virtual. I've scheduled a Google Hangout for 12pm PT on Wednesday 6/12. https://plus.google.com/hangouts/_/calendar/a2V2aW4ubWluZGVyQGhvcnRvbndvcmtzLmNvbQ.qa0og2a0gaag9djeviv2rai63c I'm happy to move this around based on the availability of those interested; I'm just not sure of the timezones involved. You can email my apache account (kminder at apache) or my jira profile address if you don't want that info here.

At any rate, for this "pre-meeting" I'd like to discuss what everyone would like to get out of our time at the Summit and how we can prepare in advance. To seed this, I think there are a few things we need to nail down before we get there:
1) The scope of the discussion
2) The basic goals/requirements from various perspectives
3) Agreement on the design discussion logistics (we only have two hours)

At the Summit we can:
1) Discuss design approaches. I want to stress that these discussions need to be at a fairly high level given the time allocation. Ideally we would have been able to cover this already here, but we are rapidly running out of time.
2) Discuss a general implementation approach for any change of this nature
3) Discuss rollout expectations (e.g. Hadoop ?.?)

> Centralized Hadoop SSO/Token Server
> ---
>
> Key: HADOOP-9533
> URL: https://issues.apache.org/jira/browse/HADOOP-9533
> Project: Hadoop Common
> Issue Type: New Feature
> Components: security
> Reporter: Larry McCay
> Attachments: HSSO-Interaction-Overview-rev-1.docx, HSSO-Interaction-Overview-rev-1.pdf
>
> This is an umbrella Jira filing to oversee a set of proposals for introducing a new master service for Hadoop Single Sign On (HSSO).
> There is an increasing need for pluggable authentication providers that authenticate both users and services, as well as validate tokens, in order to federate identities authenticated by trusted IDPs. These IDPs may be deployed within the enterprise, or they may be third-party IDPs external to the enterprise.
> These needs speak to a specific pain point: a narrow integration path into the enterprise identity infrastructure. Kerberos is a fine solution for those that already have it in place or are willing to adopt its use, but there remains a class of user that finds this unacceptable and needs to integrate with a wider variety of identity management solutions.
> Another specific pain point is that of rolling and distributing keys. A related and integral part of the HSSO server is a library called the Credential Management Framework (CMF), which will be a common library for easing the management of secrets, keys and credentials.
> Initially, the existing delegation, block access and job tokens will continue to be utilized. There may be some changes required to leverage a PKI-based signature facility rather than shared secrets. This is a means to simplify the solution for the pain point of distributing shared secrets.
> This project will primarily centralize the responsibility of authentication and federation into a single service that is trusted across the Hadoop cluster and optionally across multiple clusters.
> This greatly simplifies a number of things in the Hadoop ecosystem:
> 1. a single token format that is used across all of Hadoop regardless of authentication method
> 2. a single service to have pluggable providers instead of all services
> 3. a single token authority that would be trusted across the cluster/s and, through PKI encryption, be able to easily issue cryptographically verifiable tokens
> 4. automatic rolling of the token authority’s keys and publishing of the public key for easy access by those parties that need to verify incoming tokens
> 5. use of PKI for signatures eliminates the need for securely sharing and distributing shared secrets
> In addition to serving as the internal Hadoop SSO service, this service will be leveraged by the Knox Gateway from the cluster perimeter in order to acquire the Hadoop cluster tokens. The same token mechanism that is used for internal services will be used to represent user identities, providing for interesting scenarios such as SSO across Hadoop clusters within an enterprise and/or into the cloud.
> The HSSO service will be comprised of three major components and capabilities:
> 1. Federating IDP – authenticates users/services and issues the common Hadoop token
> 2. Federating SP – validates the token
[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On
[ https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680172#comment-13680172 ]

Kevin Minder commented on HADOOP-9392:
--

Although meetup.com was recommended to me as a mechanism to schedule a discussion, that doesn't really seem like it will work, since this needs to be virtual. I've scheduled a Google Hangout for 12pm PT on Wednesday 6/12. https://plus.google.com/hangouts/_/calendar/a2V2aW4ubWluZGVyQGhvcnRvbndvcmtzLmNvbQ.qa0og2a0gaag9djeviv2rai63c I'm happy to move this around based on the availability of those interested; I'm just not sure of the timezones involved. You can email my apache account (kminder at apache) or my jira profile address if you don't want that info here.

At any rate, for this "pre-meeting" I'd like to discuss what everyone would like to get out of our time at the Summit and how we can prepare in advance. To seed this, I think there are a few things we need to nail down before we get there:
1) The scope of the discussion
2) The basic goals/requirements from various perspectives
3) Agreement on the design discussion logistics (we only have two hours)

At the Summit we can:
1) Discuss design approaches. I want to stress that these discussions need to be at a fairly high level given the time allocation. Ideally we would have been able to cover this already here, but we are rapidly running out of time.
2) Discuss a general implementation approach for any change of this nature
3) Discuss rollout expectations (e.g. Hadoop ?.?)
> Token based authentication and Single Sign On
> -
>
> Key: HADOOP-9392
> URL: https://issues.apache.org/jira/browse/HADOOP-9392
> Project: Hadoop Common
> Issue Type: New Feature
> Components: security
> Reporter: Kai Zheng
> Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: token-based-authn-plus-sso.pdf
>
> This is an umbrella entry for one of project Rhino’s topics; for details of project Rhino, please refer to https://github.com/intel-hadoop/project-rhino/. The major goal for this entry, as described in project Rhino, was:
>
> “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication at the RPC layer, via SASL. However this does not provide valuable attributes such as group membership, classification level, organizational identity, or support for user defined attributes. Hadoop components must interrogate external resources for discovering these attributes and at scale this is problematic. There is also no consistent delegation model. HDFS has a simple delegation capability, and only Oozie can take limited advantage of it. We will implement a common token based authentication framework to decouple internal user and service authentication from external mechanisms used to support it (like Kerberos)”
>
> We’d like to start our work from Hadoop-Common and try to provide common facilities by extending the existing authentication framework to support:
> 1. Pluggable token provider interface
> 2. Pluggable token verification protocol and interface
> 3. Security mechanism to distribute secrets in cluster nodes
> 4. Delegation model of user authentication
[jira] [Commented] (HADOOP-9632) TestShellCommandFencer will fail if there is a 'host' machine in the network
[ https://issues.apache.org/jira/browse/HADOOP-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680161#comment-13680161 ]

Hadoop QA commented on HADOOP-9632:
---

{color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586805/HADOOP-9632.patch against trunk revision .

{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 1 new or modified test file.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2630//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2630//console

This message is automatically generated.

> TestShellCommandFencer will fail if there is a 'host' machine in the network
>
> Key: HADOOP-9632
> URL: https://issues.apache.org/jira/browse/HADOOP-9632
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0, 2.1.0-beta
> Reporter: Chuan Liu
> Assignee: Chuan Liu
> Priority: Minor
> Attachments: HADOOP-9632.patch
>
> TestShellCommandFencer will fail if there is a machine named ‘host’ in the network. The %target_address% environment variable used in the test comes from the result of InetSocketAddress.getAddress(). The method will return 'host/ip' instead of only 'host' when the host actually exists in the network. When the test compares the log output, it assumes there is no IP in the address.
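The normalization such a test would need -- stripping the "/ip" suffix that a resolved address renders with -- can be sketched as follows. This is an illustrative helper (the name is made up; the actual test is Java and compares log strings), shown in C for consistency with the other native-code snippets in this digest:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Copy only the hostname part of a "host/ip" string (the textual form
 * a resolved Java InetAddress renders as) into out.  A string with no
 * '/' is copied unchanged. */
static void hostname_only(const char *addr, char *out, size_t out_size) {
    const char *slash = strchr(addr, '/');
    size_t len = (slash != NULL) ? (size_t)(slash - addr) : strlen(addr);
    if (len >= out_size)
        len = out_size - 1;  /* truncate defensively to fit the buffer */
    memcpy(out, addr, len);
    out[len] = '\0';
}
```

Comparing log output against the result of such a normalization would make the assertion independent of whether 'host' actually resolves on the build machine.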
[jira] [Commented] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680160#comment-13680160 ]

Luke Lu commented on HADOOP-9630:
-

The patch lgtm, +1. Will commit shortly.

> Remove IpcSerializationType
[jira] [Resolved] (HADOOP-9633) An incorrect data node might be added to the network topology, an exception is thrown though
[ https://issues.apache.org/jira/browse/HADOOP-9633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers resolved HADOOP-9633.
--
    Resolution: Fixed

Resolving as a duplicate of HDFS-4521. If we want to fix this in branch-1 as well, let's do it via that JIRA.

> An incorrect data node might be added to the network topology, an exception is thrown though
>
> Key: HADOOP-9633
> URL: https://issues.apache.org/jira/browse/HADOOP-9633
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 1.3.0
> Reporter: Xi Fang
> Priority: Minor
>
> In NetworkTopology#add(Node node), an incorrect node may be added to the cluster even if an exception is thrown.
> This is the original code:
> {code}
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
>     numOfRacks++;
>   }
>   if (!(node instanceof InnerNode)) {
>     if (depthOfAllLeaves == -1) {
>       depthOfAllLeaves = node.getLevel();
>     } else {
>       if (depthOfAllLeaves != node.getLevel()) {
>         LOG.error("Error: can't add leaf node at depth " +
>             node.getLevel() + " to topology:\n" + oldTopoStr);
>         throw new InvalidTopologyException("Invalid network topology. " +
>             "You cannot have a rack and a non-rack node at the same " +
>             "level of the network topology.");
>       }
>     }
>   }
> {code}
> This is a potential bug, because the wrong leaf node has already been added to the cluster before the exception is thrown. However, we can't check (depthOfAllLeaves != node.getLevel()) before if (clusterMap.add(node)), because node.getLevel() will work correctly only after clusterMap.add(node) has been executed.
> A possible solution is to check depthOfAllLeaves inside clusterMap.add(node). Note that this is a recursive call; the check should be put at the bottom of the recursion. If the check fails, don't add the leaf or any of its upstream racks.
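The validate-before-mutate shape the report asks for can be modeled abstractly. This is an editorial sketch of the invariant (every leaf at the same depth), not the NetworkTopology code; the names are hypothetical:

```c
#include <assert.h>

/* Minimal model of a topology that tracks leaf depth.  add_leaf()
 * checks the invariant BEFORE mutating any state, so a rejected node
 * leaves the topology untouched -- the property the original add()
 * violated by throwing only after the node was already inserted. */
static int depth_of_all_leaves = -1;  /* -1 means "no leaves yet" */
static int num_leaves = 0;

/* Returns 0 on success, -1 if the leaf's depth conflicts with the
 * depth of existing leaves. */
static int add_leaf(int level) {
    if (depth_of_all_leaves != -1 && level != depth_of_all_leaves)
        return -1;              /* reject; no state has changed */
    depth_of_all_leaves = level;
    num_leaves++;
    return 0;
}
```

In the real fix the check would sit at the bottom of the recursive add, as the report suggests, so that neither the leaf nor its newly created ancestor racks are committed when the depth check fails.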
[jira] [Commented] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[ https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680146#comment-13680146 ]

Hadoop QA commented on HADOOP-9624:
---

{color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586551/HADOOP-9624.patch against trunk revision .

{color:red}-1 patch{color}. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2629//console

This message is automatically generated.

> TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[jira] [Updated] (HADOOP-9632) TestShellCommandFencer will fail if there is a 'host' machine in the network
[ https://issues.apache.org/jira/browse/HADOOP-9632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron T. Myers updated HADOOP-9632:
---
    Status: Patch Available (was: Open)

Marking PA for Chuan.

> TestShellCommandFencer will fail if there is a 'host' machine in the network
[jira] [Commented] (HADOOP-9604) Wrong Javadoc of FSDataOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680144#comment-13680144 ]

Hudson commented on HADOOP-9604:
--

Integrated in Hadoop-trunk-Commit #3893 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3893/])
HADOOP-9604. Javadoc of FSDataOutputStream is slightly inaccurate. Contributed by Jingguo Yao. (Revision 1491668)

Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491668
Files :
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataOutputStream.java

> Wrong Javadoc of FSDataOutputStream
> ---
>
> Key: HADOOP-9604
> URL: https://issues.apache.org/jira/browse/HADOOP-9604
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs
> Affects Versions: 1.0.4
> Reporter: Jingguo Yao
> Assignee: Jingguo Yao
> Priority: Minor
> Fix For: 2.1.0-beta
>
> Attachments: HADOOP-9604.patch
>
> Original Estimate: 20m
> Remaining Estimate: 20m
>
> The following Javadoc of FSDataOutputStream is wrong.
> {quote}
> buffers output through a \{@link BufferedOutputStream\} and creates a checksum file.
> {quote}
> FSDataOutputStream has nothing to do with a BufferedOutputStream. Nor does it create a checksum file.
[jira] [Commented] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680138#comment-13680138 ] Hadoop QA commented on HADOOP-9515: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12587164/HADOOP-9515.2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 5 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-nfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2628//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/2628//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-nfs.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2628//console This message is automatically generated. 
> Add general interface for NFS and Mount > --- > > Key: HADOOP-9515 > URL: https://issues.apache.org/jira/browse/HADOOP-9515 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Brandon Li >Assignee: Brandon Li > Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch > > > This is the general interface implementation for the NFS and Mount protocols, > e.g., some protocol-related data structures. It doesn't include the > file-system-specific implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[ https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HADOOP-9624: --- Assignee: Xi Fang Status: Patch Available (was: Open) Marking PA for Xi. > TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root > path has "X" in its name > --- > > Key: HADOOP-9624 > URL: https://issues.apache.org/jira/browse/HADOOP-9624 > Project: Hadoop Common > Issue Type: Test > Components: test >Affects Versions: 1-win > Environment: Windows >Reporter: Xi Fang >Assignee: Xi Fang >Priority: Minor > Labels: test > Attachments: HADOOP-9624.patch > > > TestFSMainOperationsLocalFileSystem extends class FSMainOperationsBaseTest. > PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks if a path has "x" > or "X" in its name. > {code} > final private static PathFilter TEST_X_FILTER = new PathFilter() { > public boolean accept(Path file) { > if(file.getName().contains("x") || file.toString().contains("X")) > return true; > else > return false; > } > }; > {code} > Some of the test cases construct a path by combining path "TEST_ROOT_DIR" > with a customized partial path. > The problem is that once the enlistment root path has "X" in its name, > "TEST_ROOT_DIR" will also have "X" in its name. The path check will pass even > if the customized partial path doesn't have "X". However, for this case the > path filter is supposed to reject this path. > An easy fix is to change "file.toString().contains("X")" to > "file.getName().contains("X")". Note that org.apache.hadoop.fs.Path.getName() > only returns the final component of this path. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
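The one-line fix proposed in HADOOP-9624 can be sketched without the Hadoop dependencies. The string-based helpers below are hypothetical stand-ins for PathFilter.accept(), assuming '/'-separated paths; the final-component extraction mimics what Path.getName() returns:

```java
public class TestXFilterSketch {
    // Fixed variant: look only at the final path component, which is what
    // org.apache.hadoop.fs.Path.getName() returns.
    static boolean acceptFixed(String fullPath) {
        String name = fullPath.substring(fullPath.lastIndexOf('/') + 1);
        return name.contains("x") || name.contains("X");
    }

    // Buggy variant: the "X" check runs against the whole path string, so an
    // enlistment root containing "X" makes every path under it match.
    static boolean acceptBuggy(String fullPath) {
        String name = fullPath.substring(fullPath.lastIndexOf('/') + 1);
        return name.contains("x") || fullPath.contains("X");
    }

    public static void main(String[] args) {
        String p = "/enlistment-X/test-root/aaa"; // "X" only in the root dir
        System.out.println(acceptBuggy(p)); // wrongly accepted
        System.out.println(acceptFixed(p)); // correctly rejected
    }
}
```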
[jira] [Commented] (HADOOP-9621) Document/analyze current Hadoop security model
[ https://issues.apache.org/jira/browse/HADOOP-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680134#comment-13680134 ] Aaron T. Myers commented on HADOOP-9621: bq. I added some general details about Tokens to the top of the doc. Sorry if I'm missing something, but where's this doc? Could you perhaps provide a link? > Document/analyze current Hadoop security model > -- > > Key: HADOOP-9621 > URL: https://issues.apache.org/jira/browse/HADOOP-9621 > Project: Hadoop Common > Issue Type: Task > Components: security >Reporter: Brian Swan >Priority: Minor > Labels: documentation > Original Estimate: 336h > Remaining Estimate: 336h > > In light of the proposed changes to Hadoop security in Hadoop-9533 and > Hadoop-9392, having a common, detailed understanding (in the form of a > document) of the benefits/drawbacks of the current security model and how it > works would be useful. The document should address all security principals, > their authentication mechanisms, and handling of shared secrets through the > lens of the following principles: Minimize attack surface area, Establish > secure defaults, Principle of Least privilege, Principle of Defense in depth, > Fail securely, Don’t trust services, Separation of duties, Avoid security by > obscurity, Keep security simple, Fix security issues correctly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680129#comment-13680129 ] Hadoop QA commented on HADOOP-9630: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12587176/HADOOP-9630.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2627//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2627//console This message is automatically generated. > Remove IpcSerializationType > --- > > Key: HADOOP-9630 > URL: https://issues.apache.org/jira/browse/HADOOP-9630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Luke Lu >Assignee: Junping Du > Labels: rpc > Attachments: HADOOP-9630.patch > > > IpcSerializationType is assumed to be protobuf for the foreseeable future. Not > to be confused with RpcKind, which still supports different RpcEngines.
Let's > remove the dead code, which can be confusing to maintain. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9637) TestAggregatedLogFormat fails on Windows
Chuan Liu created HADOOP-9637: - Summary: TestAggregatedLogFormat fails on Windows Key: HADOOP-9637 URL: https://issues.apache.org/jira/browse/HADOOP-9637 Project: Hadoop Common Issue Type: Bug Affects Versions: 3.0.0, 2.1.0-beta Reporter: Chuan Liu Assignee: Chuan Liu The TestAggregatedLogFormat.testContainerLogsFileAccess test case fails on Windows. The test case tries to simulate a situation where the first log file is owned by a different user (probably via a symlink) and the second one by the user itself. In this situation, the attempt to aggregate the logs should fail with the error message "Owner ... for path ... did not match expected owner ...". The check on the file owner happens in the {{AggregatedLogFormat.write()}} method. The method calls {{SecureIOUtils.openForRead()}} to read the log files before writing out to the OutputStream. {{SecureIOUtils.openForRead()}} uses {{NativeIO.POSIX.getFstat()}} to get the file owner and group. We don't have a {{NativeIO.POSIX.getFstat()}} implementation on Windows; thus, the failure. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
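For context on the owner check behind HADOOP-9637, here is a portable sketch of the kind of comparison SecureIOUtils performs. This uses java.nio's path-based owner lookup as a stand-in; the real code stats the already-open file descriptor via NativeIO (race-free), and all names below are hypothetical:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class OwnerCheckSketch {
    // Hypothetical helper: looks up the owning user of a file. Illustration
    // only; it is not the fstat-on-descriptor check SecureIOUtils uses.
    static String ownerOf(Path file) {
        try {
            return Files.getOwner(file).getName();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static boolean ownerMatches(Path file, String expectedOwner) {
        return ownerOf(file).equals(expectedOwner);
    }

    public static void main(String[] args) {
        Path log = Paths.get(".");
        if (!ownerMatches(log, "someOtherUser")) {
            // Mirrors the failure message the test expects.
            System.out.println("Owner '" + ownerOf(log) + "' for path " + log
                + " did not match expected owner 'someOtherUser'");
        }
    }
}
```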
[jira] [Updated] (HADOOP-9604) Wrong Javadoc of FSDataOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron T. Myers updated HADOOP-9604: --- Resolution: Fixed Fix Version/s: 2.1.0-beta Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) I've just committed this to trunk and branch-2. Thanks a lot for the contribution, Jingguo. > Wrong Javadoc of FSDataOutputStream > --- > > Key: HADOOP-9604 > URL: https://issues.apache.org/jira/browse/HADOOP-9604 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 1.0.4 >Reporter: Jingguo Yao >Assignee: Jingguo Yao >Priority: Minor > Fix For: 2.1.0-beta > > Attachments: HADOOP-9604.patch > > Original Estimate: 20m > Remaining Estimate: 20m > > The following Javadoc of FSDataOutputStream is wrong. > {quote} > buffers output through a \{@link BufferedOutputStream\} and creates a > checksum file. > {quote} > FSDataOutputStream has nothing to do with a BufferedOutputStream. Nor does it > create a checksum file. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9604) Wrong Javadoc of FSDataOutputStream
[ https://issues.apache.org/jira/browse/HADOOP-9604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680122#comment-13680122 ] Aaron T. Myers commented on HADOOP-9604: +1, the patch looks good to me. I'm going to commit this momentarily. > Wrong Javadoc of FSDataOutputStream > --- > > Key: HADOOP-9604 > URL: https://issues.apache.org/jira/browse/HADOOP-9604 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 1.0.4 >Reporter: Jingguo Yao >Assignee: Jingguo Yao >Priority: Minor > Attachments: HADOOP-9604.patch > > Original Estimate: 20m > Remaining Estimate: 20m > > The following Javadoc of FSDataOutputStream is wrong. > {quote} > buffers output through a \{@link BufferedOutputStream\} and creates a > checksum file. > {quote} > FSDataOutputStream has nothing to do with a BufferedOutputStream. Nor does it > create a checksum file. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9589) Extra master key is created when AbstractDelegationTokenSecretManager is started
[ https://issues.apache.org/jira/browse/HADOOP-9589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680097#comment-13680097 ] Jian He commented on HADOOP-9589: - Hey Aaron, I think this does no major harm, but we end up with a redundant key when the secretManager starts. > Extra master key is created when AbstractDelegationTokenSecretManager is > started > > > Key: HADOOP-9589 > URL: https://issues.apache.org/jira/browse/HADOOP-9589 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jian He >Assignee: Jian He > > When AbstractDelegationTokenSecretManager starts, > AbstractDelegationTokenSecretManager.startThreads().updateCurrentKey() > creates the first master key. Immediately after that, the ExpiredTokenRemover > thread is started, and it will create the second master key by calling > rollMasterKey on its first loop. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-9630: --- Labels: rpc (was: ) Target Version/s: 2.1.0-beta Status: Patch Available (was: Open) > Remove IpcSerializationType > --- > > Key: HADOOP-9630 > URL: https://issues.apache.org/jira/browse/HADOOP-9630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Luke Lu >Assignee: Junping Du > Labels: rpc > Attachments: HADOOP-9630.patch > > > IpcSerializationType is assumed to be protobuf for the foreseeable future. Not > to be confused with RpcKind, which still supports different RpcEngines. Let's > remove the dead code, which can be confusing to maintain. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HADOOP-9630: --- Attachment: HADOOP-9630.patch Attached a patch to clean up IpcSerializationType. > Remove IpcSerializationType > --- > > Key: HADOOP-9630 > URL: https://issues.apache.org/jira/browse/HADOOP-9630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Luke Lu >Assignee: Junping Du > Attachments: HADOOP-9630.patch > > > IpcSerializationType is assumed to be protobuf for the foreseeable future. Not > to be confused with RpcKind, which still supports different RpcEngines. Let's > remove the dead code, which can be confusing to maintain. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680067#comment-13680067 ] Luke Lu commented on HADOOP-9421: - Trying to see why you're not seeing what I'm seeing: perhaps it's not obvious that SaslClient#hasInitialResponse is always false for new connection with token (Digest-MD5 at least, cf. [rfc-2831|https://tools.ietf.org/html/draft-ietf-sasl-rfc2831bis-12#section-2.1])? > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HADOOP-9515: --- Affects Version/s: 3.0.0 > Add general interface for NFS and Mount > --- > > Key: HADOOP-9515 > URL: https://issues.apache.org/jira/browse/HADOOP-9515 > Project: Hadoop Common > Issue Type: New Feature >Affects Versions: 3.0.0 >Reporter: Brandon Li >Assignee: Brandon Li > Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch > > > This is the general interface implementation for the NFS and Mount protocols, > e.g., some protocol-related data structures. It doesn't include the > file-system-specific implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HADOOP-9515: --- Attachment: HADOOP-9515.2.patch > Add general interface for NFS and Mount > --- > > Key: HADOOP-9515 > URL: https://issues.apache.org/jira/browse/HADOOP-9515 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Brandon Li >Assignee: Brandon Li > Attachments: HADOOP-9515.1.patch, HADOOP-9515.2.patch > > > This is the general interface implementation for the NFS and Mount protocols, > e.g., some protocol-related data structures. It doesn't include the > file-system-specific implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9589) Extra master key is created when AbstractDelegationTokenSecretManager is started
[ https://issues.apache.org/jira/browse/HADOOP-9589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680046#comment-13680046 ] Aaron T. Myers commented on HADOOP-9589: Hey Jian, can you summarize here what effect this has, if any? Thanks a lot. > Extra master key is created when AbstractDelegationTokenSecretManager is > started > > > Key: HADOOP-9589 > URL: https://issues.apache.org/jira/browse/HADOOP-9589 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jian He >Assignee: Jian He > > When AbstractDelegationTokenSecretManager starts, > AbstractDelegationTokenSecretManager.startThreads().updateCurrentKey() > creates the first master key. Immediately after that, the ExpiredTokenRemover > thread is started, and it will create the second master key by calling > rollMasterKey on its first loop. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
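The startup sequence described in HADOOP-9589 can be modeled as a toy, assuming the ordering in the report; the method names echo the description but are not the real AbstractDelegationTokenSecretManager API:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class KeyRollSketch {
    // Toy model only; not the real secret-manager implementation.
    final AtomicInteger masterKeyCount = new AtomicInteger();

    void updateCurrentKey() { masterKeyCount.incrementAndGet(); }

    void rollMasterKey() { masterKeyCount.incrementAndGet(); }

    void startThreads() {
        updateCurrentKey(); // startThreads() creates the first master key
        rollMasterKey();    // the remover thread's first loop then rolls
                            // immediately, creating a second, redundant key
    }

    public static void main(String[] args) {
        KeyRollSketch m = new KeyRollSketch();
        m.startThreads();
        System.out.println(m.masterKeyCount.get()); // two keys where one suffices
    }
}
```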
[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options
[ https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680041#comment-13680041 ] Jing Zhao commented on HADOOP-8934: --- Hi Jonathan, The patch looks pretty good to me. The only minor nit is in the description of LS: {code} public static final String DESCRIPTION = "List the contents that match the specified file pattern. If\n" + "path is not specified, the contents of /user/\n" + "will be listed. For a directory a list of its direct children\n" + "is returned (unless -" + OPTION_DIRECTORY + " option is specified). {code} Looks like if the path is not specified, in trunk we cannot get the content of /user/? And if OPTION_RECURSIVE is specified, we will get more than _direct_ children. > Shell command ls should include sort options > > > Key: HADOOP-8934 > URL: https://issues.apache.org/jira/browse/HADOOP-8934 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Reporter: Jonathan Allen >Assignee: Jonathan Allen >Priority: Minor > Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, > HADOOP-8934.patch, HADOOP-8934.patch > > > The shell command ls should include options to sort the output similar to the > unix ls command. The following options seem appropriate: > -t : sort by modification time > -S : sort by file size > -r : reverse the sort order > -u : use access time rather than modification time for sort and display -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679985#comment-13679985 ] Luke Lu commented on HADOOP-9421: - bq. The client can't generate an initial SASL response token since it hasn't instantiated the SASL client - leading to an additional roundtrip for SASL As long as the mech is TOKEN (or better CHALLENGE_RESPONSE), why can't you instantiate the sasl client then (after receiving server challenge) with the info from server challenge? Why would an additional roundtrip be necessary unless the mech is not supported by server? > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679981#comment-13679981 ] Arpit Gupta commented on HADOOP-9625: - +1 > HADOOP_OPTS not picked up by hadoop command > --- > > Key: HADOOP-9625 > URL: https://issues.apache.org/jira/browse/HADOOP-9625 > Project: Hadoop Common > Issue Type: Improvement > Components: bin, conf >Affects Versions: 2.0.3-alpha, 2.0.4-alpha >Reporter: Paul Han >Priority: Minor > Fix For: 2.0.5-alpha > > Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, > HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > When migrating from hadoop 1 to hadoop 2, one thing caused our users grief > are those non-backward-compatible changes. This JIRA is to fix one of those > changes: > HADOOP_OPTS is not picked up any more by hadoop command > With Hadoop 1, HADOOP_OPTS will be picked up by hadoop command. With Hadoop > 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop_env.sh : > export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true" > We should fix this. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679966#comment-13679966 ] Daryn Sharp commented on HADOOP-9421: - I'm open to improvements, but I'm having a hard time reconciling how to add this capability. The SASL client and server must be instantiated with the exact same mechanism/protocol/serverId or the SASL server throws an exception. The initial connect will require a way for the client to solicit the NEGOTIATE otherwise it has no idea what to do. This is what incurs a roundtrip. Here's what it would take for a reconnect: # The client sends an INITIATE using the cached SaslAuth, but w/o instantiating its SASL client until it receives the CHALLENGE response # The client can't generate an initial SASL response token since it hasn't instantiated the SASL client - leading to an additional roundtrip for SASL # The server's first CHALLENGE response must set the SaslAuth protobuf field which it currently doesn't, but not a big deal # Client now instantiates SASL client based on SaslAuth in CHALLENGE and processes the token > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9613) Updated jersey pom dependencies
[ https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679942#comment-13679942 ] Timothy St. Clair commented on HADOOP-9613: --- Hi folks, This patch is fairly benign, any chance of getting a review? Cheers, Tim > Updated jersey pom dependencies > --- > > Key: HADOOP-9613 > URL: https://issues.apache.org/jira/browse/HADOOP-9613 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0, 2.0.5-alpha >Reporter: Timothy St. Clair > Labels: maven > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9613.patch > > > Update pom.xml dependencies exposed when running a mvn-rpmbuild against > system dependencies on Fedora 18. > The existing version is 1.8 which is quite old. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9623) Update jets3t dependency
[ https://issues.apache.org/jira/browse/HADOOP-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679939#comment-13679939 ] Timothy St. Clair commented on HADOOP-9623: --- Hi [~ste...@apache.org], do you know of anyone else who could review? > Update jets3t dependency > > > Key: HADOOP-9623 > URL: https://issues.apache.org/jira/browse/HADOOP-9623 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Attachments: HADOOP-9623.patch > > > Current version referenced in pom is 0.6.1 (Aug 2008), updating to 0.9.0 > enables mvn-rpmbuild to build against system dependencies. > http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679908#comment-13679908 ] Luke Lu commented on HADOOP-9421: - bq. By saying you can live with it, is that a tacit +1 or do you want further changes to this patch? Based on your response, I'm not completely sure that you fully considered my proposal (which is more evolution friendly and only requires a small change to your patch). But I'm fine with a scapegoat for RPC v9 :) > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8934) Shell command ls should include sort options
[ https://issues.apache.org/jira/browse/HADOOP-8934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679876#comment-13679876 ] Jonathan Allen commented on HADOOP-8934: Is there anything stopping this from being committed? > Shell command ls should include sort options > > > Key: HADOOP-8934 > URL: https://issues.apache.org/jira/browse/HADOOP-8934 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Reporter: Jonathan Allen >Assignee: Jonathan Allen >Priority: Minor > Attachments: HADOOP-8934.patch, HADOOP-8934.patch, HADOOP-8934.patch, > HADOOP-8934.patch, HADOOP-8934.patch > > > The shell command ls should include options to sort the output similar to the > unix ls command. The following options seem appropriate: > -t : sort by modification time > -S : sort by file size > -r : reverse the sort order > -u : use access time rather than modification time for sort and display -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679870#comment-13679870 ] Luke Lu commented on HADOOP-9421: - bq. This optimizes a reconnect, but the common case of initial connect will now take an additional round trip penalty by sending a second message to request the NEGOTIATE. No, for common case, server can detect that client selected the correct mechanism (note RpcSaslProto is extensible to contain the appropriate metadata needed for server to verify) and respond with a normal sasl challenge (again extensible to contain server principal) instead of negotiate. So my proposal actually works with majority of failover case (the mechanism doesn't change besides server name etc.) with no additional round trip as well. > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679842#comment-13679842 ] Daryn Sharp commented on HADOOP-9421: - I believe you are proposing the client always sends two messages upon connect: the connect header and a SASL INITIATE. This optimizes a reconnect, but the common case of initial connect will now take an additional round trip penalty by sending a second message to request the NEGOTIATE. To allow for IP failover, I guess that means the server response to an invalid INITIATE is a NEGOTIATE instead of returning authentication failed and closing the connection? Presumably the second bad INITIATE will continue to return auth failed? I'm unclear what the next step is for this patch. I'm not too happy about evolution via AuthProtocol either but it's the only way I can think of to avoid penalizing the initial connect with another roundtrip. By saying you can live with it, is that a tacit +1 or do you want further changes to this patch? > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421.patch, HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679812#comment-13679812 ] Chu Tong commented on HADOOP-9487: -- Hi Steve, can you please take a look at this change? Thanks > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch, HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9630) Remove IpcSerializationType
[ https://issues.apache.org/jira/browse/HADOOP-9630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Lu updated HADOOP-9630: Assignee: Junping Du > Remove IpcSerializationType > --- > > Key: HADOOP-9630 > URL: https://issues.apache.org/jira/browse/HADOOP-9630 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Luke Lu >Assignee: Junping Du > > IpcSerializationType is assumed to be protobuf for the foreseeable future. Not > to be confused with RpcKind, which still supports different RpcEngines. Let's > remove the dead code, which can be confusing to maintain.
[jira] [Commented] (HADOOP-9581) hadoop --config non-existent directory should result in error
[ https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679736#comment-13679736 ] Hudson commented on HADOOP-9581: Integrated in Hadoop-trunk-Commit #3890 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/3890/]) HADOOP-9581. hadoop --config non-existent directory should result in error. Contributed by Ashwin Shankar (Revision 1491548) Result = SUCCESS jlowe : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1491548 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh > hadoop --config non-existent directory should result in error > -- > > Key: HADOOP-9581 > URL: https://issues.apache.org/jira/browse/HADOOP-9581 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0, 0.23.7, 2.0.4-alpha >Reporter: Ashwin Shankar >Assignee: Ashwin Shankar > Fix For: 2.1.0-beta, 0.23.9 > > Attachments: HADOOP-9581.txt > > > Courtesy : [~cwchung] > {quote}Providing a non-existent config directory should result in error. > $ hadoop dfs -ls / : shows Hadoop DFS directory > $ hadoop --config bad_config_dir dfs -ls : successful, showing Linux directory > {quote} -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679738#comment-13679738 ] Luke Lu commented on HADOOP-9421: - bq. Shorting out the NEGOTIATE for a re-connect becomes a bit complicated. The connection header doesn't contain a length like other RPC packets You can always send the length + RpcSaslProto after the fixed connection header. The server can then send the appropriate challenge or negotiate accordingly. It seems more straightforward than the alternatives here. bq. the only way to signal the server is another authProtocol to not send a NEGOTIATE, It's not the only way (see above). But it could work, which is the saving grace of AuthProtocol :) though SASL2 or HSASL (pronounced as hassle) and its variants are kinda ugly. bq. The client needs the server's NEGOTIATE to correctly instantiate its SASL client. This negates the ability for the client to cache values for an immediate INITIATE. That's why it's called "cached" initiation; the server can always send NEGOTIATE as it sees fit after failover. This is also why I prefer always sending RpcSaslProto first, so the server can decide what to respond in a straightforward way. Failover handling is not a common workload; the goal of client cached initiation is to reduce server-side processing in common cases like container/task launching, when NN/RM are not failing over left and right. Anyway, though I'm not too happy with the evolution-via-AuthProtocol approach, I think I can live with it.
[jira] [Updated] (HADOOP-9581) hadoop --config non-existent directory should result in error
[ https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-9581: --- Resolution: Fixed Fix Version/s: 0.23.9 2.1.0-beta Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks, Ashwin! I committed this to trunk, branch-2, branch-2.1-beta, and branch-0.23
[jira] [Updated] (HADOOP-9581) hadoop --config non-existent directory should result in error
[ https://issues.apache.org/jira/browse/HADOOP-9581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe updated HADOOP-9581: --- Assignee: Ashwin Shankar Summary: hadoop --config non-existent directory should result in error (was: Hadoop --config non-existent directory should result in error ) +1, lgtm.
[jira] [Commented] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679702#comment-13679702 ] Hadoop QA commented on HADOOP-9515: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12587108/HADOOP-9515.1.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 16 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-nfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2626//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/2626//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-nfs.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2626//console This message is automatically generated. > Add general interface for NFS and Mount > --- > > Key: HADOOP-9515 > URL: https://issues.apache.org/jira/browse/HADOOP-9515 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Brandon Li >Assignee: Brandon Li > Attachments: HADOOP-9515.1.patch > > > These is the general interface implementation for NFS and Mount protocol, > e.g., some protocol related data structures and etc. 
It doesn't include the file system specific implementations.
[jira] [Commented] (HADOOP-9635) Potential Stack Overflow in DomainSocket.c
[ https://issues.apache.org/jira/browse/HADOOP-9635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679695#comment-13679695 ] Colin Patrick McCabe commented on HADOOP-9635: -- Thanks for finding this. bq. jthr = newIOException(env, "path is too long! We expected a path no longer than %zd UTF-8 bytes.", sizeof(path)); You should also update this {{IOException}} to talk about sizeof(path) - 1 bytes. With regard to {{GetStringUTFRegion}}, I wasn't able to find anything in the JNI docs saying that it didn't NULL-terminate. On the other hand, there was nothing in there saying it did. I looked in my copy of the OpenJDK 6 source code, and it was NULL-terminating. However, given the ambiguity in the docs, we should do this. > Potential Stack Overflow in DomainSocket.c > -- > > Key: HADOOP-9635 > URL: https://issues.apache.org/jira/browse/HADOOP-9635 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.3.0 > Environment: OSX 10.8 >Reporter: V. Karthik Kumar > Attachments: DomainSocket.diff > > > When I was running on OSX, the DataNode was segfaulting. On investigation, it > was tracked down to this code. A potential stack overflow was also > identified. > {code} >utfLength = (*env)->GetStringUTFLength(env, jstr); >if (utfLength > sizeof(path)) { > jthr = newIOException(env, "path is too long! We expected a path " > "no longer than %zd UTF-8 bytes.", sizeof(path)); > goto done; >} > // GetStringUTFRegion does not pad with NUL >(*env)->GetStringUTFRegion(env, jstr, 0, utfLength, path); > ... > //strtok_r can set rest pointer to NULL when no tokens found. > //Causes JVM to crash in rest[0] >for (check[0] = '/', check[1] = '\0', rest = path, token = ""; >token && rest[0]; > token = strtok_r(rest, "/", &rest)) { > {code} -- This message is automatically generated by JIRA. 
[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HADOOP-9515: --- Attachment: HADOOP-9515.1.patch
[jira] [Updated] (HADOOP-9515) Add general interface for NFS and Mount
[ https://issues.apache.org/jira/browse/HADOOP-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Li updated HADOOP-9515: --- Status: Patch Available (was: Open)
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679663#comment-13679663 ] Daryn Sharp commented on HADOOP-9421: - Shorting out the NEGOTIATE for a re-connect becomes a bit complicated. The connection header doesn't contain a length like other RPC packets, so about the only way to signal the server not to send a NEGOTIATE, but to expect a subsequent INITIATE, is another authProtocol. As discussed earlier, I'd prefer to defer the reconnect optimization to a follow-up jira. BTW, I'm currently working on IP failover. We're blocked on HA deployment because we can't manage the logistics of updating confs and restarting every cluster & services like oozie and hdfsproxy, etc. when one cluster is HA enabled or its HA config changes. IP failover is the answer, but that's blocked too because the service principal changes when failover occurs. The client needs the server's NEGOTIATE to correctly instantiate its SASL client. This negates the ability for the client to cache values for an immediate INITIATE. The IP failover work is based upon this jira. bq. Though DIGEST is not exactly a precise word here, TOKEN is, IMO, even more nebulous. How about CHALLENGE_RESPONSE or simply CR? I'd prefer the rename to be in a separate JIRA as well, as it doesn't really affect the wire protocol. Actually, it does affect the wire because that is a value passed over the wire. I.e. "TOKEN" via "DIGEST-MD5", or "TOKEN" via "SCRAM", etc.
[jira] [Commented] (HADOOP-9617) HA HDFS client is too strict with validating URI authorities
[ https://issues.apache.org/jira/browse/HADOOP-9617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13679633#comment-13679633 ] Aaron T. Myers commented on HADOOP-9617: The new javac warning is because I'm using a Sun-proprietary API in the test. The test uses an Assume check so that this test will not be run on a non-Sun JDK. Daryn, Todd - does this look OK to you? > HA HDFS client is too strict with validating URI authorities > > > Key: HADOOP-9617 > URL: https://issues.apache.org/jira/browse/HADOOP-9617 > Project: Hadoop Common > Issue Type: Bug > Components: fs, ha >Affects Versions: 2.0.5-alpha >Reporter: Aaron T. Myers >Assignee: Aaron T. Myers > Attachments: HADOOP-9617.patch, HADOOP-9617.patch > > > HADOOP-9150 changed the way FS URIs are handled to prevent attempted DNS > resolution of logical URIs. This has the side effect of changing the way > Paths are verified when passed to a FileSystem instance created with an > authority that differs from the authority of the Path. Previous to > HADOOP-9150, a default port would be added to either authority in the event > that either URI did not have a port. Post HADOOP-9150, no default port is > added. 
This means that a FileSystem instance created using the URI > "hdfs://ha-logical-uri:8020" will no longer process paths containing just the > authority "hdfs://ha-logical-uri", and will throw an error like the following: > {noformat} > java.lang.IllegalArgumentException: Wrong FS: > hdfs://ns1/user/hive/warehouse/sample_07/sample_07.csv, expected: > hdfs://ns1:8020 > at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:625) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:173) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:249) > at > org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:82) > {noformat} > Though this is not necessarily incorrect behavior, it is a > backward-incompatible change that at least breaks certain clients' ability to > connect to an HA HDFS cluster. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9636) UNIX like sort options for ls shell command
[ https://issues.apache.org/jira/browse/HADOOP-9636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Dhussa updated HADOOP-9636: - Resolution: Duplicate Status: Resolved (was: Patch Available) Missed the patch already provided. https://issues.apache.org/jira/browse/HADOOP-8934 Thanks Jonathan > UNIX like sort options for ls shell command > --- > > Key: HADOOP-9636 > URL: https://issues.apache.org/jira/browse/HADOOP-9636 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 3.0.0 >Reporter: Varun Dhussa >Priority: Minor > Attachments: HADOOP-9636-001.patch > > > Add support for unix ls like sort options in fs -ls: > -t : sort by modification time > -S : sort by file size > -r : reverse the sort order > -u : sort by access time