[jira] [Commented] (HADOOP-9621) Document/analyze current Hadoop security model
[ https://issues.apache.org/jira/browse/HADOOP-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677837#comment-13677837 ] Kyle Leckie commented on HADOOP-9621: - I think this is needed. I have not yet seen an analysis wrt the common security principles. > Document/analyze current Hadoop security model > -- > > Key: HADOOP-9621 > URL: https://issues.apache.org/jira/browse/HADOOP-9621 > Project: Hadoop Common > Issue Type: Task > Components: security >Reporter: Brian Swan >Priority: Minor > Labels: documentation > Original Estimate: 336h > Remaining Estimate: 336h > > In light of the proposed changes to Hadoop security in Hadoop-9533 and > Hadoop-9392, having a common, detailed understanding (in the form of a > document) of the benefits/drawbacks of the current security model and how it > works would be useful. The document should address all security principals, > their authentication mechanisms, and handling of shared secrets through the > lens of the following principles: Minimize attack surface area, Establish > secure defaults, Principle of Least privilege, Principle of Defense in depth, > Fail securely, Don’t trust services, Separation of duties, Avoid security by > obscurity, Keep security simple, Fix security issues correctly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9533) Centralized Hadoop SSO/Token Server
[ https://issues.apache.org/jira/browse/HADOOP-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677835#comment-13677835 ] Kyle Leckie commented on HADOOP-9533:
-
Hi Daryn, I have assumed that the issues with TLS will be:
1) Key management
2) Possible performance degradation
Given the intensive use of and reliance on the JDK's implementations of TLS, I don't expect any known unpatched issues. Older versions of the protocol have weaknesses, but we can enforce TLS 1.1+.
-- Kyle
> Centralized Hadoop SSO/Token Server
> ---
>
> Key: HADOOP-9533
> URL: https://issues.apache.org/jira/browse/HADOOP-9533
> Project: Hadoop Common
> Issue Type: New Feature
> Components: security
> Reporter: Larry McCay
> Attachments: HSSO-Interaction-Overview-rev-1.docx, HSSO-Interaction-Overview-rev-1.pdf
>
>
> This is an umbrella Jira filing to oversee a set of proposals for introducing a new master service for Hadoop Single Sign On (HSSO).
> There is an increasing need for pluggable authentication providers that authenticate both users and services, as well as validate tokens, in order to federate identities authenticated by trusted IDPs. These IDPs may be deployed within the enterprise, or they may be third-party IDPs external to the enterprise.
> These needs speak to a specific pain point: a narrow integration path into the enterprise identity infrastructure. Kerberos is a fine solution for those that already have it in place or are willing to adopt its use, but there remains a class of user that finds this unacceptable and needs to integrate with a wider variety of identity management solutions.
> Another specific pain point is that of rolling and distributing keys. A related and integral part of the HSSO server is a library called the Credential Management Framework (CMF), which will be a common library for easing the management of secrets, keys and credentials.
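Enforcing a protocol floor like the one Kyle mentions is straightforward with stock JSSE; a minimal sketch (the helper name and protocol list are illustrative, not Hadoop API):

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsPolicy {
    // Keep only TLS 1.1 and newer from whatever this JDK supports.
    static SSLEngine restrictToModernTls(SSLContext ctx) {
        SSLEngine engine = ctx.createSSLEngine();
        String[] modern = Arrays.stream(engine.getSupportedProtocols())
                .filter(p -> p.equals("TLSv1.1") || p.equals("TLSv1.2")
                          || p.equals("TLSv1.3"))
                .toArray(String[]::new);
        engine.setEnabledProtocols(modern);
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = restrictToModernTls(SSLContext.getDefault());
        // Older protocols (SSLv3, TLSv1.0) are no longer negotiable.
        System.out.println(Arrays.toString(engine.getEnabledProtocols()));
    }
}
```

The same `setEnabledProtocols` call applies to an `SSLSocket` or an `SSLServerSocket`, so the policy can be enforced on either end of the connection.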
> Initially, the existing delegation, block access and job tokens will continue to be utilized. There may be some changes required to leverage a PKI-based signature facility rather than shared secrets. This is a means to simplify the solution for the pain point of distributing shared secrets.
> This project will primarily centralize the responsibility of authentication and federation into a single service that is trusted across the Hadoop cluster and optionally across multiple clusters. This greatly simplifies a number of things in the Hadoop ecosystem:
> 1. a single token format that is used across all of Hadoop regardless of authentication method
> 2. a single service to have pluggable providers instead of all services
> 3. a single token authority that would be trusted across the cluster/s and, through PKI encryption, be able to easily issue cryptographically verifiable tokens
> 4. automatic rolling of the token authority's keys and publishing of the public key for easy access by those parties that need to verify incoming tokens
> 5. use of PKI for signatures eliminates the need for securely sharing and distributing shared secrets
> In addition to serving as the internal Hadoop SSO service, this service will be leveraged by the Knox Gateway from the cluster perimeter in order to acquire the Hadoop cluster tokens. The same token mechanism that is used for internal services will be used to represent user identities. This provides for interesting scenarios such as SSO across Hadoop clusters within an enterprise and/or into the cloud.
> The HSSO service will be composed of three major components and capabilities:
> 1. Federating IDP – authenticates users/services and issues the common Hadoop token
> 2. Federating SP – validates the tokens of trusted external IDPs and issues the common Hadoop token
> 3. Token Authority – management of the common Hadoop tokens, including:
> a. Issuance
> b. Renewal
> c. Revocation
> As this is a meta Jira for tracking this overall effort, the details of the individual efforts will be submitted along with the child Jira filings. Hadoop-Common would seem to be the most appropriate home for such a service and its related common facilities. We will also leverage and extend existing common mechanisms as appropriate.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9527) TestLocalFSFileContextSymlink is broken on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677790#comment-13677790 ] Ivan Mitic commented on HADOOP-9527:
The latest patch looks much better, Arpit. Thanks for addressing the comments. I still want to reiterate comment #2 above, and I have one additional question:
- We have already seen that misuse of Paths or Strings can cause problems because of different path semantics (for example, API consumers passing paths with forward slashes on Windows). A File object makes it explicit that a local file system path is required. Let's please add a single public API {{String FileUtil.readLink(File)}} and work based off of that.
- FileUtil.java: I am not able to understand this part:
{code}
// Relative links on Windows must be resolvable at the time of
// creation. To ensure this we run the shell command in the directory
// of the link.
{code}
Why would the parent directory of a link be the appropriate working directory?
> TestLocalFSFileContextSymlink is broken on Windows
> --
>
> Key: HADOOP-9527
> URL: https://issues.apache.org/jira/browse/HADOOP-9527
> Project: Hadoop Common
> Issue Type: Bug
> Components: test
> Affects Versions: 2.3.0
> Reporter: Arpit Agarwal
> Assignee: Arpit Agarwal
> Attachments: HADOOP-9527.001.patch, HADOOP-9527.002.patch, HADOOP-9527.003.patch, HADOOP-9527.004.patch, HADOOP-9527.005.patch, HADOOP-9527.006.patch, HADOOP-9527.007.patch, HADOOP-9527.008.patch, RenameLink.java
>
>
> Multiple test cases are broken. I didn't look at each failure in detail.
> The main cause of the failures appears to be that RawLocalFS.readLink() does not work on Windows. We need "winutils readlink" to fix the test.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
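A minimal sketch of the suggested {{String readLink(File)}} shape, using NIO as a stand-in for the winutils-backed implementation under discussion (the empty-string-on-failure behavior is an assumption for this sketch, not a statement of what the patch does):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FileUtilSketch {
    // Hypothetical shape of the proposed single public API.
    // Taking a File (rather than a Path string) makes it explicit that
    // a local file system path is required.
    public static String readLink(File f) {
        try {
            return Files.readSymbolicLink(f.toPath()).toString();
        } catch (IOException | UnsupportedOperationException e) {
            return ""; // assumed failure behavior for this sketch
        }
    }

    public static void main(String[] args) throws IOException {
        // Create a temp file, symlink to it, and read the link back.
        Path target = Files.createTempFile("readlink-target", ".txt");
        Path link = target.resolveSibling("readlink-" + System.nanoTime());
        Files.createSymbolicLink(link, target);
        System.out.println(readLink(link.toFile()).equals(target.toString()));
        Files.delete(link);
        Files.delete(target);
    }
}
```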
[jira] [Commented] (HADOOP-9599) hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly
[ https://issues.apache.org/jira/browse/HADOOP-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677783#comment-13677783 ] Mostafa Elhemali commented on HADOOP-9599:
--
Thanks Ivan - I've updated the patch to correct this. I still have a "-1 tests included", but in this case it's a fundamental Windows platform fix, so I think it should be fine (any unit test on Windows should catch this).
> hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly
> -
>
> Key: HADOOP-9599
> URL: https://issues.apache.org/jira/browse/HADOOP-9599
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0
> Environment: Windows
> Reporter: Mostafa Elhemali
> Assignee: Mostafa Elhemali
> Attachments: HADOOP-9599.2.patch, HADOOP-9599.3.patch, HADOOP-9599.patch
>
>
> In Windows, hadoop-config.cmd uses the non-existent variable HADOOP_CORE_HOME when setting the JAVA_LIBRARY_PATH variable. It should use HADOOP_HOME.
> The net effect is that running e.g. "hdfs namenode" would error out with UnsatisfiedLinkError because it can't access hadoop.dll.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677706#comment-13677706 ] Hadoop QA commented on HADOOP-9421: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586612/HADOOP-9421.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to introduce 1 new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2616//testReport/ Findbugs warnings: https://builds.apache.org/job/PreCommit-HADOOP-Build/2616//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-common.html Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2616//console This message is automatically generated. 
> Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9421) Convert SASL to use ProtoBuf and add lengths for non-blocking processing
[ https://issues.apache.org/jira/browse/HADOOP-9421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-9421: Attachment: HADOOP-9421.patch The IpSerializationType change is orthogonal to this patch, so I'd like to defer to another jira if that's ok? I changed the authMethod in the connection header to specify the authentication protocol - in this case none, or sasl, to allow for future protocols. I think that's what you wanted? It also let me handle the funky switch to simple in a cleaner fashion. I did realize that having the mechanism/proto/serverId tuple is insufficient. Those are really just the fields required to create the SASL server or client, which is independent of what we're actually authenticating. Ex. It's not right to assume DIGEST-MD5 means token, when perhaps SCRAM would be a better replacement. So now I'm passing TOKEN/DIGEST-MD5/... so that some happy day in the future, we can configure the mechanisms for different auth types, and the auth types are a step closer to pluggable. > Convert SASL to use ProtoBuf and add lengths for non-blocking processing > > > Key: HADOOP-9421 > URL: https://issues.apache.org/jira/browse/HADOOP-9421 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 2.0.3-alpha >Reporter: Sanjay Radia >Assignee: Daryn Sharp > Attachments: HADOOP-9421.patch, HADOOP-9421.patch, HADOOP-9421.patch, > HADOOP-9421-v2-demo.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
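The TOKEN/DIGEST-MD5 pairing Daryn describes — an auth type decoupled from the SASL mechanism used to carry it — can be pictured as a simple value pair (the class and field names here are illustrative, not the patch's actual protobuf fields):

```java
import java.util.Objects;

// Illustrative pairing of what is being authenticated with how it is
// carried on the wire, e.g. TOKEN over DIGEST-MD5 today, TOKEN over
// SCRAM some day, without the auth type being hard-wired to a mechanism.
public class AuthChoice {
    final String authType;   // e.g. "TOKEN", "KERBEROS", "SIMPLE"
    final String mechanism;  // e.g. "DIGEST-MD5", "GSSAPI"

    AuthChoice(String authType, String mechanism) {
        this.authType = Objects.requireNonNull(authType);
        this.mechanism = Objects.requireNonNull(mechanism);
    }

    @Override
    public String toString() {
        return authType + "/" + mechanism;
    }

    public static void main(String[] args) {
        System.out.println(new AuthChoice("TOKEN", "DIGEST-MD5"));
        // Swapping the mechanism leaves the auth type untouched:
        System.out.println(new AuthChoice("TOKEN", "SCRAM-SHA-1"));
    }
}
```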
[jira] [Created] (HADOOP-9628) Setup a daily build job for branch-2.1.0-beta
Hitesh Shah created HADOOP-9628: --- Summary: Setup a daily build job for branch-2.1.0-beta Key: HADOOP-9628 URL: https://issues.apache.org/jira/browse/HADOOP-9628 Project: Hadoop Common Issue Type: Bug Reporter: Hitesh Shah -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception
[ https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677602#comment-13677602 ] Junping Du commented on HADOOP-9447:
Agreed. The information in the stack trace sounds adequate, so it may not be necessary to define an additional exception here. Thanks, Luke, for the comments. Steve, I saw that your YARN-530 patch (v19) already covered the code changes here and removed them in v20 and v21. Do you want to address it here or in YARN-530?
> Configuration to include name of failing file/resource when wrapping an XML parser exception
>
>
> Key: HADOOP-9447
> URL: https://issues.apache.org/jira/browse/HADOOP-9447
> Project: Hadoop Common
> Issue Type: Improvement
> Components: conf
> Affects Versions: 3.0.0
> Reporter: Steve Loughran
> Priority: Trivial
> Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, HADOOP-9447-v3.patch, HADOOP-9447-v4.patch
>
>
> Currently, when there is an error parsing an XML file, the name of the file at fault is logged, but not included in the (wrapped) XML exception. If that same file/resource name were included in the text of the wrapped exception, people would be able to find out which file was causing problems.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677575#comment-13677575 ] Sangjin Lee commented on HADOOP-9487: - Personally I am ok with the change in the default behavior. I think it needs to be acknowledged explicitly though. I would then suggest revising the title to better reflect what's being proposed. > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
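The "separate deprecation log" idea under discussion can be sketched with java.util.logging (Hadoop itself uses commons-logging/log4j, and the logger name here is illustrative): a dedicated logger lets users silence only the deprecation noise while other warnings keep flowing.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class DeprecationLog {
    // A dedicated logger name that users can suppress independently of
    // the rest of Configuration's logging.
    static final Logger DEPRECATION =
            Logger.getLogger("Configuration.deprecation");

    static void warnDeprecation(String oldKey, String newKey) {
        DEPRECATION.warning(oldKey + " is deprecated. Instead, use " + newKey);
    }

    public static void main(String[] args) {
        warnDeprecation("fs.default.name", "fs.defaultFS");
        // Turning off just this logger leaves every other logger intact:
        DEPRECATION.setLevel(Level.OFF);
        warnDeprecation("mapred.task.id", "mapreduce.task.attempt.id");
        System.out.println("done");
    }
}
```

With log4j the effect is the same: set the dedicated logger's level to OFF in log4j.properties without touching the parent logger's level.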
[jira] [Assigned] (HADOOP-9622) bzip2 codec can drop records when reading data in splits
[ https://issues.apache.org/jira/browse/HADOOP-9622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Lowe reassigned HADOOP-9622: -- Assignee: Jason Lowe This is a bit tricky to get right, as I ran into PIG-3352 while investigating it. However I think we can detect these CR/LF/CRLF boundary conditions properly, if the line reader that is building the record reads the data byte-by-byte and notices the exact character where the reported position goes past the end of the split. At that point it can decide which of the cases it is in and react properly. That would also solve similar problems that exist for custom, multi-byte delimiters that span block boundaries. Currently the line reader is buffered, and it would be a shame to have to give that up. I think we can still use buffered reads from the codec stream with one critical assumption: the codec will *never* return data spanning two blocks in a single read. I'm assuming that's the case today, since failure to do that would break the existing LineRecordReader->LineReader->SplittableCompressionCodec relationship today. LineReader is buffering data from the codec, but LineRecordReader is checking the codec's position after each record returned. > bzip2 codec can drop records when reading data in splits > > > Key: HADOOP-9622 > URL: https://issues.apache.org/jira/browse/HADOOP-9622 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 2.0.4-alpha, 0.23.8 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Critical > Attachments: HADOOP-9622-testcase.patch > > > Bzip2Codec.BZip2CompressionInputStream can cause records to be dropped when > reading them in splits based on where record delimiters occur relative to > compression block boundaries. > Thanks to [~knoguchi] for discovering this problem while working on PIG-3251. -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
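The byte-by-byte idea from the comment above can be sketched in plain Java (names are hypothetical; the real fix has to consult the position reported by SplittableCompressionCodec, which this toy stream merely imitates):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

public class SplitBoundaryProbe {
    // Read one byte at a time and return the exact byte at which the
    // stream's reported position first passes the end of the split.
    // At that byte the reader can decide which CR/LF/CRLF case it is in.
    static int firstBytePast(byte[] data, long splitEnd) throws IOException {
        ByteArrayInputStream in = new ByteArrayInputStream(data);
        long pos = 0; // stand-in for the codec's reported position
        int b;
        while ((b = != -1) {
            pos++;
            if (pos > splitEnd) {
                return b; // the character where we crossed the split
            }
        }
        return -1; // never crossed: the whole buffer is inside the split
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "aaaa\r\nbbbb\n".getBytes("UTF-8");
        // Split ends between CR and LF: the crossing byte is the LF (10),
        // so the reader knows it is mid-CRLF rather than at a record start.
        System.out.println(firstBytePast(data, 5));
    }
}
```

The same probe generalizes to custom multi-byte delimiters: whichever delimiter byte the position crosses on tells the reader how much of the delimiter belongs to the previous split.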
[jira] [Commented] (HADOOP-9447) Configuration to include name of failing file/resource when wrapping an XML parser exception
[ https://issues.apache.org/jira/browse/HADOOP-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677535#comment-13677535 ] Luke Lu commented on HADOOP-9447: - If you look at the stack trace, the generic runtime exception wrapper with proper message is adequate for troubleshooting. If you really want it to be useful try to put a line number in the message as well. > Configuration to include name of failing file/resource when wrapping an XML > parser exception > > > Key: HADOOP-9447 > URL: https://issues.apache.org/jira/browse/HADOOP-9447 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Priority: Trivial > Attachments: HADOOP-9447-2.patch, HADOOP-9447.patch, > HADOOP-9447-v3.patch, HADOOP-9447-v4.patch > > > Currently, when there is an error parsing an XML file, the name of the file > at fault is logged, but not included in the (wrapped) XML exception. If that > same file/resource name were included in the text of the wrapped exception, > people would be able to find out which file was causing problems -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs
[ https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677534#comment-13677534 ] Colin Patrick McCabe commented on HADOOP-9439: -- Thanks, very thorough review. I fixed the naming of the parameters to getGroupsForUser and logError. Although it didn't affect correctness, it certainly was very confusing. You're right that {{GetStaticMethodID}} and {{FindClass}} throw exceptions on failure. There is no need to throw another one (and it is probably actually harmful). Thanks for finding that. However, {{NewGlobalRef}} does not throw an exception, but merely returns {{NULL}} when you're out of memory. I will add a comment clarifying why we ignore exceptions in logError. I agree that we should probably just use the existing monitor lock from NativeIO.java. That way, I don't have to modify that code. I wasn't aware that there were other folks poking the user/group functions in Hadoop. Right now, it looks like {{NativeIO#getUserName}} is only called from tests calling {{NativeIO#POSIX#getFstat}}, but of course that may change in the future. Invalid groups are a sore point for {{ShellBasedUnixGroupsMapping}}. If any invalid groups are associated with a user, the "{{groups}}" program will fail with a non-zero return code, and no information is returned. For {{JniBasedUnixGroupsMapping}}, I would prefer to return the groups that were valid, rather than nothing at all. I suppose this is debatable, though. I can test creating such invalid groups. I understand that sometimes it's unnecessary, but I'd rather have {{DeleteLocalRef}} used for all allocations. For one thing, in libhdfs, it really *is* necessary everywhere (the JNI invocation API never automatically disposes of Java references that are made in the invoking C code). It confuses people when they copy a piece of code from one part of the source tree to another and it suddenly becomes incorrect. 
For another thing, the spec only says that the JVM has to provide at least 16 local references at once, which is not very many at all. It's only one line of overhead per Java reference, and {{DeleteLocalRef}} already ignores NULLs, so I'd rather just have a consistent style everywhere, than try to be clever. > JniBasedUnixGroupsMapping: fix some crash bugs > -- > > Key: HADOOP-9439 > URL: https://issues.apache.org/jira/browse/HADOOP-9439 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.0.4-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, > HDFS-4640.002.patch > > > JniBasedUnixGroupsMapping has some issues. > * sometimes on error paths variables are freed prior to being initialized > * re-allocate buffers less frequently (can reuse the same buffer for multiple > calls to getgrnam) > * allow non-reentrant functions to be used, to work around client bugs > * don't throw IOException from JNI functions if the JNI functions do not > declare this checked exception. > * don't bail out if only one group name among all the ones associated with a > user can't be looked up. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677512#comment-13677512 ] Hadoop QA commented on HADOOP-9601: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586577/HADOOP-9601-bench.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2615//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2615//console This message is automatically generated. 
> Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-bench.patch, > HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677480#comment-13677480 ] Chu Tong commented on HADOOP-9487: -- [~sjlee0], It will by default disable deprecation msgs from Configuration.java. Yes, if we want to make it an option to be turned on by the user, I can definitely change it. > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8468) Umbrella of enhancements to support different failure and locality topologies
[ https://issues.apache.org/jira/browse/HADOOP-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Luke Lu updated HADOOP-8468:
Target Version/s: 2.1.0-beta (was: 2.0.2-alpha)
Fix Version/s: 1.2.0
> Umbrella of enhancements to support different failure and locality topologies
> -
>
> Key: HADOOP-8468
> URL: https://issues.apache.org/jira/browse/HADOOP-8468
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ha, io
> Affects Versions: 1.0.0, 2.0.0-alpha
> Reporter: Junping Du
> Assignee: Junping Du
> Fix For: 1.2.0
>
> Attachments: HADOOP-8468-total.patch, HADOOP-8468-total-v3.patch, HVE_Hadoop World Meetup 2012.pptx, HVE User Guide on branch-1(draft ).pdf, Proposal for enchanced failure and locality topologies.pdf, Proposal for enchanced failure and locality topologies (revised-1.0).pdf
>
>
> The current Hadoop network topology (described in some previous issues like Hadoop-692) worked well for the classic three-tier network it was designed for. However, it does not take into account other failure models or changes in the infrastructure that can affect network bandwidth efficiency, such as virtualization.
> Virtualized platforms have the following characteristics that shouldn't be ignored by the Hadoop topology when scheduling tasks, placing replicas, balancing, or fetching blocks for reading:
> 1. VMs on the same physical host are affected by the same hardware failure. In order to match the reliability of a physical deployment, replication of data across two virtual machines on the same host should be avoided.
> 2. The network between VMs on the same physical host has higher throughput and lower latency and does not consume any physical switch bandwidth.
> Thus, we propose to make the Hadoop network topology extensible and introduce a new level in the hierarchical topology, a node group level, which maps well onto an infrastructure that is based on a virtualized environment.
-- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
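The proposed node-group level amounts to one extra component in a topology path; a tiny sketch (the path strings are illustrative, not values Hadoop mandates):

```java
public class TopologyPaths {
    // Count the levels in a topology path such as "/rack1/host1".
    static int levels(String path) {
        // A leading "/" yields an empty first token from split().
        return path.split("/").length - 1;
    }

    public static void main(String[] args) {
        // Classic hierarchical path: datacenter / rack / host.
        System.out.println(levels("/d1/rack1/host1"));
        // With the proposed node-group level inserted between rack and host,
        // replica placement can avoid putting two copies on VMs that share
        // a physical host.
        System.out.println(levels("/d1/rack1/nodegroup1/host1"));
    }
}
```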
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677476#comment-13677476 ] Sangjin Lee commented on HADOOP-9487: - [~stayhf] I think your patch would change the *default behavior* to disable the deprecation warnings. Is that intended? I thought the scope of this JIRA was to provide an option of disabling it but not necessarily change the default behavior. Am I mistaken? > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Attachment: HADOOP-9601-bench.patch The bottleneck for -put does not seem to be verifying checksums, but calculateChunkedSums on the client side, which doesn't have a native equivalent in NativeCrc32.c. I wrote a micro-benchmark, which shows that with the patch the array buffers now run at the same speed as the direct buffers. Before
{code}
Checksumming CRC32+array: 32768 MB took 35944 ms (911.64 MB/s)
Checksumming CRC32C+array: 32768 MB took 35517 ms (922.60 MB/s)
Checksumming CRC32+direct: 32768 MB took 24318 ms (1347.48 MB/s)
Checksumming CRC32C+direct: 32768 MB took 13229 ms (2476.98 MB/s)
{code}
After
{code}
Checksumming CRC32+array: 32768 MB took 24399 ms (1343.01 MB/s)
Checksumming CRC32C+array: 32768 MB took 13238 ms (2475.30 MB/s)
Checksumming CRC32+direct: 32768 MB took 25190 ms (1300.83 MB/s)
Checksumming CRC32C+direct: 32768 MB took 13075 ms (2506.16 MB/s)
{code}
> Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-bench.patch, > HADOOP-9601-trunk-rebase-2.patch, HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. 
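The shape of a throughput micro-benchmark like the one above can be sketched with the JDK's built-in java.util.zip.CRC32 standing in for Hadoop's NativeCrc32 (the class name, buffer size, and round count here are illustrative, and absolute numbers will differ from the figures quoted):

```java
import java.util.zip.CRC32;

// Illustrative stand-in for the micro-benchmark: checksum the same byte[]
// repeatedly and report MB/s, mirroring the "CRC32+array" rows above.
public class CrcArrayBench {
    static double mbPerSec(byte[] data, int rounds) {
        CRC32 crc = new CRC32();
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            crc.reset();
            crc.update(data, 0, data.length);  // byte[] path under test
        }
        double secs = (System.nanoTime() - start) / 1e9;
        double mb = (double) rounds * data.length / (1024 * 1024);
        return mb / secs;
    }

    public static void main(String[] args) {
        byte[] buf = new byte[8 * 1024 * 1024];  // 8 MB buffer (illustrative)
        System.out.printf("Checksumming CRC32+array: %.2f MB/s%n",
            mbPerSec(buf, 64));
    }
}
```

A direct-buffer variant would do the same loop over a ByteBuffer.allocateDirect(...) buffer for comparison.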
[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs
[ https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677465#comment-13677465 ] Todd Lipcon commented on HADOOP-9439: - Also, looks like you missed the following: bq. We may have to actually share this lock and configuration with the lock used in NativeIO.c. bq. yeah, let's do that. Instead you're just sharing between the user_info and group_info, but not the lock used by NativeIO.c (which is actually a java monitor lock rather than a pthread mutex). Given the above, why not something like the following: - move the lock object from NativeIO.c to be defined in the NativeIO.java class as a static field - change the Java code in both places to just do something like: {code} if (shouldLock) { synchronized (lockObject) { return myJniCall(...); } } else { return myJniCall(...); } {code} Would that end up simpler? > JniBasedUnixGroupsMapping: fix some crash bugs > -- > > Key: HADOOP-9439 > URL: https://issues.apache.org/jira/browse/HADOOP-9439 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.0.4-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, > HDFS-4640.002.patch > > > JniBasedUnixGroupsMapping has some issues. > * sometimes on error paths variables are freed prior to being initialized > * re-allocate buffers less frequently (can reuse the same buffer for multiple > calls to getgrnam) > * allow non-reentrant functions to be used, to work around client bugs > * don't throw IOException from JNI functions if the JNI functions do not > declare this checked exception. > * don't bail out if only one group name among all the ones associated with a > user can't be looked up. -- This message is automatically generated by JIRA. 
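The conditional-locking pattern Todd sketches, with the monitor hoisted into a single static field shared by every call site, might look like the following (NativeLookup, LOCK, and nativeLookup are illustrative stand-ins; the real code would guard an actual JNI call such as the getgrnam wrapper):

```java
// Sketch of one shared lock object guarding all non-reentrant native calls.
// Names are hypothetical; in the proposal the field would live in NativeIO.java.
public class NativeLookup {
    // Single java monitor shared by every caller that must serialize
    // non-reentrant libc lookups.
    static final Object LOCK = new Object();

    // Stand-in for a JNI call such as a getpwuid/getgrnam wrapper.
    static String nativeLookup(int id) {
        return "name-" + id;
    }

    static String lookup(int id, boolean shouldLock) {
        if (shouldLock) {
            synchronized (LOCK) {         // serialize the non-reentrant path
                return nativeLookup(id);
            }
        } else {
            return nativeLookup(id);      // reentrant path needs no lock
        }
    }
}
```

Keeping the lock as a Java monitor in one class means the user_info, group_info, and NativeIO.c call sites all contend on the same object, which is the point of the suggestion.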
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677460#comment-13677460 ] Chu Tong commented on HADOOP-9487: -- trivial change, no test included > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9439) JniBasedUnixGroupsMapping: fix some crash bugs
[ https://issues.apache.org/jira/browse/HADOOP-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677450#comment-13677450 ] Todd Lipcon commented on HADOOP-9439: - It looks like you've inverted the meaning of the boolean in some places here: {code} + * @param reentrantTrue if we should use the reentrant versions of + * getgrent, getpwent, etc. They are faster, but + * buggy in some implementations. + * + * @return The set of groups associated with a user. + */ + native static String[] getGroupsForUser(String username, boolean reentrant); {code} suggests that the second parameter being 'false' disables locking. But then: {code} public List getGroups(String user) throws IOException { String[] groups = new String[0]; try { - groups = getGroupForUser(user); + groups = getGroupsForUser(user, removeConcurrency); {code} passes 'true' if you want to disable locking. The native implementation seems to agree with the latter (i.e that passing true will introduce the locking) {code} + static private void logError(int groupIdx, String error) { +LOG.error("error looking up the name of group " + groupIdx + ": " + error); {code} This parameter should be 'groupId' not 'groupIdx' {code} + g_log_error_method = (*env)->GetStaticMethodID(env, clazz, "logError", +"(ILjava/lang/String;)V"); + if (!g_log_error_method) { +jthrowable jthr = newRuntimeException(env, +"JniBasedUnixGroupsMapping#anchorNative: failed to look " +"up JniBasedUnixGroupsMapping#logError method\n"); +(*env)->Throw(env, jthr); {code} No need to throw an exception here - GetStaticMethodID already throws NoSuchMethodError if it fails. Same with the {{FindClass}} call below, and I assume NewGlobalRef as well (at least I've never seen this pattern of checking the result of NewGlobalRef). {code} + error_msg = (*env)->NewStringUTF(env, terror(ret)); + if (!error_msg) { +(*env)->ExceptionClear(env); +return; + } {code} Why are you ignoring exceptions in this method? 
Add a comment explaining this. - In general, you don't need to {{DeleteLocalRef}} inside short-lived JNI methods. It just adds clutter to the code -- they're automatically deleted at the end of the method. It's only important if you plan on doing a lot of allocation/freeing inside the method and need to let GC collect the stuff before the method returns. - Can you explain how you tested the various error code paths here in the JNI? eg did you create a user which has some invalid groups? I'm nervous about missing some bug until we hit it in production. > JniBasedUnixGroupsMapping: fix some crash bugs > -- > > Key: HADOOP-9439 > URL: https://issues.apache.org/jira/browse/HADOOP-9439 > Project: Hadoop Common > Issue Type: Bug > Components: native >Affects Versions: 2.0.4-alpha >Reporter: Colin Patrick McCabe >Assignee: Colin Patrick McCabe >Priority: Minor > Attachments: HADOOP-9439.001.patch, HADOOP-9439.003.patch, > HDFS-4640.002.patch > > > JniBasedUnixGroupsMapping has some issues. > * sometimes on error paths variables are freed prior to being initialized > * re-allocate buffers less frequently (can reuse the same buffer for multiple > calls to getgrnam) > * allow non-reentrant functions to be used, to work around client bugs > * don't throw IOException from JNI functions if the JNI functions do not > declare this checked exception. > * don't bail out if only one group name among all the ones associated with a > user can't be looked up. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677451#comment-13677451 ] Hadoop QA commented on HADOOP-9487: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586560/HADOOP-9487.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2614//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2614//console This message is automatically generated. 
> Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9610) Missing pom dependency in MR-client
[ https://issues.apache.org/jira/browse/HADOOP-9610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677432#comment-13677432 ] Timothy St. Clair commented on HADOOP-9610: --- I figured I would ping this again as it's a very simple mod for review. > Missing pom dependency in MR-client > --- > > Key: HADOOP-9610 > URL: https://issues.apache.org/jira/browse/HADOOP-9610 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Attachments: HADOOP-9610.patch > > > There is a missing dependency in the mr-client pom.xml that is exposed when > running mvn-rpmbuild against system dependencies. A regular mvn build > bypasses the issue via its default classpath. Patch provided by > pmack...@redhat.com -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chu Tong updated HADOOP-9487: - Status: Patch Available (was: Open) > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chu Tong updated HADOOP-9487: - Attachment: HADOOP-9487.patch Fix as Steve proposed. I tested it on my development cluster using different code paths and it works. > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > Attachments: HADOOP-9487.patch > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties -something I don't care about as I'm not in a position to fix > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[ https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xi Fang updated HADOOP-9624: Description: TestFSMainOperationsLocalFileSystem extends Class FSMainOperationsBaseTest. PathFilter FSMainOperationsBaseTest#TEST_X_FILTER accepts a path if its name contains "x" or its full path string contains "X".
{code}
final private static PathFilter TEST_X_FILTER = new PathFilter() {
  public boolean accept(Path file) {
    if (file.getName().contains("x") || file.toString().contains("X"))
      return true;
    else
      return false;
  }
};
{code}
Some of the test cases construct a path by combining path "TEST_ROOT_DIR" with a customized partial path. The problem is that once the enlistment root path has "X" in its name, "TEST_ROOT_DIR" will also have "X" in its name. The path check will then pass even if the customized partial path doesn't have "X". However, in this case the path filter is supposed to reject the path. An easy fix is to change "file.toString().contains("X")" to "file.getName().contains("X")". Note that org.apache.hadoop.fs.Path.getName() only returns the final component of the path. was: TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name. Here is the root cause of the failures. TestFSMainOperationsLocalFileSystem extends Class FSMainOperationsBaseTest. PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks if a path has "X" in its name. Some of the test cases construct a path by combining path "TEST_ROOT_DIR" with a customized partial path. The problem is that once the enlistment root path has "X" in its name, "TEST_ROOT_DIR" will also have "X" in its name. The path check will pass even if the customized partial path doesn't have "X". However, in this case the path filter is supposed to reject the path. An easy fix is to use a more complicated character sequence rather than the single character "X". 
> TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root > path has "X" in its name > --- > > Key: HADOOP-9624 > URL: https://issues.apache.org/jira/browse/HADOOP-9624 > Project: Hadoop Common > Issue Type: Test > Components: test >Affects Versions: 1-win > Environment: Windows >Reporter: Xi Fang >Priority: Minor > Labels: test > Attachments: HADOOP-9624.patch > > > TestFSMainOperationsLocalFileSystem extends Class FSMainOperationsBaseTest. > PathFilter FSMainOperationsBaseTest#TEST_X_FILTER accepts a path if its name > contains "x" or its full path string contains "X". > {code} > final private static PathFilter TEST_X_FILTER = new PathFilter() { > public boolean accept(Path file) { > if (file.getName().contains("x") || file.toString().contains("X")) > return true; > else > return false; > } > }; > {code} > Some of the test cases construct a path by combining path "TEST_ROOT_DIR" > with a customized partial path. > The problem is that once the enlistment root path has "X" in its name, > "TEST_ROOT_DIR" will also have "X" in its name. The path check will then pass > even if the customized partial path doesn't have "X". However, in this case > the path filter is supposed to reject the path. > An easy fix is to change "file.toString().contains("X")" to > "file.getName().contains("X")". Note that org.apache.hadoop.fs.Path.getName() > only returns the final component of the path. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
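The false positive described above can be reproduced with java.nio.file.Path standing in for org.apache.hadoop.fs.Path, where getFileName() plays the role of Path.getName() (the class name and paths are made up for illustration):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Demonstrates the TEST_X_FILTER bug: toString() sees the whole path, so an
// enlistment root containing "X" makes the filter accept unrelated files.
public class TestXFilterDemo {
    static boolean buggyAccept(Path file) {
        // Original filter logic: checks the FULL path string for "X".
        return file.getFileName().toString().contains("x")
            || file.toString().contains("X");
    }

    static boolean fixedAccept(Path file) {
        // Proposed fix: check only the final path component.
        String name = file.getFileName().toString();
        return name.contains("x") || name.contains("X");
    }

    public static void main(String[] args) {
        // "data" has no x/X, but the root directory "userX" does.
        Path p = Paths.get("/home/userX/hadoop/test/data");
        System.out.println(buggyAccept(p)); // true  -- false positive
        System.out.println(fixedAccept(p)); // false -- correctly rejected
    }
}
```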
[jira] [Updated] (HADOOP-9624) TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root path has "X" in its name
[ https://issues.apache.org/jira/browse/HADOOP-9624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xi Fang updated HADOOP-9624: Attachment: HADOOP-9624.patch A patch is attached. > TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root > path has "X" in its name > --- > > Key: HADOOP-9624 > URL: https://issues.apache.org/jira/browse/HADOOP-9624 > Project: Hadoop Common > Issue Type: Test > Components: test >Affects Versions: 1-win > Environment: Windows >Reporter: Xi Fang >Priority: Minor > Labels: test > Attachments: HADOOP-9624.patch > > > TestFSMainOperationsLocalFileSystem failed when the Hadoop enlistment root > path has "X" in its name. Here is the root cause of the failures. > TestFSMainOperationsLocalFileSystem extends Class FSMainOperationsBaseTest. > PathFilter FSMainOperationsBaseTest#TEST_X_FILTER checks if a path has "X" in > its name. Some of the test cases construct a path by combining path > "TEST_ROOT_DIR" with a customized partial path. The problem is that once the > enlistment root path has "X" in its name, "TEST_ROOT_DIR" will also have "X" > in its name. The path check will pass even if the customized partial path > doesn't have "X". However, in this case the path filter is supposed to > reject the path. > An easy fix is to use a more complicated character sequence rather than the single character > "X". -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9599) hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly
[ https://issues.apache.org/jira/browse/HADOOP-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677333#comment-13677333 ] Hadoop QA commented on HADOOP-9599: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586544/HADOOP-9599.3.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2613//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2613//console This message is automatically generated. 
> hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly > - > > Key: HADOOP-9599 > URL: https://issues.apache.org/jira/browse/HADOOP-9599 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0 > Environment: Windows >Reporter: Mostafa Elhemali >Assignee: Mostafa Elhemali > Attachments: HADOOP-9599.2.patch, HADOOP-9599.3.patch, > HADOOP-9599.patch > > > In Windows, hadoop-config.cmd uses the nonexistent variable HADOOP_CORE_HOME > when setting the JAVA_LIBRARY_PATH variable. It should use HADOOP_HOME. > The net effect is that running e.g. "hdfs namenode" would error out with > UnsatisfiedLinkError because it can't access hadoop.dll. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677334#comment-13677334 ] Todd Lipcon commented on HADOOP-9601: - Hey Gopal. I'll take a look at this patch later today. Did you get a chance to try any benchmarks? You should see good CPU savings even on something like a pseudo-distributed "hadoop fs -put" of a 1GB file. I think there's also a micro-benchmark of the CRC32 stuff floating around in the tree. Would be good to get some numbers before commit to make sure that it really does help as we expect. > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase-2.patch, > HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, > HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9599) hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly
[ https://issues.apache.org/jira/browse/HADOOP-9599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mostafa Elhemali updated HADOOP-9599: - Attachment: HADOOP-9599.3.patch > hadoop-config.cmd doesn't set JAVA_LIBRARY_PATH correctly > - > > Key: HADOOP-9599 > URL: https://issues.apache.org/jira/browse/HADOOP-9599 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0 > Environment: Windows >Reporter: Mostafa Elhemali >Assignee: Mostafa Elhemali > Attachments: HADOOP-9599.2.patch, HADOOP-9599.3.patch, > HADOOP-9599.patch > > > In Windows, hadoop-config.cmd uses the nonexistent variable HADOOP_CORE_HOME > when setting the JAVA_LIBRARY_PATH variable. It should use HADOOP_HOME. > The net effect is that running e.g. "hdfs namenode" would error out with > UnsatisfiedLinkError because it can't access hadoop.dll. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677294#comment-13677294 ] Hadoop QA commented on HADOOP-9601: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586530/HADOOP-9601-trunk-rebase-2.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2612//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2612//console This message is automatically generated. 
> Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase-2.patch, > HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, > HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677264#comment-13677264 ] Arpit Agarwal commented on HADOOP-8982: --- Thanks for committing this Suresh. I filed HADOOP-9627 to fix the test. > TestSocketIOWithTimeout fails on Windows > > > Key: HADOOP-8982 > URL: https://issues.apache.org/jira/browse/HADOOP-8982 > Project: Hadoop Common > Issue Type: Bug > Components: net >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch > > > This is a possible race condition or difference in socket handling on Windows. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HADOOP-9627) TestSocketIOTimeout should be rewritten without platform-specific assumptions
Arpit Agarwal created HADOOP-9627: - Summary: TestSocketIOTimeout should be rewritten without platform-specific assumptions Key: HADOOP-9627 URL: https://issues.apache.org/jira/browse/HADOOP-9627 Project: Hadoop Common Issue Type: Bug Components: test Affects Versions: 3.0.0, 2.3.0 Reporter: Arpit Agarwal TestSocketIOTimeout makes some assumptions about the behavior of file channels wrt partial writes that do not appear to hold true on Windows [details in HADOOP-8982]. Currently part of the test is skipped on Windows. This bug is to track fixing the test. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On
[ https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677253#comment-13677253 ] Sanjay Radia commented on HADOOP-9392: -- bq. I suspect you are trying to replace the Hadoop delegation tokens. Kai, assuming that you are planning to replace Hadoop delegation tokens, would this be done in 2 phases, where in phase 1 we would use the delegation tokens as-is and simply use TAS for authentication? The same question applies to the block access token, which is really more of a capability than an authentication token. This is important to know because it will help decide whether or not to do the improvements to the existing delegation tokens that were planned. > Token based authentication and Single Sign On > - > > Key: HADOOP-9392 > URL: https://issues.apache.org/jira/browse/HADOOP-9392 > Project: Hadoop Common > Issue Type: New Feature > Components: security >Reporter: Kai Zheng >Assignee: Kai Zheng > Fix For: 3.0.0 > > Attachments: token-based-authn-plus-sso.pdf > > > This is an umbrella entry for one of project Rhino’s topics, for details of > project Rhino, please refer to > https://github.com/intel-hadoop/project-rhino/. The major goal for this entry > as described in project Rhino was > > “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication > at the RPC layer, via SASL. However this does not provide valuable attributes > such as group membership, classification level, organizational identity, or > support for user defined attributes. Hadoop components must interrogate > external resources for discovering these attributes and at scale this is > problematic. There is also no consistent delegation model. HDFS has a simple > delegation capability, and only Oozie can take limited advantage of it. 
We > will implement a common token based authentication framework to decouple > internal user and service authentication from external mechanisms used to > support it (like Kerberos)” > > We’d like to start our work from Hadoop-Common and try to provide common > facilities by extending existing authentication framework which support: > 1.Pluggable token provider interface > 2.Pluggable token verification protocol and interface > 3.Security mechanism to distribute secrets in cluster nodes > 4.Delegation model of user authentication -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Status: Patch Available (was: Open) Do-over > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase-2.patch, > HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, > HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
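The discussion above hinges on keeping each JNI critical section short while checksumming a byte[] array. The property that makes this workable is that CRC32 is incrementally updatable, so a large array can be processed in bounded chunks (each chunk corresponding to one short Get/ReleasePrimitiveArrayCritical section on the native side). A minimal sketch of that property using the JDK's {{java.util.zip.CRC32}} rather than Hadoop's native code; the class name and chunk size are illustrative assumptions, not part of the patch:

```java
import java.util.zip.CRC32;

public class ChunkedCrcDemo {
    // Arbitrary bound, for illustration only. In the native implementation each
    // chunk would be processed inside one short JNI critical section.
    static final int CHUNK = 512 * 1024;

    // Incremental CRC over bounded chunks of the array.
    static long chunkedCrc(byte[] data) {
        CRC32 crc = new CRC32();
        for (int off = 0; off < data.length; off += CHUNK) {
            int len = Math.min(CHUNK, data.length - off);
            crc.update(data, off, len); // incremental update over one chunk
        }
        return crc.getValue();
    }

    // One-shot CRC over the whole array, for comparison.
    static long wholeCrc(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] data = new byte[3 * 1024 * 1024];
        for (int i = 0; i < data.length; i++) data[i] = (byte) (i * 7);
        // Chunked and one-shot CRCs agree, so short per-chunk locks lose nothing.
        System.out.println(chunkedCrc(data) == wholeCrc(data)); // prints true
    }
}
```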
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Attachment: HADOOP-9601-trunk-rebase-2.patch Fixed patch & ran test -Ptest-patch > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase-2.patch, > HADOOP-9601-trunk-rebase.patch, HADOOP-9601-WIP-01.patch, > HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Status: Open (was: Patch Available) The rebase merged the same code twice. > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677218#comment-13677218 ] Hadoop QA commented on HADOOP-9601: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586526/HADOOP-9601-trunk-rebase.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 1 new or modified test files. {color:red}-1 javac{color}. The patch appears to cause the build to fail. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2611//console This message is automatically generated. > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9618) Add thread which detects JVM pauses
[ https://issues.apache.org/jira/browse/HADOOP-9618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677214#comment-13677214 ] Todd Lipcon commented on HADOOP-9618: - Hey Hitesh. We already have the "EventCounter" log4j appender. Rather than making one-off metrics for this, I think we should just extend the EventCounter to take a list of regular expressions to map to metrics in the logs - e.g. you could say something like the following in the log4j configuration: {code} log4j.appender.EventCounter.WARN.gc-pauses=Detected pause in JVM {code} Does that seem like a more general way of achieving the above? bq. FWIW, -XX:UseGCLogFileRotation is available in JDK 6u34 and 7u2+. Thanks, I forgot about that new feature. Still, it's nicer to have this info exposed via log4j, and with a consistent format (the Java GC logs keep changing format and also look different depending on which collector you're using, if I recall correctly) > Add thread which detects JVM pauses > --- > > Key: HADOOP-9618 > URL: https://issues.apache.org/jira/browse/HADOOP-9618 > Project: Hadoop Common > Issue Type: New Feature > Components: util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon > Attachments: hadoop-9618.txt > > > Oftentimes users struggle to understand what happened when a long JVM pause > (GC or otherwise) causes things to malfunction inside a Hadoop daemon. For > example, a long GC pause while logging an edit to the QJM may cause the edit > to timeout, or a long GC pause may make other IPCs to the NameNode timeout. > We should add a simple thread which loops on 1-second sleeps, and if the > sleep ever takes significantly longer than 1 second, log a WARN. This will > make GC pauses obvious in logs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
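The proposal in HADOOP-9618 is a watchdog thread that sleeps for a fixed interval and warns whenever the sleep overshoots significantly, which indicates the JVM (or the whole host) was stalled. A minimal sketch of the idea; the class name, threshold, and use of System.err instead of a log4j WARN are illustrative assumptions, not the attached patch:

```java
public class JvmPauseMonitor implements Runnable {
    static final long SLEEP_MS = 1000;           // nominal loop interval
    static final long WARN_THRESHOLD_MS = 1000;  // overshoot that counts as a pause

    // Pure decision function, so the detection logic is testable without sleeping.
    static boolean isPause(long expectedMs, long actualMs) {
        return actualMs - expectedMs > WARN_THRESHOLD_MS;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            long start = System.nanoTime();
            try {
                Thread.sleep(SLEEP_MS);
            } catch (InterruptedException e) {
                return; // shut down cleanly
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (isPause(SLEEP_MS, elapsedMs)) {
                // In a daemon this would be a log4j WARN, making GC pauses
                // visible in the daemon's own log with a consistent format.
                System.err.println("Detected pause in JVM or host machine: ~"
                        + (elapsedMs - SLEEP_MS) + "ms");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t = new Thread(new JvmPauseMonitor());
        t.setDaemon(true);
        t.start();
        Thread.sleep(100); // run briefly; a real daemon keeps the thread alive
    }
}
```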
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Assignee: Gopal V Labels: perfomance (was: ) Release Note: Support NativeCrc32 verification for byte[] array backed buffers Status: Patch Available (was: Open) > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon >Assignee: Gopal V > Labels: perfomance > Attachments: HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9601) Support native CRC on byte arrays
[ https://issues.apache.org/jira/browse/HADOOP-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V updated HADOOP-9601: Attachment: HADOOP-9601-trunk-rebase.patch Final patch - rebased onto trunk from branch-2 > Support native CRC on byte arrays > - > > Key: HADOOP-9601 > URL: https://issues.apache.org/jira/browse/HADOOP-9601 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, util >Affects Versions: 3.0.0 >Reporter: Todd Lipcon > Attachments: HADOOP-9601-trunk-rebase.patch, > HADOOP-9601-WIP-01.patch, HADOOP-9601-WIP-02.patch > > > When we first implemented the Native CRC code, we only did so for direct byte > buffers, because these correspond directly to native heap memory and thus > make it easy to access via JNI. We'd generally assumed that accessing byte[] > arrays from JNI was not efficient enough, but now that I know more about JNI > I don't think that's true -- we just need to make sure that the critical > sections where we lock the buffers are short. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8545) Filesystem Implementation for OpenStack Swift
[ https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677162#comment-13677162 ] Hadoop QA commented on HADOOP-8545: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586518/HADOOP-8545-029.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 28 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:red}-1 findbugs{color}. The patch appears to cause Findbugs (version 1.3.9) to fail. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-tools/hadoop-openstack. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2610//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2610//console This message is automatically generated. 
> Filesystem Implementation for OpenStack Swift > - > > Key: HADOOP-8545 > URL: https://issues.apache.org/jira/browse/HADOOP-8545 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 1.2.0, 2.0.3-alpha >Reporter: Tim Miller >Assignee: Dmitry Mezhensky > Labels: hadoop, patch > Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, > HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-10.patch, > HADOOP-8545-11.patch, HADOOP-8545-12.patch, HADOOP-8545-13.patch, > HADOOP-8545-14.patch, HADOOP-8545-15.patch, HADOOP-8545-16.patch, > HADOOP-8545-17.patch, HADOOP-8545-18.patch, HADOOP-8545-19.patch, > HADOOP-8545-1.patch, HADOOP-8545-20.patch, HADOOP-8545-21.patch, > HADOOP-8545-22.patch, HADOOP-8545-23.patch, HADOOP-8545-24.patch, > HADOOP-8545-25.patch, HADOOP-8545-2.patch, HADOOP-8545-3.patch, > HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, > HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, > HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch > > > ,Add a filesystem implementation for OpenStack Swift object store, similar to > the one which exists today for S3. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift
[ https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-8545: --- Status: Patch Available (was: Open) > Filesystem Implementation for OpenStack Swift > - > > Key: HADOOP-8545 > URL: https://issues.apache.org/jira/browse/HADOOP-8545 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 2.0.3-alpha, 1.2.0 >Reporter: Tim Miller >Assignee: Dmitry Mezhensky > Labels: hadoop, patch > Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, > HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-10.patch, > HADOOP-8545-11.patch, HADOOP-8545-12.patch, HADOOP-8545-13.patch, > HADOOP-8545-14.patch, HADOOP-8545-15.patch, HADOOP-8545-16.patch, > HADOOP-8545-17.patch, HADOOP-8545-18.patch, HADOOP-8545-19.patch, > HADOOP-8545-1.patch, HADOOP-8545-20.patch, HADOOP-8545-21.patch, > HADOOP-8545-22.patch, HADOOP-8545-23.patch, HADOOP-8545-24.patch, > HADOOP-8545-25.patch, HADOOP-8545-2.patch, HADOOP-8545-3.patch, > HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, > HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, > HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch > > > ,Add a filesystem implementation for OpenStack Swift object store, similar to > the one which exists today for S3. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift
[ https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-8545: --- Attachment: HADOOP-8545-029.patch revised patch # removed trace level logging of auth messages (this had been commented out, but got re-enabled some time in May by me to debug some auth problems). # added Rackspace patch to provide block location info that works on Hadoop 1.0.3, which NPEs if there is a different depth between host location paths and where blocks are. > Filesystem Implementation for OpenStack Swift > - > > Key: HADOOP-8545 > URL: https://issues.apache.org/jira/browse/HADOOP-8545 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Affects Versions: 1.2.0, 2.0.3-alpha >Reporter: Tim Miller >Assignee: Dmitry Mezhensky > Labels: hadoop, patch > Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, > HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-10.patch, > HADOOP-8545-11.patch, HADOOP-8545-12.patch, HADOOP-8545-13.patch, > HADOOP-8545-14.patch, HADOOP-8545-15.patch, HADOOP-8545-16.patch, > HADOOP-8545-17.patch, HADOOP-8545-18.patch, HADOOP-8545-19.patch, > HADOOP-8545-1.patch, HADOOP-8545-20.patch, HADOOP-8545-21.patch, > HADOOP-8545-22.patch, HADOOP-8545-23.patch, HADOOP-8545-24.patch, > HADOOP-8545-25.patch, HADOOP-8545-2.patch, HADOOP-8545-3.patch, > HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, > HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, > HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch > > > ,Add a filesystem implementation for OpenStack Swift object store, similar to > the one which exists today for S3. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677089#comment-13677089 ] Chu Tong commented on HADOOP-9487: -- I see. I thought you wanted to make changes in the Apache logging library. Now I am getting your point and it makes sense. > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > > Running local Pig jobs triggers large quantities of warnings about deprecated > properties - something I don't care about, as I'm not in a position to fix > them without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there was a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
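The point of HADOOP-9487 is simply to route deprecation messages through a dedicated logger, so users can turn them down without muting other warnings from Configuration. A sketch of what the suppression could look like in log4j.properties, assuming a hypothetical dedicated logger named after the class with a ".deprecation" suffix (the Jira does not fix the logger name):

```properties
# Hypothetical dedicated logger for deprecation messages: silence it
# without affecting other Configuration warnings.
log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=ERROR
```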
[jira] [Commented] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677066#comment-13677066 ] Hadoop QA commented on HADOOP-9078: --- {color:green}+1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586490/HADOOP-9078-trunk--N8.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:green}+1 tests included{color}. The patch appears to include 6 new or modified test files. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2609//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2609//console This message is automatically generated. > enhance unit-test coverage of class org.apache.hadoop.fs.FileContext > > > Key: HADOOP-9078 > URL: https://issues.apache.org/jira/browse/HADOOP-9078 > Project: Hadoop Common > Issue Type: Test >Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. 
Veselovsky > Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, > HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, > HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, > HADOOP-9078-branch-2--N3.patch, HADOOP-9078-branch-2.patch, > HADOOP-9078.patch, > HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, > HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, > HADOOP-9078-trunk--N6.patch, HADOOP-9078-trunk--N8.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677041#comment-13677041 ] Hudson commented on HADOOP-8957: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > AbstractFileSystem#IsValidName should be overridden for embedded file systems > like ViewFs > - > > Key: HADOOP-8957 > URL: https://issues.apache.org/jira/browse/HADOOP-8957 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: trunk-win >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8957-branch-2.patch, > HADOOP-8957-branch-trunk-win.2.patch, HADOOP-8957-branch-trunk-win.3.patch, > HADOOP-8957-branch-trunk-win.4.patch, HADOOP-8957.patch, HADOOP-8957.patch, > HADOOP-8957-trunk.4.patch > > > This appears to be a problem with parsing a Windows-specific path, ultimately > throwing InvocationTargetException from AbstractFileSystem.newInstance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9607) Fixes in Javadoc build
[ https://issues.apache.org/jira/browse/HADOOP-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677042#comment-13677042 ] Hudson commented on HADOOP-9607: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Fixes in Javadoc build > -- > > Key: HADOOP-9607 > URL: https://issues.apache.org/jira/browse/HADOOP-9607 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair >Priority: Minor > Labels: documentation > Fix For: 2.1.0-beta > > Attachments: HADOOP-9607.patch > > > It appears that some non-ascii characters have crept into the code, which > cause an issue when building javadocs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677046#comment-13677046 ] Hudson commented on HADOOP-8982: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) HADOOP-8982. TestSocketIOWithTimeout fails on Windows. Contributed by Chris Nauroth. (Revision 1490124) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490124 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java > TestSocketIOWithTimeout fails on Windows > > > Key: HADOOP-8982 > URL: https://issues.apache.org/jira/browse/HADOOP-8982 > Project: Hadoop Common > Issue Type: Bug > Components: net >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch > > > This is a possible race condition or difference in socket handling on Windows. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9605) Update junit dependency
[ https://issues.apache.org/jira/browse/HADOOP-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677044#comment-13677044 ] Hudson commented on HADOOP-9605: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Update junit dependency > --- > > Key: HADOOP-9605 > URL: https://issues.apache.org/jira/browse/HADOOP-9605 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Fix For: 2.1.0-beta > > Attachments: HADOOP-9605.patch1, HADOOP-9605.patch2 > > > Simple update of the junit dependency to use newer version. E.g. when > running maven-rpmbuild on Fedora 18. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9593) stack trace printed at ERROR for all yarn clients without hadoop.home set
[ https://issues.apache.org/jira/browse/HADOOP-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677047#comment-13677047 ] Hudson commented on HADOOP-9593: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) HADOOP-9593. Changing CHANGES.txt to reflect merge to branch-2.1-beta. (Revision 1490105) Result = SUCCESS acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490105 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt > stack trace printed at ERROR for all yarn clients without hadoop.home set > - > > Key: HADOOP-9593 > URL: https://issues.apache.org/jira/browse/HADOOP-9593 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9593-001.patch, HADOOP-9593-002.patch > > > This is the problem of HADOOP-9482 now showing up in a different application > -one whose log4j settings haven't turned off all Shell logging. > Unless you do that, all yarn clients will have a stack trace at error in > their logs, which is generating false alarms and is utterly pointless. Why > does this merit a stack trace? Why log it at error? It's not an error for a > client app to not have these values set as long as they have the relevant > JARs on their classpath. And if they don't, they'll get some classpath error > instead -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9526) TestShellCommandFencer and TestShell fail on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677043#comment-13677043 ] Hudson commented on HADOOP-9526: Integrated in Hadoop-Mapreduce-trunk #1448 (See [https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1448/]) HADOOP-9526. TestShellCommandFencer and TestShell fail on Windows. Contributed by Arpit Agarwal. (Revision 1490120) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490120 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShell.java > TestShellCommandFencer and TestShell fail on Windows > > > Key: HADOOP-9526 > URL: https://issues.apache.org/jira/browse/HADOOP-9526 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9526.001.patch, HADOOP-9526.002.patch > > > The following TestShellCommandFencer tests fail on Windows. > # testTargetAsEnvironment > # testConfAsEnvironment > # testTargetAsEnvironment > TestShell#testInterval also fails. > All failures look like test issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9626) Add an interface for any exception to serve up an Exit code
[ https://issues.apache.org/jira/browse/HADOOP-9626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677037#comment-13677037 ] Suresh Srinivas commented on HADOOP-9626: - +1 for the idea. > Add an interface for any exception to serve up an Exit code > --- > > Key: HADOOP-9626 > URL: https://issues.apache.org/jira/browse/HADOOP-9626 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 2.1.0-beta >Reporter: Steve Loughran >Priority: Minor > > Various exceptions include exit codes, specifically > {{Shell.ExitCodeException}} and {{ExitUtils.ExitException()}}. > If all exceptions that wanted to pass up an exit code to the main method > implemented an interface with the method {{int getExitCode()}}, it'd be > easier to extract exit codes from these exceptions in a unified way, > generating the desired exit codes in the application itself -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
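The interface sketched in HADOOP-9626 would let a main method extract an exit code from any exception uniformly. A minimal illustration under assumed names: the Jira only specifies {{int getExitCode()}}, so the interface name, the example exception class, the helper, and the fallback code of 1 are all hypothetical:

```java
public class ExitCodeDemo {
    // Proposed shape: any exception that can serve up a process exit code.
    interface ExitCodeProvider {
        int getExitCode();
    }

    // Hypothetical example exception in the spirit of Shell.ExitCodeException.
    static class ServiceLaunchException extends RuntimeException
            implements ExitCodeProvider {
        private final int exitCode;
        ServiceLaunchException(int exitCode, String message) {
            super(message);
            this.exitCode = exitCode;
        }
        @Override
        public int getExitCode() { return exitCode; }
    }

    // Unified extraction: use the provided code when available,
    // otherwise fall back to a generic failure code (assumed here to be 1).
    static int exitCodeOf(Throwable t) {
        return (t instanceof ExitCodeProvider)
                ? ((ExitCodeProvider) t).getExitCode()
                : 1;
    }

    public static void main(String[] args) {
        System.out.println(exitCodeOf(new ServiceLaunchException(56, "boom"))); // 56
        System.out.println(exitCodeOf(new IllegalStateException("other")));     // 1
    }
}
```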
[jira] [Commented] (HADOOP-9623) Update jets3t dependency
[ https://issues.apache.org/jira/browse/HADOOP-9623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677035#comment-13677035 ] Timothy St. Clair commented on HADOOP-9623: --- This patch is ready for review. Recommend taking latest(0.9.0) at this point, as 2.X stabilizes. > Update jets3t dependency > > > Key: HADOOP-9623 > URL: https://issues.apache.org/jira/browse/HADOOP-9623 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Attachments: HADOOP-9623.patch > > > Current version referenced in pom is 0.6.1 (Aug 2008), updating to 0.9.0 > enables mvn-rpmbuild to build against system dependencies. > http://jets3t.s3.amazonaws.com/RELEASE_NOTES.html -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9621) Document/analyze current Hadoop security model
[ https://issues.apache.org/jira/browse/HADOOP-9621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13677007#comment-13677007 ] Daryn Sharp commented on HADOOP-9621: - I'm glad someone else is interested in this level of detail. I've been meaning to document the security machinery for a long time, and even submitted an abstract to the summit for this topic but it wasn't accepted. I'm willing to contribute to the effort, but I'm not sure how soon I'll have time to devote. > Document/analyze current Hadoop security model > -- > > Key: HADOOP-9621 > URL: https://issues.apache.org/jira/browse/HADOOP-9621 > Project: Hadoop Common > Issue Type: Task > Components: security >Reporter: Brian Swan >Priority: Minor > Labels: documentation > Original Estimate: 336h > Remaining Estimate: 336h > > In light of the proposed changes to Hadoop security in Hadoop-9533 and > Hadoop-9392, having a common, detailed understanding (in the form of a > document) of the benefits/drawbacks of the current security model and how it > works would be useful. The document should address all security principals, > their authentication mechanisms, and handling of shared secrets through the > lens of the following principles: Minimize attack surface area, Establish > secure defaults, Principle of Least privilege, Principle of Defense in depth, > Fail securely, Don’t trust services, Separation of duties, Avoid security by > obscurity, Keep security simple, Fix security issues correctly. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676989#comment-13676989 ] Hudson commented on HADOOP-8982: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) HADOOP-8982. TestSocketIOWithTimeout fails on Windows. Contributed by Chris Nauroth. (Revision 1490124) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490124 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java > TestSocketIOWithTimeout fails on Windows > > > Key: HADOOP-8982 > URL: https://issues.apache.org/jira/browse/HADOOP-8982 > Project: Hadoop Common > Issue Type: Bug > Components: net >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch > > > This is a possible race condition or difference in socket handling on Windows. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676984#comment-13676984 ] Hudson commented on HADOOP-8957: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > AbstractFileSystem#IsValidName should be overridden for embedded file systems > like ViewFs > - > > Key: HADOOP-8957 > URL: https://issues.apache.org/jira/browse/HADOOP-8957 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: trunk-win >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8957-branch-2.patch, > HADOOP-8957-branch-trunk-win.2.patch, HADOOP-8957-branch-trunk-win.3.patch, > HADOOP-8957-branch-trunk-win.4.patch, HADOOP-8957.patch, HADOOP-8957.patch, > HADOOP-8957-trunk.4.patch > > > This appears to be a problem with parsing a Windows-specific path, ultimately > throwing InvocationTargetException from AbstractFileSystem.newInstance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9526) TestShellCommandFencer and TestShell fail on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676986#comment-13676986 ] Hudson commented on HADOOP-9526: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) HADOOP-9526. TestShellCommandFencer and TestShell fail on Windows. Contributed by Arpit Agarwal. (Revision 1490120) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490120 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShell.java > TestShellCommandFencer and TestShell fail on Windows > > > Key: HADOOP-9526 > URL: https://issues.apache.org/jira/browse/HADOOP-9526 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9526.001.patch, HADOOP-9526.002.patch > > > The following TestShellCommandFencer tests fail on Windows. > # testTargetAsEnvironment > # testConfAsEnvironment > # testTargetAsEnvironment > TestShell#testInterval also fails. > All failures look like test issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9607) Fixes in Javadoc build
[ https://issues.apache.org/jira/browse/HADOOP-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676985#comment-13676985 ] Hudson commented on HADOOP-9607: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Fixes in Javadoc build > -- > > Key: HADOOP-9607 > URL: https://issues.apache.org/jira/browse/HADOOP-9607 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair >Priority: Minor > Labels: documentation > Fix For: 2.1.0-beta > > Attachments: HADOOP-9607.patch > > > It appears that some non-ascii characters have crept into the code, which > cause an issue when building javadocs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9593) stack trace printed at ERROR for all yarn clients without hadoop.home set
[ https://issues.apache.org/jira/browse/HADOOP-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676990#comment-13676990 ] Hudson commented on HADOOP-9593: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) HADOOP-9593. Changing CHANGES.txt to reflect merge to branch-2.1-beta. (Revision 1490105) Result = FAILURE acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490105 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt > stack trace printed at ERROR for all yarn clients without hadoop.home set > - > > Key: HADOOP-9593 > URL: https://issues.apache.org/jira/browse/HADOOP-9593 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9593-001.patch, HADOOP-9593-002.patch > > > This is the problem of HADOOP-9482 now showing up in a different application > -one whose log4j settings haven't turned off all Shell logging. > Unless you do that, all yarn clients will have a stack trace at error in > their logs, which is generating false alarms and is utterly pointless. Why > does this merit a stack trace? Why log it at error? It's not an error for a > client app to not have these values set as long as they have the relevant > JARs on their classpath. And if they don't, they'll get some classpath error > instead -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9605) Update junit dependency
[ https://issues.apache.org/jira/browse/HADOOP-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676987#comment-13676987 ] Hudson commented on HADOOP-9605: Integrated in Hadoop-Hdfs-trunk #1422 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/1422/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = FAILURE suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Update junit dependency > --- > > Key: HADOOP-9605 > URL: https://issues.apache.org/jira/browse/HADOOP-9605 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Fix For: 2.1.0-beta > > Attachments: HADOOP-9605.patch1, HADOOP-9605.patch2 > > > Simple update of the junit dependency to use newer version. E.g. when > running maven-rpmbuild on Fedora 18. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9078) enhance unit-test coverage of class org.apache.hadoop.fs.FileContext
[ https://issues.apache.org/jira/browse/HADOOP-9078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ivan A. Veselovsky updated HADOOP-9078: --- Attachment: HADOOP-9078-trunk--N8.patch HADOOP-9078-branch-2--N3.patch Patches for "branch-2" and "trunk" are updated because of merge over incoming changes. > enhance unit-test coverage of class org.apache.hadoop.fs.FileContext > > > Key: HADOOP-9078 > URL: https://issues.apache.org/jira/browse/HADOOP-9078 > Project: Hadoop Common > Issue Type: Test >Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6 >Reporter: Ivan A. Veselovsky >Assignee: Ivan A. Veselovsky > Attachments: HADOOP-9078--b.patch, HADOOP-9078-branch-0.23.patch, > HADOOP-9078-branch-2--b.patch, HADOOP-9078-branch-2--c.patch, > HADOOP-9078-branch-2--N1.patch, HADOOP-9078-branch-2--N2.patch, > HADOOP-9078-branch-2--N3.patch, HADOOP-9078-branch-2.patch, > HADOOP-9078.patch, > HADOOP-9078-patch-from-[trunk-gd]-to-[fb-HADOOP-9078-trunk-gd]-N1.patch, > HADOOP-9078-trunk--N1.patch, HADOOP-9078-trunk--N2.patch, > HADOOP-9078-trunk--N6.patch, HADOOP-9078-trunk--N8.patch > > -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9392) Token based authentication and Single Sign On
[ https://issues.apache.org/jira/browse/HADOOP-9392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676939#comment-13676939 ] Kai Zheng commented on HADOOP-9392: --- Daryn - Sorry for the late response; your comments are great and very welcome. The identity token is issued by TAS when client authentication passes, and TAS is trusted by Hadoop services. A token needs to authenticate to a service, and a pluggable client token authenticator/validator is allowed. The authenticator can be configured per service, according to service-specific security policies, to reject invalid tokens. As discussed with Kyle, we are considering an Access Token with audience-restriction annotations. The token should certainly be protected from being leaked and used by another client/user; we’ll discuss this separately. As mentioned above, TAS, along with its issued tokens, is trusted by Hadoop services/servers. The token with its attributes is encrypted and signed. As to which attributes should be contained in the identity token, we can discuss that separately. However, I don’t think group is anything special; if we employ fine-grained access control against other attributes like role, those should be important too. Identity attributes can come from various Attribute Authorities in the enterprise, outside of the Hadoop cluster. Most importantly, we want to abstract all of this out of Hadoop into our proposed frameworks to simplify the configuration, deployment, and administration of large or multiple Hadoop clusters. Based on the TokenAuth framework, we are about to support the Kerberos mechanism via KerberosTokenAuthnModule, as mentioned in the doc; the module can be used to authenticate a TAS client via Kerberos. In this case the TAS client needs to pass Kerberos authentication first via kinit or a keytab, then authenticates to the authentication module as when accessing a service via a service ticket, and finally gets an identity token. 
The callback mentioned for principal instead of ticket might not be used. A client identity token wraps identity attributes of the user, and a service identity token wraps service attributes and security policies specific to the service. As the default implementation, a token-realm-trust-based authenticator is used to validate the client token using the service’s token. As discussed with Kyle, a custom token validator can be plugged in per service to employ advanced validation mechanisms. Note that we are considering an Access Token; when it is used, this validation of the client token against the service token might not apply, and the token validator can be simplified. I totally agree that we/Hadoop should simplify the security configuration and deployment for Hadoop. In a TokenAuth deployment, Hadoop only needs to be aware of TAS, without having to understand and configure concrete authentication providers. I agree we should support multiple clusters, so let’s see how we can provide the best support so that TAS can be layered for that. Regarding keeping concrete configuration properties as simple as possible, I would like to discuss them separately. > Token based authentication and Single Sign On > - > > Key: HADOOP-9392 > URL: https://issues.apache.org/jira/browse/HADOOP-9392 > Project: Hadoop Common > Issue Type: New Feature > Components: security >Reporter: Kai Zheng >Assignee: Kai Zheng > Fix For: 3.0.0 > > Attachments: token-based-authn-plus-sso.pdf > > > This is an umbrella entry for one of project Rhino’s topics; for details of > project Rhino, please refer to > https://github.com/intel-hadoop/project-rhino/. The major goal for this entry > as described in project Rhino was > > “Core, HDFS, ZooKeeper, and HBase currently support Kerberos authentication > at the RPC layer, via SASL. However this does not provide valuable attributes > such as group membership, classification level, organizational identity, or > support for user-defined attributes. 
Hadoop components must interrogate > external resources for discovering these attributes, and at scale this is > problematic. There is also no consistent delegation model. HDFS has a simple > delegation capability, and only Oozie can take limited advantage of it. We > will implement a common token-based authentication framework to decouple > internal user and service authentication from the external mechanisms used to > support it (like Kerberos)” > > We’d like to start our work from Hadoop-Common and try to provide common > facilities by extending the existing authentication framework to support: > 1. Pluggable token provider interface > 2. Pluggable token verification protocol and interface > 3. Security mechanism to distribute secrets in cluster nodes > 4. Delegation model of user authentication
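The pluggable facilities listed in the HADOOP-9392 description could be sketched as a pair of small interfaces. This is purely illustrative: the JIRA attaches a design document rather than code, and every name below (IdentityToken, TokenProvider, TokenValidator, TokenAuthSketch) is an assumption, not an API from that document.

```java
import java.util.Map;

/** Illustrative only: an opaque identity token carrying attributes
 *  (group membership, role, ...) as discussed in the comments above. */
final class IdentityToken {
    private final Map<String, String> attributes;
    IdentityToken(Map<String, String> attributes) { this.attributes = attributes; }
    String attribute(String name) { return attributes.get(name); }
}

/** Facility 1: a pluggable token provider, issuing tokens after authentication. */
interface TokenProvider {
    IdentityToken issueToken(String principal);
}

/** Facility 2: a pluggable token validator, configurable per service. */
interface TokenValidator {
    boolean validate(IdentityToken token);
}

public class TokenAuthSketch {
    public static void main(String[] args) {
        // A trivial provider/validator pair wired together, standing in for
        // a real TAS-backed implementation.
        TokenProvider provider =
                p -> new IdentityToken(Map.of("principal", p, "group", "admins"));
        TokenValidator validator = t -> t.attribute("principal") != null;

        IdentityToken token = provider.issueToken("alice");
        System.out.println(validator.validate(token)); // true
    }
}
```

Because both interfaces have a single abstract method, concrete authentication providers could be swapped in per service without Hadoop itself knowing about them, which matches the "Hadoop only needs to be aware of TAS" goal stated above.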
[jira] [Created] (HADOOP-9626) Add an interface for any exception to serve up an Exit code
Steve Loughran created HADOOP-9626: -- Summary: Add an interface for any exception to serve up an Exit code Key: HADOOP-9626 URL: https://issues.apache.org/jira/browse/HADOOP-9626 Project: Hadoop Common Issue Type: Improvement Components: util Affects Versions: 2.1.0-beta Reporter: Steve Loughran Priority: Minor Various exceptions include exit codes, specifically {{Shell.ExitCodeException}} and {{ExitUtils.ExitException()}}. If all exceptions that want to pass an exit code up to the main method implemented an interface with the method {{int getExitCode()}}, it would be easier to extract exit codes in a unified way and generate the desired exit code in the application itself. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
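The proposal in HADOOP-9626 could be sketched as follows. Note this is a sketch under assumptions: the JIRA names only the method {{int getExitCode()}}, so the interface name ExitCodeProvider, the ExitException class, and the ExitCodeDemo driver are all hypothetical.

```java
// Hypothetical sketch of the interface proposed in HADOOP-9626.
// ExitCodeProvider is an illustrative name; the JIRA does not name the interface.
interface ExitCodeProvider {
    int getExitCode();
}

// An exception that carries an exit code up to the main method.
class ExitException extends RuntimeException implements ExitCodeProvider {
    private final int exitCode;

    ExitException(int exitCode, String message) {
        super(message);
        this.exitCode = exitCode;
    }

    @Override
    public int getExitCode() {
        return exitCode;
    }
}

public class ExitCodeDemo {
    // The main driver extracts exit codes uniformly, instead of
    // special-casing each exception type it might catch.
    static int exitCodeOf(Throwable t) {
        return (t instanceof ExitCodeProvider)
                ? ((ExitCodeProvider) t).getExitCode()
                : -1; // fallback for exceptions that carry no exit code
    }

    public static void main(String[] args) {
        System.out.println(exitCodeOf(new ExitException(3, "disk full"))); // 3
        System.out.println(exitCodeOf(new RuntimeException("oops")));      // -1
    }
}
```

Any existing exception type (such as the two named in the issue) could then implement the interface without changing its class hierarchy.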
[jira] [Commented] (HADOOP-8982) TestSocketIOWithTimeout fails on Windows
[ https://issues.apache.org/jira/browse/HADOOP-8982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676911#comment-13676911 ] Hudson commented on HADOOP-8982: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) HADOOP-8982. TestSocketIOWithTimeout fails on Windows. Contributed by Chris Nauroth. (Revision 1490124) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490124 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestSocketIOWithTimeout.java > TestSocketIOWithTimeout fails on Windows > > > Key: HADOOP-8982 > URL: https://issues.apache.org/jira/browse/HADOOP-8982 > Project: Hadoop Common > Issue Type: Bug > Components: net >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8982.1.patch, HADOOP-8982.2.patch > > > This is a possible race condition or difference in socket handling on Windows. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9605) Update junit dependency
[ https://issues.apache.org/jira/browse/HADOOP-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676909#comment-13676909 ] Hudson commented on HADOOP-9605: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Update junit dependency > --- > > Key: HADOOP-9605 > URL: https://issues.apache.org/jira/browse/HADOOP-9605 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair > Labels: maven > Fix For: 2.1.0-beta > > Attachments: HADOOP-9605.patch1, HADOOP-9605.patch2 > > > Simple update of the junit dependency to use newer version. E.g. when > running maven-rpmbuild on Fedora 18. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9607) Fixes in Javadoc build
[ https://issues.apache.org/jira/browse/HADOOP-9607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676907#comment-13676907 ] Hudson commented on HADOOP-9607: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > Fixes in Javadoc build > -- > > Key: HADOOP-9607 > URL: https://issues.apache.org/jira/browse/HADOOP-9607 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Timothy St. Clair >Priority: Minor > Labels: documentation > Fix For: 2.1.0-beta > > Attachments: HADOOP-9607.patch > > > It appears that some non-ascii characters have crept into the code, which > cause an issue when building javadocs. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9593) stack trace printed at ERROR for all yarn clients without hadoop.home set
[ https://issues.apache.org/jira/browse/HADOOP-9593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676912#comment-13676912 ] Hudson commented on HADOOP-9593: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) HADOOP-9593. Changing CHANGES.txt to reflect merge to branch-2.1-beta. (Revision 1490105) Result = SUCCESS acmurthy : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490105 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt > stack trace printed at ERROR for all yarn clients without hadoop.home set > - > > Key: HADOOP-9593 > URL: https://issues.apache.org/jira/browse/HADOOP-9593 > Project: Hadoop Common > Issue Type: Bug > Components: util >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9593-001.patch, HADOOP-9593-002.patch > > > This is the problem of HADOOP-9482 now showing up in a different application > -one whose log4j settings haven't turned off all Shell logging. > Unless you do that, all yarn clients will have a stack trace at error in > their logs, which is generating false alarms and is utterly pointless. Why > does this merit a stack trace? Why log it at error? It's not an error for a > client app to not have these values set as long as they have the relevant > JARs on their classpath. And if they don't, they'll get some classpath error > instead -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-8957) AbstractFileSystem#IsValidName should be overridden for embedded file systems like ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676906#comment-13676906 ] Hudson commented on HADOOP-8957: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) Move HADOP-9131, HADOOP-8957 to release 2.0.1 section and HADOOP-9607, HADOOP-9605 to BUG FIXES (Revision 1490119) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490119 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt > AbstractFileSystem#IsValidName should be overridden for embedded file systems > like ViewFs > - > > Key: HADOOP-8957 > URL: https://issues.apache.org/jira/browse/HADOOP-8957 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: trunk-win >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-8957-branch-2.patch, > HADOOP-8957-branch-trunk-win.2.patch, HADOOP-8957-branch-trunk-win.3.patch, > HADOOP-8957-branch-trunk-win.4.patch, HADOOP-8957.patch, HADOOP-8957.patch, > HADOOP-8957-trunk.4.patch > > > This appears to be a problem with parsing a Windows-specific path, ultimately > throwing InvocationTargetException from AbstractFileSystem.newInstance. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9526) TestShellCommandFencer and TestShell fail on Windows
[ https://issues.apache.org/jira/browse/HADOOP-9526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676908#comment-13676908 ] Hudson commented on HADOOP-9526: Integrated in Hadoop-Yarn-trunk #232 (See [https://builds.apache.org/job/Hadoop-Yarn-trunk/232/]) HADOOP-9526. TestShellCommandFencer and TestShell fail on Windows. Contributed by Arpit Agarwal. (Revision 1490120) Result = SUCCESS suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1490120 Files : * /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestShellCommandFencer.java * /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestShell.java > TestShellCommandFencer and TestShell fail on Windows > > > Key: HADOOP-9526 > URL: https://issues.apache.org/jira/browse/HADOOP-9526 > Project: Hadoop Common > Issue Type: Bug > Components: test >Affects Versions: 3.0.0, 2.1.0-beta >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Fix For: 3.0.0, 2.1.0-beta > > Attachments: HADOOP-9526.001.patch, HADOOP-9526.002.patch > > > The following TestShellCommandFencer tests fail on Windows. > # testTargetAsEnvironment > # testConfAsEnvironment > # testTargetAsEnvironment > TestShell#testInterval also fails. > All failures look like test issues. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9489) Eclipse instructions in BUILDING.txt don't work
[ https://issues.apache.org/jira/browse/HADOOP-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676895#comment-13676895 ] Lokesh Basu commented on HADOOP-9489: - Thanks Chris. The patch provided by you helped me build hadoop on my system. I have been trying to build this for two days. Thanks again > Eclipse instructions in BUILDING.txt don't work > --- > > Key: HADOOP-9489 > URL: https://issues.apache.org/jira/browse/HADOOP-9489 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0 >Reporter: Carl Steinbach >Assignee: Chris Nauroth > Attachments: eclipse_hadoop_errors.txt, HADOOP-9489.1.patch > > > I have tried several times to import Hadoop trunk into Eclipse following the > instructions in the BUILDING.txt file, but so far have not been able to get > it to work. > If I use a fresh install of Eclipse 4.2.2, Eclipse will complain about an > undefined M2_REPO environment variable. I discovered that this is defined > automatically by the M2Eclipse plugin, and think that the BUILDING.txt doc > should be updated to explain this. > After installing M2Eclipse I tried importing the code again, and now get over > 2500 errors related to missing class dependencies. Many of these errors > correspond to missing classes in the oah*.proto namespace, which makes me > think that 'mvn eclipse:eclipse' is not triggering protoc. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (HADOOP-9453) Configuration.loadResource should skip empty resources
[ https://issues.apache.org/jira/browse/HADOOP-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-9453: --- Attachment: HADOOP-9453-002.patch The filename expected in the exception string is now driven by a constant, and resilient to changes > Configuration.loadResource should skip empty resources > -- > > Key: HADOOP-9453 > URL: https://issues.apache.org/jira/browse/HADOOP-9453 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-9453-002.patch, HADOOP-9453.patch > > > YARN-535 shows that having a 0-byte {{yarn-site}} file (created due to the > test itself) breaks configuration loads, as it is a default resource that no > longer parses. > The resource loader code skips missing files - it should do the same for > 0-byte files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
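The behavior proposed in HADOOP-9453 could be sketched like this. It is illustrative only: Configuration's real loadResource logic is more involved, and ResourceLoaderSketch/shouldSkip are hypothetical names.

```java
import java.io.File;
import java.io.IOException;

public class ResourceLoaderSketch {
    // Sketch of the proposed check: treat an empty (0-byte) resource file
    // the same way as a missing one and skip it, rather than failing to
    // parse it as XML.
    static boolean shouldSkip(File resource) {
        return !resource.exists() || resource.length() == 0;
    }

    public static void main(String[] args) throws IOException {
        File empty = File.createTempFile("yarn-site", ".xml"); // 0-byte file
        empty.deleteOnExit();
        System.out.println(shouldSkip(empty));                    // true: empty file
        System.out.println(shouldSkip(new File("no-such-file"))); // true: missing file
    }
}
```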
[jira] [Updated] (HADOOP-9453) Configuration.loadResource should skip empty resources
[ https://issues.apache.org/jira/browse/HADOOP-9453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-9453: --- Status: Open (was: Patch Available) cancel for rebase > Configuration.loadResource should skip empty resources > -- > > Key: HADOOP-9453 > URL: https://issues.apache.org/jira/browse/HADOOP-9453 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-9453.patch > > > YARN-535 shows that having a 0-byte {{yarn-site}} file (created due to the > test itself) breaks configuration loads, as it is a default resource that no > longer parses. > The resource loader code skips missing files - it should do the same for > 0-byte files. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (HADOOP-9487) Deprecation warnings in Configuration should go to their own log or otherwise be suppressible
[ https://issues.apache.org/jira/browse/HADOOP-9487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676865#comment-13676865 ] Steve Loughran commented on HADOOP-9487: No, I'm proposing a separate Commons Logging logger. First, the classic log: {code} Log LOG = LogFactory.getLog(Configuration.class); {code} Now, a new one purely for deprecation: {code} Log LOG_DEPRECATION = LogFactory.getLog("org.apache.hadoop.conf.Configuration.deprecation"); {code} You log deprecation at info: {code} LOG_DEPRECATION.info("mapred.speculative.execution.slowNodeThreshold is deprecated") {code} Now you can tune deprecation warnings in Log4J: {code} log4j.logger.org.apache.hadoop.conf.Configuration=INFO log4j.logger.org.apache.hadoop.conf.Configuration.deprecation=WARN {code} See? No new config options, just tuning log messages via Log4J, which is usually the best place to do it. > Deprecation warnings in Configuration should go to their own log or otherwise > be suppressible > - > > Key: HADOOP-9487 > URL: https://issues.apache.org/jira/browse/HADOOP-9487 > Project: Hadoop Common > Issue Type: Improvement > Components: conf >Affects Versions: 3.0.0 >Reporter: Steve Loughran > > > Running local pig jobs triggers large quantities of warnings about deprecated > properties - something I don't care about, as I'm not in a position to fix it > without delving into Pig. > I can suppress them by changing the log level, but that can hide other > warnings that may actually matter. > If there were a special Configuration.deprecated log for all deprecation > messages, this log could be suppressed by people who don't want noisy logs.
[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676863#comment-13676863 ] Paul Han commented on HADOOP-9625: -- Ok. The patch is for hadoop-env.sh, so a unit test is not really applicable. For testing, what I did is: {code} ... make sure a cluster is up and running ... export HADOOP_CONF_DIR=/etc/... export HADOOP_OPTS='-Dfoo=bar' hadoop jar hadoop-mapreduce-examples-2.0.3-alpha-t1.jar pi 3 2 & ps -ef | grep java ... verify -Dfoo=bar is in the java command line {code} > HADOOP_OPTS not picked up by hadoop command > --- > > Key: HADOOP-9625 > URL: https://issues.apache.org/jira/browse/HADOOP-9625 > Project: Hadoop Common > Issue Type: Improvement > Components: bin, conf >Affects Versions: 2.0.3-alpha, 2.0.4-alpha >Reporter: Paul Han >Priority: Minor > Fix For: 2.0.5-alpha > > Attachments: HADOOP-9625-branch-2.0.5-alpha.patch, HADOOP-9625.patch, > HADOOP-9625.patch, HADOOP-9625-release-2.0.5-alpha-rc2.patch > > Original Estimate: 12h > Remaining Estimate: 12h > > When migrating from hadoop 1 to hadoop 2, one thing that caused our users grief > is the non-backward-compatible changes. This JIRA is to fix one of those > changes: > HADOOP_OPTS is not picked up any more by the hadoop command. > With Hadoop 1, HADOOP_OPTS will be picked up by the hadoop command. With Hadoop > 2, HADOOP_OPTS will be overwritten by the line in conf/hadoop-env.sh: > export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true" > We should fix this.
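The clobbering described above can be contrasted with an append style that keeps any pre-set value. A minimal sketch (variable names come from the report; the append idiom is one possible fix, not necessarily the committed patch):

```shell
#!/bin/sh
# Simulate the two hadoop-env.sh styles for HADOOP_OPTS.
HADOOP_OPTS='-Dfoo=bar'      # what the user exported before running hadoop

# Hadoop 2 behaviour from the report: plain assignment clobbers the user value.
clobbering="-Djava.net.preferIPv4Stack=true"

# Append style: distribution defaults first, user settings preserved after.
preserving="-Djava.net.preferIPv4Stack=true ${HADOOP_OPTS}"

echo "clobbering: ${clobbering}"
echo "preserving: ${preserving}"
```

With the append style, `-Dfoo=bar` survives into the java command line, which is what the manual test above checks for.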
[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676860#comment-13676860 ] Hadoop QA commented on HADOOP-9625: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586470/HADOOP-9625.patch against trunk revision . {color:green}+1 @author{color}. The patch does not contain any @author tags. {color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings. {color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages. {color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse. {color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings. {color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings. {color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common. {color:green}+1 contrib tests{color}. The patch passed contrib unit tests. Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/2608//testReport/ Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2608//console This message is automatically generated. 
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Status: Patch Available (was: Open) Looks like we don't have CI setup for branch 2.0.5-alpha yet? Let me rebase and submit to trunk instead.
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Status: Open (was: Patch Available)
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Attachment: HADOOP-9625.patch
[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676822#comment-13676822 ] Hadoop QA commented on HADOOP-9625: --- {color:red}-1 overall{color}. Here are the results of testing the latest attachment http://issues.apache.org/jira/secure/attachment/12586466/HADOOP-9625-branch-2.0.5-alpha.patch against trunk revision . {color:red}-1 patch{color}. The patch command could not apply the patch. Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/2607//console This message is automatically generated.
[jira] [Commented] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13676818#comment-13676818 ] Paul Han commented on HADOOP-9625: -- Thanks Suresh for the reminder! I was trying to submit a patch against the 2.0.5 release; it looks like only branches are supported. I'm resubmitting the patch named with the branch name. Let's see how it goes.
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Attachment: HADOOP-9625-branch-2.0.5-alpha.patch
[jira] [Updated] (HADOOP-9625) HADOOP_OPTS not picked up by hadoop command
[ https://issues.apache.org/jira/browse/HADOOP-9625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Han updated HADOOP-9625: - Status: Open (was: Patch Available)