[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557928#comment-13557928
 ] 

Zesheng Wu commented on HADOOP-9223:


ping Harsh

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current Hadoop config items are mainly interpolated from the *-site.xml 
> files. In our production environment, we need a mechanism that can specify 
> config items through system properties, similar to gflags in systems built 
> with C++; it's really very handy.
> The main purpose of this patch is to improve the convenience of Hadoop 
> systems, especially for testing or performance tuning, which otherwise always 
> requires modifying the *-site.xml files.
> With this patch applied, people can start Hadoop programs this way: 
> java -cp $class_path -Dhadoop.property.$name=$value $program
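The prefix-stripping scheme in that command line could be sketched as below. This is a minimal, self-contained illustration of the idea, not the code in the attached patch; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

/**
 * Sketch of the proposed mechanism: treat any JVM system property whose
 * name starts with "hadoop.property." as an override for the
 * corresponding config item.
 */
public class SysPropOverrides {
    static final String PREFIX = "hadoop.property.";

    /** Collect overrides from the given properties
     *  (in practice this would be System.getProperties()). */
    static Map<String, String> extractOverrides(Properties props) {
        Map<String, String> overrides = new HashMap<String, String>();
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith(PREFIX)) {
                // Strip the prefix: -Dhadoop.property.fs.defaultFS=...
                // becomes an override for the key "fs.defaultFS".
                overrides.put(name.substring(PREFIX.length()),
                              props.getProperty(name));
            }
        }
        return overrides;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("hadoop.property.fs.defaultFS", "hdfs://nn:8020");
        props.setProperty("java.version", "1.7"); // not prefixed: ignored
        Map<String, String> o = extractOverrides(props);
        System.out.println(o); // {fs.defaultFS=hdfs://nn:8020}
    }
}
```

A Configuration-like class would apply such overrides last, so they win over values loaded from the *-site.xml files.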

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9190) packaging docs is broken

2013-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557752#comment-13557752
 ] 

Hadoop QA commented on HADOOP-9190:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565576/hadoop9190-1.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2073//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2073//console

This message is automatically generated.

> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works.  If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links; here is one of them:
> {noformat}
> <broken-links>
>   <link message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>    (No such file or directory)" uri="HttpAuthentication.html">
> {noformat}



[jira] [Updated] (HADOOP-9190) packaging docs is broken

2013-01-18 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HADOOP-9190:
--

Attachment: hadoop9190-1.txt

New version of patch removing Forrest from BUILDING.txt

In addition the following files can be deleted:
{noformat}
svn rm hadoop-common-project/hadoop-common/src/main/docs/forrest.properties
svn rm hadoop-hdfs-project/hadoop-hdfs/src/main/docs/forrest.properties
{noformat}

> packaging docs is broken
> 
>
> Key: HADOOP-9190
> URL: https://issues.apache.org/jira/browse/HADOOP-9190
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Thomas Graves
>Assignee: Andy Isaacson
> Attachments: hadoop9190-1.txt, hadoop9190.txt
>
>
> It looks like after the docs got converted to APT format in HADOOP-8427, mvn 
> site package -Pdist,docs no longer works.  If you run mvn site or mvn 
> site:stage by itself they work fine; it's when you go to package it that it 
> breaks.
> The error is with broken links; here is one of them:
> {noformat}
> <broken-links>
>   <link message="hadoop-common-project/hadoop-common/target/docs-src/src/documentation/content/xdocs/HttpAuthentication.xml
>    (No such file or directory)" uri="HttpAuthentication.html">
> {noformat}



[jira] [Commented] (HADOOP-9206) "Setting up a Single Node Cluster" instructions need improvement in 0.23.5/2.0.2-alpha branches

2013-01-18 Thread Andy Isaacson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557723#comment-13557723
 ] 

Andy Isaacson commented on HADOOP-9206:
---

I've converted the xdoc to SingleNodeSetup.apt.vm in HADOOP-9221.

> "Setting up a Single Node Cluster" instructions need improvement in 
> 0.23.5/2.0.2-alpha branches
> ---
>
> Key: HADOOP-9206
> URL: https://issues.apache.org/jira/browse/HADOOP-9206
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.2-alpha, 0.23.5
>Reporter: Glen Mazza
>
> Hi, in contrast to the easy-to-follow 1.0.4 instructions 
> (http://hadoop.apache.org/docs/r1.0.4/single_node_setup.html) the 0.23.5 and 
> 2.0.2-alpha instructions 
> (http://hadoop.apache.org/docs/r2.0.2-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html)
>  need more clarification -- they seem to be written for people who already 
> know and understand Hadoop.  In particular, these points need clarification:
> 1.) Text: "You should be able to obtain the MapReduce tarball from the 
> release."
> Question: What is the MapReduce tarball?  What is its name?  I don't see such 
> an object within the hadoop-0.23.5.tar.gz download.
> 2.) Quote: "NOTE: You will need protoc installed of version 2.4.1 or greater."
> Protoc doesn't have a website you can link to (it's just mentioned offhand 
> when you Google it) -- is it really the case today that Hadoop has a 
> dependency on such a minor project?  At any rate, if you can have a link of 
> where one goes to get/install Protoc that would be good.
> 3.) Quote: "Assuming you have installed hadoop-common/hadoop-hdfs and 
> exported $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME, untar hadoop mapreduce 
> tarball and set environment variable $HADOOP_MAPRED_HOME to the untarred 
> directory."
> I'm not sure what you mean by the forward slashes: hadoop-common/hadoop-hdfs 
> and $HADOOP_COMMON_HOME/$HADOOP_HDFS_HOME -- do you mean *and* (install both) 
> or *or* (install just one of the two)?  This needs clarification--please 
> remove the forward slash and replace it with what you're trying to say.  The 
> audience here is complete newbies and they've been brought to this page from 
> here: http://hadoop.apache.org/docs/r0.23.5/ (same with r2.0.2-alpha/) 
> (quote: "Getting Started - The Hadoop documentation includes the information 
> you need to get started using Hadoop. Begin with the Single Node Setup which 
> shows you how to set up a single-node Hadoop installation."), they've 
> downloaded hadoop-0.23.5.tar.gz and want to know what to do next.  Why are 
> there potentially two applications -- hadoop-common and hadoop-hdfs and not 
> just one?  (The download doesn't appear to have two separate apps.)  If 
> there is indeed just one app, can we remove the other from the above text to 
> avoid confusion?
> Again, I just downloaded hadoop-0.23.5.tar.gz -- do I need to download more?  
> If so, let us know in the docs here.
> Also, the fragment: "Assuming you have installed 
> hadoop-common/hadoop-hdfs..."  No, I haven't, that's what *this* page is 
> supposed to explain to me how to do -- how do I install these two (or just 
> one of these two)?
> Also, what do I set $HADOOP_COMMON_HOME and/or $HADOOP_HDFS_HOME to?
> 4.) Quote: "NOTE: The following instructions assume you have hdfs running."  
> No, I don't--how do I do this?  Again, this page is supposed to teach me that.
> 5.) Quote: "To start the ResourceManager and NodeManager, you will have to 
> update the configs. Assuming your $HADOOP_CONF_DIR is the configuration 
> directory..."
> Could you clarify here what the "configuration directory" is?  It doesn't 
> exist in the 0.23.5 download; I just see 
> bin, etc, include, lib, libexec, sbin, share folders but no "conf" one.
> 6.) Quote: "Assuming that the environment variables $HADOOP_COMMON_HOME, 
> $HADOOP_HDFS_HOME, $HADOO_MAPRED_HOME, $YARN_HOME, $JAVA_HOME and 
> $HADOOP_CONF_DIR have been set appropriately."
> We'll need to know what to set YARN_HOME to here.
> Thanks!
> Glen



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557706#comment-13557706
 ] 

Hudson commented on HADOOP-8924:


Integrated in Hadoop-trunk-Commit #3263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3263/])
HADOOP-8924. Add maven plugin alternative to shell script to save 
package-info.java. Contributed by Alejandro Abdelnur and Chris Nauroth. 
(Revision 1435380)
HADOOP-8924. Revert r1435372 that missed some files (Revision 1435379)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1435380
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/saveVersion.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/HadoopVersionAnnotation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* /hadoop/common/trunk/hadoop-maven-plugins
* /hadoop/common/trunk/hadoop-maven-plugins/pom.xml
* /hadoop/common/trunk/hadoop-maven-plugins/src
* /hadoop/common/trunk/hadoop-maven-plugins/src/main
* /hadoop/common/trunk/hadoop-maven-plugins/src/main/java
* /hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org
* /hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache
* /hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo
* 
/hadoop/common/trunk/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/versioninfo/VersionInfoMojo.java
* /hadoop/common/trunk/hadoop-project/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/scripts/saveVersion.sh
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
* /hadoop/common/trunk/pom.xml

suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1435379
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/saveVersion.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/HadoopVersionAnnotation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* /hadoop/common/trunk/hadoop-project/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/scripts/saveVersion.sh
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
* /hadoop/common/trunk/pom.xml


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch

[jira] [Updated] (HADOOP-9221) Convert remaining xdocs to APT

2013-01-18 Thread Andy Isaacson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andy Isaacson updated HADOOP-9221:
--

Attachment: hadoop9221-1.txt

Attaching git diff.  This version works around a Maven/APT bug, 
http://jira.codehaus.org/browse/DOXIASITETOOLS-68, which causes an NPE in 
Maven, and also fixes a bunch of formatting failures in the previous version.

> Convert remaining xdocs to APT
> --
>
> Key: HADOOP-9221
> URL: https://issues.apache.org/jira/browse/HADOOP-9221
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.0.2-alpha
>Reporter: Andy Isaacson
>Assignee: Andy Isaacson
> Attachments: hadoop9221-1.txt, hadoop9221.txt
>
>
> The following Forrest XML documents are still present in trunk:
> {noformat}
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/Superusers.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/deployment_layout.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/native_libraries.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/service_level_auth.xml
> hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/single_node_setup.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/SLG_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/faultinject_framework.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_editsviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_imageviewer.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/hftp.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/libhdfs.xml
> hadoop-hdfs-project/hadoop-hdfs/src/main/docs/src/documentation/content/xdocs/webhdfs.xml
> {noformat}
> Several of them are leftover cruft, and all of them are out of date to one 
> degree or another, but it's easiest to simply convert them all to APT and 
> move forward with editing thereafter.



[jira] [Commented] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557685#comment-13557685
 ] 

Hudson commented on HADOOP-8924:


Integrated in Hadoop-trunk-Commit #3262 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/3262/])
HADOOP-8924. Add maven plugin alternative to shell script to save 
package-info.java. Contributed by Alejandro Abdelnur and Chris Nauroth. 
(Revision 1435372)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1435372
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/dev-support/saveVersion.sh
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/pom.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/HadoopVersionAnnotation.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/VersionInfo.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/common-version-info.properties
* /hadoop/common/trunk/hadoop-project/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/scripts/saveVersion.sh
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/YarnVersionInfo.java
* 
/hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-version-info.properties
* /hadoop/common/trunk/pom.xml


> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Resolved] (HADOOP-9226) IOUtils.CloseQuietly() to intercept RuntimeExceptions as well as IOExceptions

2013-01-18 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9226.


  Resolution: Invalid
Release Note: That comment about {{cleanup}} made me go and look at 
{{IOUtils}} more closely - it turns out that it's Commons-IO that I'd 
mistakenly imported. Their {{closeQuietly()}} doesn't handle exceptions other 
than IOE, but {{cleanup}} does. Marking as invalid.

> IOUtils.CloseQuietly() to intercept RuntimeExceptions as well as IOExceptions
> -
>
> Key: HADOOP-9226
> URL: https://issues.apache.org/jira/browse/HADOOP-9226
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Priority: Minor
>
> A stack trace of mine shows that a call to {{IOUtils.closeQuietly()}} 
> forwarded an NPE up from the JetS3t library's {{close()}} method. We *may* 
> want to have the various {{closeQuietly()}} methods intercept and log such 
> things too, on the assumption that the goal of those close operations is to 
> downgrade all close-time exceptions into log events.
> If people agree that's what we want, I'll do the patch & test.
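A minimal sketch of what such an intercepting closeQuietly() could look like. This is an illustration of the proposal, not the actual Hadoop IOUtils code; the class name and logging destination are assumptions.

```java
import java.io.Closeable;
import java.io.IOException;

/**
 * Sketch of a closeQuietly() that downgrades RuntimeExceptions as well as
 * IOExceptions raised by close() into log events.
 */
public class QuietClose {
    /** Returns true if the stream closed cleanly, false otherwise. */
    static boolean closeQuietly(Closeable c) {
        if (c == null) {
            return false;
        }
        try {
            c.close();
            return true;
        } catch (IOException e) {
            // Already swallowed by the existing method.
            System.err.println("IOException in close(): " + e);
            return false;
        } catch (RuntimeException e) {
            // The proposed addition: an NPE thrown by a library's close()
            // (as seen with JetS3t) is logged rather than propagated.
            System.err.println("RuntimeException in close(): " + e);
            return false;
        }
    }

    public static void main(String[] args) {
        Closeable broken = new Closeable() {
            public void close() {
                throw new NullPointerException("from close()");
            }
        };
        System.out.println(closeQuietly(broken)); // false; NPE was swallowed
    }
}
```

In production code the System.err calls would go to a logger, keeping the contract that close-time failures become log events only.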



[jira] [Commented] (HADOOP-9227) FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() logic rigorously enough

2013-01-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557674#comment-13557674
 ] 

Steve Loughran commented on HADOOP-9227:


I know about s3 - a separate issue there is that the "don't test umasks" logic 
is in the base class, which stops other filesystems opting out except by 
overriding the test. There should be an override method, "FS supports umask".

> FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() 
> logic rigorously enough
> 
>
> Key: HADOOP-9227
> URL: https://issues.apache.org/jira/browse/HADOOP-9227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Trivial
>
> The {{FileSystemContractBaseTest.mkdirs()}} test asserts that a newly created 
> directory exists by way of {{FileStatus.isFile()}}, but doesn't assert 
> that the directory is a dir by way of {{FileStatus.isDir()}}.
> The assertion used is slightly weaker, as the {{isFile()}} test is actually
> {{!isDir() && !isSymlink()}}. If an implementation of {{FileSystem.mkdirs()}} 
> created symlinks, the test would still pass.
> There is one test that looks at the {{isDirectory()}} logic, 
> {{testMkdirsWithUmask()}} -- but as that test is skipped for the s3 
> filesystems, it is possible for those filesystems (or similar) to not have 
> their directory creation logic stressed enough.
> The fix would be a trivial single line.
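The weak-versus-strong assertion distinction can be illustrated as follows, using java.io.File in place of Hadoop's FileStatus purely to keep the example self-contained (the real fix would use FileStatus.isDir() inside FileSystemContractBaseTest).

```java
import java.io.File;

/**
 * Illustrates the weaker vs. stronger directory assertion discussed above.
 */
public class MkdirAssertion {
    /** The current, weaker check: analogous to asserting !isFile(), which
     *  is really !isDir && !isSymlink - so a symlink would also pass. */
    static boolean passesWeakCheck(File f) {
        return !f.isFile();
    }

    /** The proposed one-line strengthening: assert directory-ness directly,
     *  analogous to FileStatus.isDir(). */
    static boolean passesStrongCheck(File f) {
        return f.isDirectory();
    }

    public static void main(String[] args) {
        File dir = new File(System.getProperty("java.io.tmpdir"));
        // A genuine directory passes both checks; only the strong check
        // would catch a mkdirs() implementation that created something
        // other than a directory.
        System.out.println(passesWeakCheck(dir) && passesStrongCheck(dir));
    }
}
```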



[jira] [Updated] (HADOOP-8924) Add maven plugin alternative to shell script to save package-info.java

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Summary: Add maven plugin alternative to shell script to save 
package-info.java  (was: Hadoop Common creating package-info.java must not 
depend on sh, at least for Windows)

> Add maven plugin alternative to shell script to save package-info.java
> --
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557565#comment-13557565
 ] 

Hadoop QA commented on HADOOP-8924:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565540/HADOOP-8924.7.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-maven-plugins 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2072//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2072//console

This message is automatically generated.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9227) FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() logic rigorously enough

2013-01-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557545#comment-13557545
 ] 

Colin Patrick McCabe commented on HADOOP-9227:
--

Hi Steve,

Thanks for looking at this.  I agree that we should test even more things in 
these tests if possible.

One thing to note: the s3 filesystems don't implement permissions, so testing 
umask there would be quite impossible.

> FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() 
> logic rigorously enough
> 
>
> Key: HADOOP-9227
> URL: https://issues.apache.org/jira/browse/HADOOP-9227
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Trivial
>
> The {{FileSystemContractBaseTest.mkdirs()}} test asserts that a newly created 
> directory exists by way of {{FileStatus.isFile()}}, but doesn't assert 
> that the directory is a dir by way of {{FileStatus.isDir()}}.
> The assertion used is slightly weaker, as the {{isFile()}} test is actually
> {{!isDir() && !isSymlink()}}. If an implementation of {{FileSystem.mkdirs()}} 
> created symlinks, the test would still pass.
> There is one test that looks at the {{isDirectory()}} logic, 
> {{testMkdirsWithUmask()}} -- but as that test is skipped for the s3 
> filesystems, it is possible for those filesystems (or similar) to not have 
> their directory creation logic stressed enough.
> The fix would be a trivial single line.



[jira] [Commented] (HADOOP-9226) IOUtils.CloseQuietly() to intercept RuntimeExceptions as well as IOExceptions

2013-01-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557542#comment-13557542
 ] 

Colin Patrick McCabe commented on HADOOP-9226:
--

If we do decide to do this, we should change {{cleanup}} as well.

> IOUtils.CloseQuietly() to intercept RuntimeExceptions as well as IOExceptions
> -
>
> Key: HADOOP-9226
> URL: https://issues.apache.org/jira/browse/HADOOP-9226
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 1.1.1, 2.0.3-alpha
>Reporter: Steve Loughran
>Priority: Minor
>
> A stack trace of mine shows that a call to {{IOUtils.closeQuietly()}} 
> forwarded an NPE up from the JetS3t library's {{close()}} method. We *may* 
> want to have the various {{closeQuietly()}} methods intercept and log such 
> things too, on the assumption that the goal of those close operations is to 
> downgrade all close-time exceptions into log events.
> If people agree that's what we want, I'll do the patch & test
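A minimal sketch of the proposal above - not Hadoop's actual `IOUtils` code - would catch `RuntimeException` alongside `IOException`, so a close-time failure such as an NPE from a third-party `close()` never propagates:

```java
import java.io.Closeable;
import java.io.IOException;

// Hedged sketch: a closeQuietly() that swallows RuntimeExceptions as well as
// IOExceptions, downgrading every close-time failure so the caller never
// sees it. A production version would log the throwable instead of
// silently discarding it.
public class CloseQuietlySketch {
    public static void closeQuietly(Closeable c) {
        if (c == null) {
            return;
        }
        try {
            c.close();
        } catch (IOException | RuntimeException e) {
            // Swallow; log at debug level in a real implementation.
        }
    }
}
```

With this sketch, a `close()` that throws an NPE returns normally from `closeQuietly()` rather than crashing the cleanup path.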



[jira] [Commented] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557540#comment-13557540
 ] 

Colin Patrick McCabe commented on HADOOP-9225:
--

Hi Vadim,

Thanks for writing this test.  We can always use more testing.

{code}
+  private static boolean isNativeSnappyLoadable() {
+try {
+  boolean loaded = SnappyDecompressor.isNativeCodeLoaded();
+  return loaded;
+} catch (Throwable t) {
+  log.warn("Failed to load snappy: ", t);
+  return false;
+}
+  }
{code}

Is this needed, given that you have this code earlier?

{code}
+  @Before
+  public void before() {
+assumeTrue(NativeCodeLoader.isNativeCodeLoaded()
+&& NativeCodeLoader.buildSupportsSnappy());
...
{code}
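One possible answer to the question above: a plain boolean check never surfaces linkage failures, whereas touching native-backed code can throw an {{Error}} (e.g. {{UnsatisfiedLinkError}}) rather than return {{false}}. The following self-contained sketch - names and the simulated failure are illustrative, not Hadoop's actual loader behavior - shows why a {{catch (Throwable)}} wrapper can still matter:

```java
// Illustration: probing native code can throw Errors, which only a
// catch (Throwable) wrapper contains; a boolean check cannot observe them.
public class NativeProbeDemo {
    static boolean safelyProbe(Runnable probe) {
        try {
            probe.run();
            return true;
        } catch (Throwable t) { // catches Errors, not just Exceptions
            return false;
        }
    }

    public static void main(String[] args) {
        // Simulate a missing native library with an UnsatisfiedLinkError.
        boolean ok = safelyProbe(() -> { throw new UnsatisfiedLinkError("no snappy"); });
        System.out.println(ok); // false -- the Error was contained
    }
}
```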

> Cover package org.apache.hadoop.compress.Snappy
> ---
>
> Key: HADOOP-9225
> URL: https://issues.apache.org/jira/browse/HADOOP-9225
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9225-branch-0.23-a.patch, 
> HADOOP-9225-branch-2-a.patch, HADOOP-9225-trunk-a.patch
>
>




[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557535#comment-13557535
 ] 

Suresh Srinivas commented on HADOOP-8924:
-

+1 for the trunk patch. I will follow HADOOP-9207 to ensure the comments I made 
earlier are handled.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Patch Available  (was: Open)

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924.7.patch

Attaching version 7 of the patch for trunk and branch-trunk-win.  This adds the 
pom.xml comments and more javadocs for the Maven plugin classes.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924.7.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.7.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924-branch-trunk-win.7.patch

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924-branch-trunk-win.2.patch, 
> HADOOP-8924-branch-trunk-win.3.patch, HADOOP-8924-branch-trunk-win.4.patch, 
> HADOOP-8924-branch-trunk-win.5.patch, HADOOP-8924-branch-trunk-win.6.patch, 
> HADOOP-8924-branch-trunk-win.7.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9230) TestUniformSizeInputFormat fails intermittently

2013-01-18 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557502#comment-13557502
 ] 

Karthik Kambatla commented on HADOOP-9230:
--

checkAgainstLegacy() compares the generated splits against a legacy split 
generation. I don't quite understand the purpose behind this check. Can anyone 
who knows this better shed some light on why we need this test?

I noticed a conflict in the math between UniformSizeInputFormat split 
generation and the legacy generation:

Current:
{code}
long nBytesPerSplit = (long) Math.ceil(totalSizeBytes * 1.0 / numSplits);
{code}

Legacy:
{code}
final long targetsize = totalFileSize / numSplits;
{code}

I would expect the math discrepancy to lead to a failure rate higher than 10%, 
though.
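The two formulas can be compared directly. A minimal sketch (the byte counts are made up, chosen only so the two divisions disagree) showing how ceiling vs. floor division shifts the per-split size by one byte:

```java
// Illustrates the rounding discrepancy between the current
// UniformSizeInputFormat split size (ceiling division) and the
// legacy targetsize (integer floor division).
public class SplitMathDemo {

    // Current: rounds up, so the last split may come out smaller.
    static long currentSplitSize(long totalSizeBytes, int numSplits) {
        return (long) Math.ceil(totalSizeBytes * 1.0 / numSplits);
    }

    // Legacy: integer division rounds down, so the last split may be larger.
    static long legacySplitSize(long totalFileSize, int numSplits) {
        return totalFileSize / numSplits;
    }

    public static void main(String[] args) {
        // 1001 bytes over 10 splits: the formulas differ by one byte,
        // shifting every split boundary after the first.
        System.out.println(currentSplitSize(1001, 10)); // 101
        System.out.println(legacySplitSize(1001, 10));  // 100
    }
}
```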

> TestUniformSizeInputFormat fails intermittently
> ---
>
> Key: HADOOP-9230
> URL: https://issues.apache.org/jira/browse/HADOOP-9230
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: distcp
>
> TestUniformSizeFileInputFormat fails intermittently. I ran the test 50 times 
> and noticed 5 failures.
> Haven't noticed any particular pattern to which runs fail.
> A sample stack trace is as follows:
> {noformat}
> java.lang.AssertionError: expected:<1944> but was:<1820>
> at org.junit.Assert.fail(Assert.java:91)
> at org.junit.Assert.failNotEquals(Assert.java:645)
> at org.junit.Assert.assertEquals(Assert.java:126)
> at org.junit.Assert.assertEquals(Assert.java:470)
> at org.junit.Assert.assertEquals(Assert.java:454)
> at 
> org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.checkAgainstLegacy(TestUniformSizeInputFormat.java:244)
> at 
> org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:126)
> at 
> org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:252)
> {noformat}



[jira] [Created] (HADOOP-9230) TestUniformSizeInputFormat fails intermittently

2013-01-18 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9230:


 Summary: TestUniformSizeInputFormat fails intermittently
 Key: HADOOP-9230
 URL: https://issues.apache.org/jira/browse/HADOOP-9230
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


TestUniformSizeFileInputFormat fails intermittently. I ran the test 50 times 
and noticed 5 failures.

Haven't noticed any particular pattern to which runs fail.

A sample stack trace is as follows:

{noformat}
java.lang.AssertionError: expected:<1944> but was:<1820>
at org.junit.Assert.fail(Assert.java:91)
at org.junit.Assert.failNotEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:126)
at org.junit.Assert.assertEquals(Assert.java:470)
at org.junit.Assert.assertEquals(Assert.java:454)
at 
org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.checkAgainstLegacy(TestUniformSizeInputFormat.java:244)
at 
org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:126)
at 
org.apache.hadoop.tools.mapred.TestUniformSizeInputFormat.testGetSplits(TestUniformSizeInputFormat.java:252)
{noformat}




[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557467#comment-13557467
 ] 

Chris Nauroth commented on HADOOP-8924:
---

Sorry, I missed Suresh's comment about javadocs before uploading that last 
patch.  Thanks for the feedback, Suresh.  I'll do that now.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924-branch-trunk-win.2.patch, 
> HADOOP-8924-branch-trunk-win.3.patch, HADOOP-8924-branch-trunk-win.4.patch, 
> HADOOP-8924-branch-trunk-win.5.patch, HADOOP-8924-branch-trunk-win.6.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9229) IPC: Retry on connection reset or socket timeout during SASL negotiation

2013-01-18 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557465#comment-13557465
 ] 

Kihwal Lee commented on HADOOP-9229:


[~tlipcon] In this scenario, setupIOStreams() will throw an exception without 
retrying, because handleSaslConnectionFailure() gives up. If the auth mode is 
kerberos, it will be retried, but that's still outside of setupConnection() 
and doesn't involve handleConnectionFailure(). Maybe we should add a check for 
the connection retry policy in handleSaslConnectionFailure().

[~sureshms] We've also seen this happening against the AM. Since there are a finite 
number of tasks, retrying would have made the job succeed. This failure mode is 
particularly bad since clients fail without retrying. For requests that are 
allowed only one chance, this is fatal. Since failed jobs get retried, the 
same situation will likely repeat. If all requests are eventually served, the 
load will go away without doing more damage.  

I agree that if this condition is sustained, the cluster has a bigger problem and 
no IPC-level action will solve that. But for transient overloads, we want the 
system to behave more gracefully. One concern is the server accepting too many 
connections and running out of FDs, which causes all kinds of bad things. This 
can be prevented by HADOOP-9137. 
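The retry being discussed is safe precisely because no RPC call has been sent yet. A hedged sketch of that idea - hypothetical names, not Hadoop's actual Client code, and with a deliberately simplistic "connection reset" message check standing in for a real retry policy:

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

// Hypothetical sketch: retry a connection attempt on socket timeout or
// connection reset, up to a fixed bound. Safe before any call has been
// sent, since no server-side state can have been created.
public class SaslRetrySketch {
    interface Attempt { void run() throws IOException; }

    static void connectWithRetry(Attempt attempt, int maxRetries) throws IOException {
        for (int i = 0; ; i++) {
            try {
                attempt.run();
                return;
            } catch (IOException e) {
                // Simplistic transience test; a real policy object decides this.
                boolean transientFailure = e instanceof SocketTimeoutException
                    || "Connection reset".equals(e.getMessage());
                if (!transientFailure || i >= maxRetries) {
                    throw e; // non-transient, or retries exhausted
                }
            }
        }
    }
}
```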

> IPC: Retry on connection reset or socket timeout during SASL negotiation
> 
>
> Key: HADOOP-9229
> URL: https://issues.apache.org/jira/browse/HADOOP-9229
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
>Reporter: Kihwal Lee
>
> When an RPC server is overloaded, incoming connections may not get accepted 
> in time, causing listen queue overflow. The impact on client varies depending 
> on the type of OS in use. On Linux, connections in this state look fully 
> connected to the clients, but they are without buffers, thus any data sent to 
> the server will get dropped.
> This won't be a problem for protocols where client first wait for server's 
> greeting. Even for clients-speak-first protocols, it will be fine if the 
> overload is transient and such connections are accepted before the 
> retransmission of dropped packets arrive. Otherwise, clients can hit socket 
> timeout after several retransmissions.  In certain situations, connection 
> will get reset while clients still waiting for ack.
> We have seen this happening to IPC clients during SASL negotiation. Since no 
> call has been sent, we should allow retry when connection reset or socket 
> timeout happens in this stage.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924.6.patch

Attaching version 6 of the patch.  The only difference from the prior version 
is that I added comments to pom.xml for hadoop-common and hadoop-yarn-common 
explaining the resource exclusion/inclusion with property substitution.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924.6.patch, HADOOP-8924-branch-trunk-win.2.patch, 
> HADOOP-8924-branch-trunk-win.3.patch, HADOOP-8924-branch-trunk-win.4.patch, 
> HADOOP-8924-branch-trunk-win.5.patch, HADOOP-8924-branch-trunk-win.6.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Attachment: HADOOP-8924-branch-trunk-win.6.patch

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.6.patch, HADOOP-8924-branch-trunk-win.patch, 
> HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557457#comment-13557457
 ] 

Alejandro Abdelnur commented on HADOOP-8924:


Yes, we can simplify VersionInfo.java if we get rid of the YARN one. 
Regarding the exclude/include of the props file in the POM: it is done so the 
copy happens with filtering.



> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Updated] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-8924:
--

Status: Open  (was: Patch Available)

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557436#comment-13557436
 ] 

Chris Nauroth commented on HADOOP-8924:
---

We can consolidate to a single {{VersionInfo}} class and delete 
{{YarnVersionInfo}} in the scope of HADOOP-9207, which will convert the build 
to calculate a single checksum across the whole repository.

I'll upload a new patch shortly with the comment added to pom.xml.


> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557434#comment-13557434
 ] 

Suresh Srinivas commented on HADOOP-8924:
-

One other comment - please add javadoc for the newly added classes. For 
example, it is missing for {{Exec}}, {{FileSetUtils}}, etc.

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-8924) Hadoop Common creating package-info.java must not depend on sh, at least for Windows

2013-01-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557430#comment-13557430
 ] 

Suresh Srinivas commented on HADOOP-8924:
-

Looking at the VersionInfo.java code, it looks like it can be improved. It has 
_get* instance methods and get* static methods. These static methods are hidden 
by other classes that extend this class. Is it not possible to use one 
implementation of VersionInfo? All that I see differing between variants of 
this class is how they decide on the properties file to read. So instead of 
passing the {{component}} name in the constructor, can the properties file name 
be passed?

In the pom.xml file, the *-version-info.properties resource seems to be both 
included and excluded. Adding a brief comment on why this is done would help 
understanding.
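The consolidation suggested above could look roughly like the following - a hedged sketch parameterized by the properties resource name instead of subclassing per component. The class name, property key, and default value here are illustrative, not Hadoop's actual code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Sketch of a single, reusable version-info holder: each component passes
// the name of its generated *-version-info.properties resource instead of
// providing its own subclass with shadowed static methods.
public class VersionInfoSketch {
    private final Properties props = new Properties();

    public VersionInfoSketch(String propertiesResource) throws IOException {
        try (InputStream in = VersionInfoSketch.class.getClassLoader()
                .getResourceAsStream(propertiesResource)) {
            if (in != null) {
                props.load(in);
            }
        }
    }

    public String getVersion() {
        // "version" key and "Unknown" fallback are illustrative.
        return props.getProperty("version", "Unknown");
    }
}
```

Usage would then be one instance per component, e.g. `new VersionInfoSketch("common-version-info.properties")`, with no per-component subclass.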

> Hadoop Common creating package-info.java must not depend on sh, at least for 
> Windows
> 
>
> Key: HADOOP-8924
> URL: https://issues.apache.org/jira/browse/HADOOP-8924
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: trunk-win
>Reporter: Chris Nauroth
>Assignee: Alejandro Abdelnur
> Fix For: trunk-win
>
> Attachments: HADOOP-8924.2.patch, HADOOP-8924.3.patch, 
> HADOOP-8924.3.patch, HADOOP-8924.4.patch, HADOOP-8924.5.patch, 
> HADOOP-8924-branch-trunk-win.2.patch, HADOOP-8924-branch-trunk-win.3.patch, 
> HADOOP-8924-branch-trunk-win.4.patch, HADOOP-8924-branch-trunk-win.5.patch, 
> HADOOP-8924-branch-trunk-win.patch, HADOOP-8924.patch
>
>
> Currently, the build process relies on saveVersion.sh to generate 
> package-info.java with a version annotation.  The sh binary may not be 
> available on all developers' machines (e.g. Windows without Cygwin). This 
> issue tracks removal of that dependency in Hadoop Common.



[jira] [Commented] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557404#comment-13557404
 ] 

Hadoop QA commented on HADOOP-9220:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12565504/HADOOP-9220.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2071//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2071//console

This message is automatically generated.

> Unnecessary transition to standby in ActiveStandbyElector
> -
>
> Key: HADOOP-9220
> URL: https://issues.apache.org/jira/browse/HADOOP-9220
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-9220.patch, HADOOP-9220.patch
>
>
> When performing a manual failover from one HA node to a second, under some 
> circumstances the second node will transition from standby -> active -> 
> standby -> active. This is with automatic failover enabled, so there is a ZK 
> cluster doing leader election.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9229) IPC: Retry on connection reset or socket timeout during SASL negotiation

2013-01-18 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557360#comment-13557360
 ] 

Suresh Srinivas commented on HADOOP-9229:
-

bq. we should allow retry when connection reset or socket timeout happens in 
this stage.
I know that in large clusters it is possible to hit a condition where too many 
clients connect to the master servers, such as the namenode, and overload them. 
The question is how we want to handle this condition. There are two possible 
ways to look at the solution:
# The overload condition is unexpected, hence the current behavior of degraded 
service, where clients get disconnected, could be the right behavior.
# If the load is something the namenode should handle, hence not an overload 
condition, we should look at scaling the number of connections at the namenode. 
There are things that can be tuned here - the number of RPC handlers, the queue 
depth per RPC handler, etc. If that is not sufficient, we may have to make 
further changes to scale connection handling.

One concern I have with retry: if an overload condition results in clients 
being dropped, retries will prolong the overload and make the situation worse.
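If retry is added, that concern argues for bounding it: a capped exponential 
backoff at least thins out retries under sustained overload instead of hammering 
the server. The sketch below is illustrative only; the names are hypothetical 
and this is not the actual Hadoop IPC client code.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

public class BackoffRetry {
    // Retry a connection attempt a bounded number of times, doubling the
    // sleep between attempts so retries thin out under sustained overload.
    public static <T> T withBackoff(Callable<T> attempt, int maxRetries,
                                    long baseSleepMs) throws Exception {
        long sleep = baseSleepMs;
        for (int i = 0; ; i++) {
            try {
                return attempt.call();
            } catch (IOException e) {   // covers connection reset and timeout
                if (i >= maxRetries) {
                    throw e;            // give up after the final attempt
                }
                Thread.sleep(sleep);
                sleep *= 2;             // exponential backoff
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice with a timeout, then succeeds on the third attempt.
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new SocketTimeoutException("no ack");
            return "connected";
        }, 5, 1);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

Whether retrying is safe here at all is exactly the open question above; the 
backoff only mitigates, it does not remove, the risk of prolonging an overload.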

> IPC: Retry on connection reset or socket timeout during SASL negotiation
> 
>
> Key: HADOOP-9229
> URL: https://issues.apache.org/jira/browse/HADOOP-9229
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
>Reporter: Kihwal Lee
>
> When an RPC server is overloaded, incoming connections may not get accepted 
> in time, causing listen queue overflow. The impact on clients varies depending 
> on the type of OS in use. On Linux, connections in this state look fully 
> connected to the clients, but they are without buffers, thus any data sent to 
> the server will get dropped.
> This won't be a problem for protocols where clients first wait for the 
> server's greeting. Even for clients-speak-first protocols, it will be fine if 
> the overload is transient and such connections are accepted before the 
> retransmissions of dropped packets arrive. Otherwise, clients can hit a socket 
> timeout after several retransmissions. In certain situations, the connection 
> will get reset while clients are still waiting for an ack.
> We have seen this happening to IPC clients during SASL negotiation. Since no 
> call has been sent yet, we should allow retry when a connection reset or 
> socket timeout happens in this stage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-18 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9220:
--

Attachment: HADOOP-9220.patch

I've written a test which fails without the patch. Basically it checks that the 
number of times that the HA service transitions to active is as expected.

There is another part to the fix, in addition to the previous patch. In 
ZKFailoverController#recheckElectability() the check may be postponed if the FC 
has ceded its active state and is waiting for a timeout (10s) before rejoining 
the election. The trouble is that the FC may have become active again in the 
intervening time, but recheckElectability() doesn't take account of this (and 
will call ActiveStandbyElector#createLockNodeAsync), and so the FC will 
transition to standby and then to active again. The fix I have implemented 
changes a postponed recheckElectability() to check that the FC is not currently 
active before rejoining the election.
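The shape of that fix can be sketched as follows; the class, field, and method 
names here are simplified stand-ins for the real 
ZKFailoverController/ActiveStandbyElector code, not the actual patch.

```java
public class RecheckSketch {
    enum State { ACTIVE, STANDBY }

    // Simplified model of the failover controller; illustrative only.
    private State serviceState = State.STANDBY;
    private int electionJoins = 0;

    // The postponed recheck: only rejoin the election (which would cycle the
    // node through standby) if we are not already active.
    void postponedRecheck() {
        if (serviceState != State.ACTIVE) {
            joinElection();
        }
    }

    void joinElection() {
        electionJoins++;                // stands in for createLockNodeAsync()
        serviceState = State.ACTIVE;    // won the election in this model
    }

    int electionJoins() { return electionJoins; }

    public static void main(String[] args) {
        RecheckSketch fc = new RecheckSketch();
        fc.joinElection();      // initial election win
        fc.postponedRecheck();  // already active: no redundant rejoin
        System.out.println(fc.electionJoins()); // 1, not 2
    }
}
```

Without the guard in postponedRecheck(), the second call would rejoin the 
election and produce the standby -> active flap the test counts.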

> Unnecessary transition to standby in ActiveStandbyElector
> -
>
> Key: HADOOP-9220
> URL: https://issues.apache.org/jira/browse/HADOOP-9220
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-9220.patch, HADOOP-9220.patch
>
>
> When performing a manual failover from one HA node to a second, under some 
> circumstances the second node will transition from standby -> active -> 
> standby -> active. This is with automatic failover enabled, so there is a ZK 
> cluster doing leader election.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9220) Unnecessary transition to standby in ActiveStandbyElector

2013-01-18 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-9220:
--

Status: Patch Available  (was: Open)

> Unnecessary transition to standby in ActiveStandbyElector
> -
>
> Key: HADOOP-9220
> URL: https://issues.apache.org/jira/browse/HADOOP-9220
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-9220.patch, HADOOP-9220.patch
>
>
> When performing a manual failover from one HA node to a second, under some 
> circumstances the second node will transition from standby -> active -> 
> standby -> active. This is with automatic failover enabled, so there is a ZK 
> cluster doing leader election.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9229) IPC: Retry on connection reset or socket timeout during SASL negotiation

2013-01-18 Thread Todd Lipcon (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557313#comment-13557313
 ] 

Todd Lipcon commented on HADOOP-9229:
-

Hey Kihwal. Have you been watching HDFS-4404? Looks like basically the same 
issue, if I'm understanding you correctly. In particular, see this comment: 
https://issues.apache.org/jira/browse/HDFS-4404?focusedCommentId=13555680&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13555680

> IPC: Retry on connection reset or socket timeout during SASL negotiation
> 
>
> Key: HADOOP-9229
> URL: https://issues.apache.org/jira/browse/HADOOP-9229
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
>Reporter: Kihwal Lee
>
> When an RPC server is overloaded, incoming connections may not get accepted 
> in time, causing listen queue overflow. The impact on clients varies depending 
> on the type of OS in use. On Linux, connections in this state look fully 
> connected to the clients, but they are without buffers, thus any data sent to 
> the server will get dropped.
> This won't be a problem for protocols where clients first wait for the 
> server's greeting. Even for clients-speak-first protocols, it will be fine if 
> the overload is transient and such connections are accepted before the 
> retransmissions of dropped packets arrive. Otherwise, clients can hit a socket 
> timeout after several retransmissions. In certain situations, the connection 
> will get reset while clients are still waiting for an ack.
> We have seen this happening to IPC clients during SASL negotiation. Since no 
> call has been sent yet, we should allow retry when a connection reset or 
> socket timeout happens in this stage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9229) IPC: Retry on connection reset or socket timeout during SASL negotiation

2013-01-18 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-9229:
--

 Summary: IPC: Retry on connection reset or socket timeout during 
SASL negotiation
 Key: HADOOP-9229
 URL: https://issues.apache.org/jira/browse/HADOOP-9229
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.7
Reporter: Kihwal Lee


When an RPC server is overloaded, incoming connections may not get accepted in 
time, causing listen queue overflow. The impact on clients varies depending on 
the type of OS in use. On Linux, connections in this state look fully connected 
to the clients, but they are without buffers, thus any data sent to the server 
will get dropped.

This won't be a problem for protocols where clients first wait for the server's 
greeting. Even for clients-speak-first protocols, it will be fine if the 
overload is transient and such connections are accepted before the 
retransmissions of dropped packets arrive. Otherwise, clients can hit a socket 
timeout after several retransmissions. In certain situations, the connection 
will get reset while clients are still waiting for an ack.

We have seen this happening to IPC clients during SASL negotiation. Since no 
call has been sent yet, we should allow retry when a connection reset or socket 
timeout happens in this stage.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9228) FileSystemContractTestBase never verifies that files are files

2013-01-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9228:
--

 Summary: FileSystemContractTestBase never verifies that files are 
files
 Key: HADOOP-9228
 URL: https://issues.apache.org/jira/browse/HADOOP-9228
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Minor


{{FileSystemContractTestBase}} never verifies that a newly created file has a 
file status where {{isFile()}} returns true.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9227) FileSystemContractBaseTest doesn't test filesystem's mkdir/isDirectory() logic rigorously enough

2013-01-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9227:
--

 Summary: FileSystemContractBaseTest doesn't test filesystem's 
mkdir/isDirectory() logic rigorously enough
 Key: HADOOP-9227
 URL: https://issues.apache.org/jira/browse/HADOOP-9227
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.0.0
Reporter: Steve Loughran
Priority: Trivial


The {{FileSystemContractBaseTest.mkdirs()}} test asserts that a newly created 
directory is not a file, by way of {{FileStatus.isFile()}}, but doesn't assert 
that the directory is a dir by way of {{FileStatus.isDir()}}.

The assertion used is slightly weaker, as the {{isFile()}} test is actually
{{!isdir && !isSymlink()}}: if an implementation of {{FileSystem.mkdirs()}} 
created symlinks, then the test would still pass.

There is one test that looks at the {{isDirectory()}} logic, 
{{testMkdirsWithUmask()}} -but as that test is skipped for the s3 filesystems, 
it is possible for those filesystems (or similar) to not have their directory 
creation logic stressed enough.

The fix would be a trivial single line.
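Both this ticket and HADOOP-9228 boil down to asserting the stronger status 
check directly. A minimal illustration against the local filesystem, with 
java.io.File standing in for the Hadoop FileSystem API (this is not the 
contract-test code itself):

```java
import java.io.File;
import java.io.IOException;

public class StatusAssertions {
    // The stricter checks the tickets ask for: after creating a file, assert
    // isFile(); after mkdirs(), assert isDirectory() rather than only
    // "not a file" (a symlink would pass the weaker check).
    public static boolean checkFile(File f) { return f.isFile(); }
    public static boolean checkDir(File d)  { return d.isDirectory(); }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"),
                            "contract-check");
        dir.mkdirs();
        File f = new File(dir, "data");
        f.createNewFile();
        // isDirectory() is the stronger assertion for dir, isFile() for f.
        System.out.println(checkDir(dir) && checkFile(f));
        f.delete();
        dir.delete();
    }
}
```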

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557219#comment-13557219
 ] 

Hudson commented on HADOOP-8849:


Integrated in Hadoop-Mapreduce-trunk #1317 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1317/])
HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx 
permissions (Ivan A. Veselovsky via bobby) (Revision 1434868)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434868
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java


> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> 2 improvements are suggested for implementation of methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> The actual problem appears if an hdfs-related test times out (with a short 
> timeout like tens of seconds) and the forked test process is killed: some 
> directories that are not readable and/or executable are left on disk. This 
> prevents the next tests from being executed properly because these directories 
> cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
> So, it's recommended to grant read, write, and execute permissions to the 
> directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
> return value, use File#exists() instead. 
> FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of the #delete() method invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" and 
> "2", this fragment will return "false", while the file f does not exist upon 
> the method return.
> So, better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }
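Put together, the two suggestions might look like the following sketch against 
java.io.File; this is illustrative, not the actual FileUtil patch.

```java
import java.io.File;

public class FullyDeleteSketch {
    // Sketch of both suggestions: chmod the directory +rwx before recursing,
    // and judge success by exists() rather than delete()'s return value.
    public static boolean fullyDelete(File f) {
        // 1) grant rwx so unreadable/unexecutable dirs can be traversed
        f.setReadable(true);
        f.setWritable(true);
        f.setExecutable(true);
        File[] children = f.listFiles();
        if (children != null) {
            for (File child : children) {
                fullyDelete(child);
            }
        }
        f.delete();
        // 2) exists() is race-free w.r.t. a concurrent deleter: the file
        // being gone counts as success regardless of who deleted it
        return !f.exists();
    }

    public static void main(String[] args) throws Exception {
        File dir = File.createTempFile("fdel", null);
        dir.delete();
        dir.mkdir();
        new File(dir, "child").createNewFile();
        System.out.println(fullyDelete(dir)); // true
    }
}
```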

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557200#comment-13557200
 ] 

Zesheng Wu commented on HADOOP-9223:


I've added tests, passed my local testing, and resubmitted.

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current hadoop config items are mainly interpolated from the *-site.xml 
> files. In our production environment, we need a mechanism that can specify 
> config items through system properties, similar to gflags in systems built 
> with C++; it's really handy.
> The main purpose of this patch is to improve the convenience of hadoop 
> systems, especially when people do testing or perf tuning, which always 
> requires modifying the *-site.xml files.
> If this patch is applied, people can start hadoop programs in this way: 
> java -cp $class_path -Dhadoop.property.$name=$value $program
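One way such a mechanism could work is to overlay any system property carrying 
a "hadoop.property." prefix onto the loaded configuration. The sketch below is 
a guess at the idea, not the attached patch; the prefix handling and method 
names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class SysPropOverrides {
    static final String PREFIX = "hadoop.property.";

    // Any -Dhadoop.property.$name=$value system property overrides the
    // corresponding config item loaded from the *-site.xml files.
    public static Map<String, String> applyOverrides(Map<String, String> conf) {
        Map<String, String> merged = new HashMap<>(conf);
        for (String key : System.getProperties().stringPropertyNames()) {
            if (key.startsWith(PREFIX)) {
                merged.put(key.substring(PREFIX.length()),
                           System.getProperty(key));
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        System.setProperty(PREFIX + "dfs.replication", "2");
        Map<String, String> conf = new HashMap<>();
        conf.put("dfs.replication", "3"); // value from *-site.xml
        System.out.println(applyOverrides(conf).get("dfs.replication")); // 2
    }
}
```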

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zesheng Wu updated HADOOP-9223:
---

Attachment: HADOOP-9223.patch

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current hadoop config items are mainly interpolated from the *-site.xml 
> files. In our production environment, we need a mechanism that can specify 
> config items through system properties, similar to gflags in systems built 
> with C++; it's really handy.
> The main purpose of this patch is to improve the convenience of hadoop 
> systems, especially when people do testing or perf tuning, which always 
> requires modifying the *-site.xml files.
> If this patch is applied, people can start hadoop programs in this way: 
> java -cp $class_path -Dhadoop.property.$name=$value $program

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zesheng Wu updated HADOOP-9223:
---

Attachment: (was: HADOOP-9223.patch)

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current hadoop config items are mainly interpolated from the *-site.xml 
> files. In our production environment, we need a mechanism that can specify 
> config items through system properties, similar to gflags in systems built 
> with C++; it's really handy.
> The main purpose of this patch is to improve the convenience of hadoop 
> systems, especially when people do testing or perf tuning, which always 
> requires modifying the *-site.xml files.
> If this patch is applied, people can start hadoop programs in this way: 
> java -cp $class_path -Dhadoop.property.$name=$value $program

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557192#comment-13557192
 ] 

Hudson commented on HADOOP-8849:


Integrated in Hadoop-Hdfs-trunk #1289 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1289/])
HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx 
permissions (Ivan A. Veselovsky via bobby) (Revision 1434868)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434868
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java


> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> 2 improvements are suggested for implementation of methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> The actual problem appears if an hdfs-related test times out (with a short 
> timeout like tens of seconds) and the forked test process is killed: some 
> directories that are not readable and/or executable are left on disk. This 
> prevents the next tests from being executed properly because these directories 
> cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
> So, it's recommended to grant read, write, and execute permissions to the 
> directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
> return value, use File#exists() instead. 
> FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of the #delete() method invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" and 
> "2", this fragment will return "false", while the file f does not exist upon 
> the method return.
> So, better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9212) Potential deadlock in FileSystem.Cache/IPC/UGI

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557183#comment-13557183
 ] 

Hudson commented on HADOOP-9212:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
HADOOP-9212. Potential deadlock in FileSystem.Cache/IPC/UGI (Tom White via 
tgraves) (Revision 1434880)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434880
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Credentials.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Potential deadlock in FileSystem.Cache/IPC/UGI
> --
>
> Key: HADOOP-9212
> URL: https://issues.apache.org/jira/browse/HADOOP-9212
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.0.2-alpha
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: 1_jcarder_result_0.png, HADOOP-9212.patch, 
> HADOOP-9212.patch
>
>
> jcarder found a cycle which could lead to a potential deadlock.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9216) CompressionCodecFactory#getCodecClasses should trim the result of parsing by Configuration.

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557181#comment-13557181
 ] 

Hudson commented on HADOOP-9216:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
HADOOP-9216. CompressionCodecFactory#getCodecClasses should trim the result 
of parsing by Configuration. (Tsuyoshi Ozawa via todd) (Revision 1434893)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434893
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/compress/CompressionCodecFactory.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/compress/TestCodecFactory.java


> CompressionCodecFactory#getCodecClasses should trim the result of parsing by 
> Configuration.
> ---
>
> Key: HADOOP-9216
> URL: https://issues.apache.org/jira/browse/HADOOP-9216
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Tsuyoshi OZAWA
>Assignee: Tsuyoshi OZAWA
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9216.patch
>
>
> CompressionCodecFactory#getCodecClasses doesn't trim its input.
> This can confuse users of CompressionCodecFactory. For example, a setting 
> like the following can cause errors because of the spaces in the values.
> {quote}
>  conf.set("io.compression.codecs", 
> "  org.apache.hadoop.io.compress.GzipCodec , " +
> " org.apache.hadoop.io.compress.DefaultCodec  , " +
> "org.apache.hadoop.io.compress.BZip2Codec   ");
> {quote}
> This ticket deals with this problem.
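The fix essentially amounts to trimming each comma-separated class name before 
it is resolved. A standalone sketch of that parsing step (not the patched 
CompressionCodecFactory code):

```java
import java.util.ArrayList;
import java.util.List;

public class TrimCodecNames {
    // Trim each comma-separated class name so surrounding whitespace never
    // reaches Class.forName(). Empty entries are skipped.
    public static List<String> parseClassNames(String value) {
        List<String> names = new ArrayList<>();
        for (String name : value.split(",")) {
            String trimmed = name.trim();
            if (!trimmed.isEmpty()) {
                names.add(trimmed);
            }
        }
        return names;
    }

    public static void main(String[] args) {
        List<String> names = parseClassNames(
            "  org.apache.hadoop.io.compress.GzipCodec , " +
            " org.apache.hadoop.io.compress.DefaultCodec  , " +
            "org.apache.hadoop.io.compress.BZip2Codec   ");
        // Prints the three class names with all surrounding whitespace gone.
        System.out.println(names);
    }
}
```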

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557180#comment-13557180
 ] 

Hudson commented on HADOOP-8849:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
svn merge -c 1434868 FIXES: HADOOP-8849. FileUtil#fullyDelete should grant 
the target directories +rwx permissions (Ivan A. Veselovsky via bobby) 
(Revision 1434873)

 Result = FAILURE
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434873
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java


> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> 2 improvements are suggested for implementation of methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions to the target directories before trying 
> to delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> The actual problem appears if an hdfs-related test times out (with a short 
> timeout like tens of seconds) and the forked test process is killed: some 
> directories that are not readable and/or executable are left on disk. This 
> prevents the next tests from being executed properly because these directories 
> cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
> So, it's recommended to grant read, write, and execute permissions to the 
> directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
> return value, use File#exists() instead. 
> FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of the #delete() method invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" and 
> "2", this fragment will return "false", while the file f does not exist upon 
> the method return.
> So, better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-9155) FsPermission should have different default value, 777 for directory and 666 for file

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557179#comment-13557179
 ] 

Hudson commented on HADOOP-9155:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
HADOOP-9155. FsPermission should have different default value, 777 for 
directory and 666 for file (Binglin Chang via tgraves) (Revision 1434864)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434864
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/local/RawLocalFs.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/FsPermission.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/FileContextPermissionBase.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFSFileContextMainOperations.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestLocalFileSystemPermission.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSClient.java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/HdfsFileStatus.java


> FsPermission should have different default value, 777 for directory and 666 
> for file
> 
>
> Key: HADOOP-9155
> URL: https://issues.apache.org/jira/browse/HADOOP-9155
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.2-alpha
>Reporter: Binglin Chang
>Assignee: Binglin Chang
>Priority: Minor
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9155.patch, HADOOP-9155.v2.patch, 
> HADOOP-9155.v3.patch, HADOOP-9155.v3.patch, HADOOP-9155.v3.patch
>
>
> The default permission for {{FileSystem#create}} is the same as the default 
> for {{FileSystem#mkdirs}}, namely {{0777}}. It would make more sense for the 
> default to be {{0666}} for files and {{0777}} for directories.  The current 
> default leads to a lot of files being created with the executable bit set 
> when it really should not be.  One example is anything created with FsShell's 
> copyToLocal.
> For reference, {{fopen}} creates files with a mode of {{0666}} (minus 
> whatever bits are set in the umask, usually {{0022}}).  This seems to be the 
> standard behavior and we should follow it.  This is also a regression since 
> branch-1.
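The arithmetic described above (proposed defaults combined with the process umask, as {{fopen}} does) can be sketched as follows; the class and method names are invented for illustration:

```java
public class DefaultPerms {
    // Illustrative only: apply a umask to the proposed defaults
    // (0666 for files, 0777 for directories), the way fopen(3) does.
    static int defaultFilePerm(int umask) { return 0666 & ~umask; }
    static int defaultDirPerm(int umask)  { return 0777 & ~umask; }

    public static void main(String[] args) {
        int umask = 0022; // a common default umask
        System.out.printf("file: %o%n", defaultFilePerm(umask)); // file: 644
        System.out.printf("dir:  %o%n", defaultDirPerm(umask));  // dir:  755
    }
}
```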



[jira] [Commented] (HADOOP-9147) Add missing fields to FIleStatus.toString

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557178#comment-13557178
 ] 

Hudson commented on HADOOP-9147:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
HADOOP-9147. Add missing fields to FIleStatus.toString.(Jonathan Allen via 
suresh) (Revision 1434853)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434853
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileStatus.java


> Add missing fields to FIleStatus.toString
> -
>
> Key: HADOOP-9147
> URL: https://issues.apache.org/jira/browse/HADOOP-9147
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.2-alpha
>Reporter: Jonathan Allen
>Assignee: Jonathan Allen
>Priority: Trivial
> Fix For: 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-9147.patch, HADOOP-9147.patch, HADOOP-9147.patch, 
> HADOOP-9147.patch
>
>
> The FileStatus.toString method is missing the following fields:
> - modification_time
> - access_time
> - symlink
> These should be added to aid debugging.
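A minimal sketch of what a toString carrying those fields might look like; this is a hypothetical stand-in class, not the real FileStatus:

```java
public class FileStatusDemo {
    private final long modificationTime; // millis since epoch
    private final long accessTime;       // millis since epoch
    private final String symlink;        // null when not a symlink

    FileStatusDemo(long mtime, long atime, String symlink) {
        this.modificationTime = mtime;
        this.accessTime = atime;
        this.symlink = symlink;
    }

    @Override
    public String toString() {
        // Include the previously missing fields so debug output is complete.
        return "FileStatus{modification_time=" + modificationTime
             + "; access_time=" + accessTime
             + "; symlink=" + symlink + "}";
    }

    public static void main(String[] args) {
        System.out.println(new FileStatusDemo(1000L, 2000L, null));
        // prints FileStatus{modification_time=1000; access_time=2000; symlink=null}
    }
}
```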



[jira] [Commented] (HADOOP-7886) Add toString to FileStatus

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557176#comment-13557176
 ] 

Hudson commented on HADOOP-7886:


Integrated in Hadoop-Hdfs-0.23-Build #498 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/498/])
HADOOP-7886 Add toString to FileStatus (SreeHari via tgraves) (Revision 
1434851)

 Result = FAILURE
tgraves : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434851
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileStatus.java


> Add toString to FileStatus
> --
>
> Key: HADOOP-7886
> URL: https://issues.apache.org/jira/browse/HADOOP-7886
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Jakob Homan
>Assignee: SreeHari
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HDFS-2215_common.patch, HDFS-2215.patch
>
>
> It would be nice if FileStatus had a reasonable toString, for debugging 
> purposes.



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557159#comment-13557159
 ] 

Harsh J commented on HADOOP-9223:
-

I missed that prefix part; that should help address my worry. Thanks for 
pointing it out!

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current hadoop config items are mainly read from the *-site.xml 
> files. In our production environment, we need a mechanism to specify 
> config items through system properties, similar to gflags in systems 
> built with C++; it's really handy.
> The main purpose of this patch is to improve the convenience of hadoop 
> systems, especially for testing or perf tuning, which otherwise always 
> requires modifying the *-site.xml files.
> If this patch is applied, people can start hadoop programs like this: 
> java -cp $class_path -Dhadoop.property.$name=$value $program
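The prefix-overlay idea can be sketched as follows. This is a hypothetical illustration only: the class and method names are invented, and the actual patch hooks into Hadoop's Configuration class rather than a plain map.

```java
import java.util.HashMap;
import java.util.Map;

public class SysPropConfig {
    // Prefix matching the proposal; properties without it are ignored.
    static final String PREFIX = "hadoop.property.";

    // Overlay prefixed system properties on top of file-loaded config,
    // so -Dhadoop.property.$name=$value wins over *-site.xml values.
    static Map<String, String> overlay(Map<String, String> fromFiles) {
        Map<String, String> merged = new HashMap<>(fromFiles);
        for (String key : System.getProperties().stringPropertyNames()) {
            if (key.startsWith(PREFIX)) {
                merged.put(key.substring(PREFIX.length()),
                           System.getProperty(key));
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        System.setProperty("hadoop.property.dfs.replication", "2");
        Map<String, String> fromFiles = new HashMap<>();
        fromFiles.put("dfs.replication", "3"); // value from a *-site.xml file
        System.out.println(overlay(fromFiles).get("dfs.replication")); // prints 2
    }
}
```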



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557155#comment-13557155
 ] 

Zesheng Wu commented on HADOOP-9223:


Thanks for your quick reply!
1. In the patch, I added a 'hadoop.property.' prefix for config items. This 
distinguishes the config items passed via -Dhadoop.property.$name=$value from 
the normal options passed via -D$name=$value. I understand your worry, which 
is also mine; the prefix addresses it by acting like a switch :)
2. The order of preference is just as you said: a system property takes 
precedence over a file. I will add some tests and submit an updated patch 
later.
3. Sorry about my mistake in specifying the version :(




[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557150#comment-13557150
 ] 

Harsh J commented on HADOOP-9223:
-

Got it, and what you propose to add is good. My only worry is that this may 
unintentionally bloat the config object size, which would inadvertently affect 
MR and other systems that do not really require this feature.

Perhaps we can add it behind a switch that is off by default and only enabled 
when an advanced user needs it?

Also, some tests would be good to have in the patch, including ones that 
demonstrate the order of preference (I assume a value from a sys-prop is 
preferred over one from a file), etc.

P.s. Fix Version is to be set only after it has been committed into a branch. 
Please use the Target Version alone to track the version where you want it to 
be committed.




[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9223:


Target Version/s: 2.0.3-alpha  (was: 2.0.0-alpha)




[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HADOOP-9223:


Fix Version/s: (was: 2.0.0-alpha)




[jira] [Created] (HADOOP-9226) IOUtils.CloseQuietly() to intercept RuntimeExceptions as well as IOExceptions

2013-01-18 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9226:
--

 Summary: IOUtils.CloseQuietly() to intercept RuntimeExceptions as 
well as IOExceptions
 Key: HADOOP-9226
 URL: https://issues.apache.org/jira/browse/HADOOP-9226
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 1.1.1, 2.0.3-alpha
Reporter: Steve Loughran
Priority: Minor


A stack trace of mine shows that a call to {{IOUtils.closeQuietly()}} 
forwarded an NPE up from the JetS3t library's {{close()}} method. We *may* 
want the various {{closeQuietly()}} methods to intercept and log such things 
too, on the assumption that the goal of those close operations is to downgrade 
all close-time exceptions into log events.

If people agree that's what we want, I'll do the patch & test.
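A sketch of the proposed behavior follows. This is illustrative only, not Hadoop's actual IOUtils code: the idea is simply to catch Exception (covering RuntimeExceptions such as NPE) rather than only IOException, and log instead of propagating.

```java
import java.io.Closeable;

public class QuietClose {
    // Swallow and log any exception thrown by close(), including
    // RuntimeExceptions such as an NPE from a buggy close() implementation.
    static void closeQuietly(Closeable c) {
        if (c == null) {
            return;
        }
        try {
            c.close();
        } catch (Exception e) { // IOException and RuntimeExceptions alike
            System.err.println("ignoring exception during close: " + e);
        }
    }

    public static void main(String[] args) {
        // A close() that misbehaves with an unchecked exception.
        Closeable broken = () -> { throw new NullPointerException("boom"); };
        closeQuietly(broken);   // logs instead of propagating
        System.out.println("still running");
    }
}
```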



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557147#comment-13557147
 ] 

Zesheng Wu commented on HADOOP-9223:


Thanks Harsh. Tool and ToolRunner can certainly handle the scenario above, but 
not all programs are suited to running through the Tool or ToolRunner 
interfaces. For example, our deploy system manages hadoop programs with our 
own deploy scripts (start/stop, etc.); if we want to start the namenode with 
our own start.sh and pass config items via -D options, the Tool/ToolRunner 
interface is not a good fit.
I hope I have expressed myself clearly.




[jira] [Commented] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557142#comment-13557142
 ] 

Hadoop QA commented on HADOOP-9225:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12565461/HADOOP-9225-trunk-a.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2070//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2070//console

This message is automatically generated.

> Cover package org.apache.hadoop.compress.Snappy
> ---
>
> Key: HADOOP-9225
> URL: https://issues.apache.org/jira/browse/HADOOP-9225
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9225-branch-0.23-a.patch, 
> HADOOP-9225-branch-2-a.patch, HADOOP-9225-trunk-a.patch
>
>




[jira] [Commented] (HADOOP-8136) Enhance hadoop to use a newer version (0.8.1) of the jets3t library

2013-01-18 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557138#comment-13557138
 ] 

Steve Loughran commented on HADOOP-8136:


@Jagane - can you provide a patch for this against the current trunk? I also 
think a simple version increment, without any major rewrites, would be more 
likely to get into branch-1.

> Enhance hadoop to use a newer version (0.8.1) of the jets3t library
> ---
>
> Key: HADOOP-8136
> URL: https://issues.apache.org/jira/browse/HADOOP-8136
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 1.0.0, 0.22.0, 0.23.3
> Environment: Ubuntu 11.04, 64 bit, JDK 1.6.0_30
>Reporter: Jagane Sundar
> Attachments: HADOOP-8136-0-for_branch_1_0.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Hadoop is built against, and includes, an older version of the Jets3t library 
> - version 0.6.1.
> The current version of the Jets3t library(as of March 2012) is 0.8.1. This 
> new version includes many improvements such as support for "Requester-Pays" 
> buckets.
> Since hadoop includes a copy of the version 0.6.1 jets3t library, and since 
> this version ends up early in the CLASSPATH, any Map Reduce application that 
> wants to use the jets3t library ends up getting version 0.6.1 of the jets3t 
> library. The MR application fails, usually with an error stating that the 
> method signature of some method in the Jets3t library does not match.
> It would be useful to enhance Jets3tNativeFileSystemStore.java to use the API 
> published by the 0.8.1 version of the jets3t library.



[jira] [Commented] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Vadim Bondarev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557126#comment-13557126
 ] 

Vadim Bondarev commented on HADOOP-9225:


Since the project depends on the snappy native library, and that library is 
not built by the mvn project, we should:

1. Download snappy from http://snappy.googlecode.com/files/snappy-1.0.5.tar.gz 
and build it. The default installation folder for snappy is /usr/local/lib.

2. Before the build, run export LD_LIBRARY_PATH=/usr/local/lib (the snappy 
default installation directory).







[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557124#comment-13557124
 ] 

Harsh J commented on HADOOP-9223:
-

Thanks Zesheng. The Tool and ToolRunner interfaces also support parsing -D 
app-level options that are added on top of the base loaded configs; I am not 
sure whether you can use that instead, as a more elegant fix for this.




[jira] [Commented] (HADOOP-8849) FileUtil#fullyDelete should grant the target directories +rwx permissions before trying to delete them

2013-01-18 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557120#comment-13557120
 ] 

Hudson commented on HADOOP-8849:


Integrated in Hadoop-Yarn-trunk #100 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/100/])
HADOOP-8849. FileUtil#fullyDelete should grant the target directories +rwx 
permissions (Ivan A. Veselovsky via bobby) (Revision 1434868)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1434868
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java


> FileUtil#fullyDelete should grant the target directories +rwx permissions 
> before trying to delete them
> --
>
> Key: HADOOP-8849
> URL: https://issues.apache.org/jira/browse/HADOOP-8849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Ivan A. Veselovsky
>Assignee: Ivan A. Veselovsky
>Priority: Minor
> Fix For: 3.0.0, 2.0.3-alpha, 0.23.7
>
> Attachments: HADOOP-8849-trunk--5.patch, HADOOP-8849-vs-trunk-4.patch
>
>
> 2 improvements are suggested for implementation of methods 
> org.apache.hadoop.fs.FileUtil.fullyDelete(File) and 
> org.apache.hadoop.fs.FileUtil.fullyDeleteContents(File):
>  
> 1) We should grant +rwx permissions the target directories before trying to 
> delete them.
> The mentioned methods fail to delete directories that don't have read or 
> execute permissions.
> Actual problem appears if an hdfs-related test is timed out (with a short 
> timeout like tens of seconds), and the forked test process is killed, some 
> directories are left on disk that are not readable and/or executable. This 
> prevents next tests from being executed properly because these directories 
> cannot be deleted with FileUtil#fullyDelete(), so many subsequent tests fail. 
> So, its recommended to grant the read, write, and execute permissions the 
> directories whose content is to be deleted.
> 2) Generic reliability improvement: we shouldn't rely upon File#delete() 
> return value, use File#exists() instead. 
> FileUtil#fullyDelete() uses return value of method java.io.File#delete(), but 
> this is not reliable because File#delete() returns true only if the file was 
> deleted as a result of the #delete() method invocation. E.g. in the following 
> code
> if (f.exists()) { // 1
>   return f.delete(); // 2
> }
> if the file f was deleted by another thread or process between calls "1" and 
> "2", this fragment will return "false", while the file f does not exist upon 
> the method return.
> So, better to write
> if (f.exists()) {
>   f.delete();
>   return !f.exists();
> }
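Both suggestions above (grant rwx before deleting, and judge success by a final existence check rather than delete()'s return value) might look roughly like this. This is an illustrative sketch, not the actual FileUtil patch; the class name is invented:

```java
import java.io.File;
import java.nio.file.Files;

public class FullyDeleteSketch {
    // Grant rwx first so unreadable leftovers (e.g. from a killed test
    // process) can be traversed and removed; report success based on a
    // final existence check.
    static boolean fullyDelete(File dir) {
        dir.setReadable(true);
        dir.setWritable(true);
        dir.setExecutable(true);
        File[] children = dir.listFiles(); // null if listing still fails
        if (children != null) {
            for (File c : children) {
                if (c.isDirectory()) {
                    fullyDelete(c);
                } else {
                    c.delete();
                }
            }
        }
        dir.delete();
        return !dir.exists();
    }

    public static void main(String[] args) throws Exception {
        File root = Files.createTempDirectory("fds").toFile();
        File sub = new File(root, "sub");
        sub.mkdir();
        new File(sub, "f.txt").createNewFile();
        sub.setExecutable(false);          // simulate a killed test's leftovers
        System.out.println(fullyDelete(root)); // true
    }
}
```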



[jira] [Updated] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9225:
---

Attachment: HADOOP-9225-trunk-a.patch
HADOOP-9225-branch-2-a.patch
HADOOP-9225-branch-0.23-a.patch





[jira] [Updated] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9225:
---

Status: Patch Available  (was: Open)





[jira] [Created] (HADOOP-9225) Cover package org.apache.hadoop.compress.Snappy

2013-01-18 Thread Vadim Bondarev (JIRA)
Vadim Bondarev created HADOOP-9225:
--

 Summary: Cover package org.apache.hadoop.compress.Snappy
 Key: HADOOP-9225
 URL: https://issues.apache.org/jira/browse/HADOOP-9225
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev






[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557099#comment-13557099
 ] 

Zesheng Wu commented on HADOOP-9223:


For example, we implemented a deploy-and-manage toolkit for our hadoop 
clusters, where most operations are done on the shell command line. We do not 
want to maintain several config files for each cluster, so we use the toolkit 
to generate the config at runtime and pass it via 
'-Dhadoop.property.$key=$value'; it's really handy :)

> support specify config items through system property
> 
>
> Key: HADOOP-9223
> URL: https://issues.apache.org/jira/browse/HADOOP-9223
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 2.0.0-alpha
>Reporter: Zesheng Wu
>Priority: Minor
>  Labels: configuration, hadoop
> Fix For: 2.0.0-alpha
>
> Attachments: HADOOP-9223.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current Hadoop config items are mainly loaded from the *-site.xml 
> files. In our production environment, we need a mechanism to specify config 
> items through system properties, similar to the gflags used in systems 
> built with C++; it is really handy.
> The main purpose of this patch is to make Hadoop more convenient to work 
> with, especially for testing or performance tuning, which otherwise always 
> requires modifying the *-site.xml files.
> If this patch is applied, people can start Hadoop programs like this: 
> java -cp $class_path -Dhadoop.property.$name=$value $program



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557096#comment-13557096
 ] 

Zesheng Wu commented on HADOOP-9223:


Substitution does cover the basic functionality, but it needs a *-site.xml 
file as a template, which isn't always convenient.




[jira] [Updated] (HADOOP-9224) RPC.Handler prints response size for each call

2013-01-18 Thread Liyin Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Liang updated HADOOP-9224:


Attachment: 9224.diff

Attaching a patch based on branch-1.1.

> RPC.Handler prints response size for each call
> --
>
> Key: HADOOP-9224
> URL: https://issues.apache.org/jira/browse/HADOOP-9224
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.2.0
>Reporter: Liyin Liang
>Priority: Minor
> Attachments: 9224.diff
>
>
> Sometimes the JobTracker's RPC responses can saturate all the bandwidth in 
> our production cluster. In that case we need to find out which kind of 
> request produces the largest responses, so the handler should print the 
> response size for each call.



[jira] [Created] (HADOOP-9224) RPC.Handler prints response size for each call

2013-01-18 Thread Liyin Liang (JIRA)
Liyin Liang created HADOOP-9224:
---

 Summary: RPC.Handler prints response size for each call
 Key: HADOOP-9224
 URL: https://issues.apache.org/jira/browse/HADOOP-9224
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 1.2.0
Reporter: Liyin Liang
Priority: Minor


Sometimes the JobTracker's RPC responses can saturate all the bandwidth in our 
production cluster. In that case we need to find out which kind of request 
produces the largest responses, so the handler should print the response size 
for each call.
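A minimal sketch of the idea, independent of Hadoop's actual RPC internals: once a call's response has been serialized, measure the buffer and include its size in the handler's log line. The class and method names here are illustrative, not Hadoop's real API.

```java
import java.nio.ByteBuffer;

public class ResponseSizeLogger {
    // After a call's response has been serialized into a buffer, report
    // how many bytes it occupies so oversized responses can be identified.
    static String formatResponseLog(String methodName, ByteBuffer response) {
        return "Served " + methodName
                + ", response size " + response.remaining() + " bytes";
    }

    public static void main(String[] args) {
        ByteBuffer response = ByteBuffer.allocate(128);
        System.out.println(formatResponseLog("getJobStatus", response));
    }
}
```

Aggregating these log lines per method name would then show which request type dominates the outbound traffic.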



[jira] [Commented] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557058#comment-13557058
 ] 

Harsh J commented on HADOOP-9223:
-

This is currently possible via substitution: 
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html.
 Does that alone not suffice?
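The substitution Harsh points to is Configuration's ${...} variable expansion: when a value contains ${var}, Configuration resolves the variable, consulting JVM system properties among other sources, so a template *-site.xml plus a -D flag already gives a limited form of the requested behavior. A hedged sketch (the variable name my.rootdir is purely illustrative):

```xml
<!-- core-site.xml fragment: ${my.rootdir} is expanded when the value is
     read; launching the JVM with -Dmy.rootdir=/data/1 fills it in. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>${my.rootdir}/hadoop-tmp</value>
  </property>
</configuration>
```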




[jira] [Commented] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests

2013-01-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13557057#comment-13557057
 ] 

Hadoop QA commented on HADOOP-9222:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12565449/HADOOP-9222-trunk-a.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2069//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/2069//console

This message is automatically generated.

> Cover package with org.apache.hadoop.io.lz4 unit tests
> --
>
> Key: HADOOP-9222
> URL: https://issues.apache.org/jira/browse/HADOOP-9222
> Project: Hadoop Common
>  Issue Type: Test
>Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
>Reporter: Vadim Bondarev
> Attachments: HADOOP-9222-branch-0.23-a.patch, 
> HADOOP-9222-branch-2-a.patch, HADOOP-9222-trunk-a.patch
>
>
> Add a test class, TestLz4CompressorDecompressor, with methods covering 
> Lz4Compressor and Lz4Decompressor.



[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zesheng Wu updated HADOOP-9223:
---

Description: 
The current hadoop config items are mainly interpolated from the *-site.xml 
files. In our production environment, we need a mechanism that can specify 
config items through system properties, which is something like the gflags in 
system built with C++, it's really very handy.
The main purpose of this patch is to improve the convenience of hadoop systems, 
especially when people do testing or perf tuning, which always need to modify 
the *-site.xml files
If this patch is applied, then people can start hadoop programs in this way: 
java -cp $class_path -Dhadoop.property.$name=$value $program

  was:
The current hadoop config items are mainly interpolated from the *-site.xml 
files. In our production environment, we need a mechanism that can specify 
config items through system properties, which is something like the gflags for 
system built with C++, it's really very handy.
The main purpose of this patch is to improve the convenience of hadoop systems, 
especially when people do testing or perf tuning, which always need to modify 
the *-site.xml files
If this patch is applied, then people can start hadoop programs in this way: 
java -cp $class_path -Dhadoop.property.$name=$value $program





[jira] [Updated] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zesheng Wu updated HADOOP-9223:
---

Attachment: HADOOP-9223.patch




[jira] [Created] (HADOOP-9223) support specify config items through system property

2013-01-18 Thread Zesheng Wu (JIRA)
Zesheng Wu created HADOOP-9223:
--

 Summary: support specify config items through system property
 Key: HADOOP-9223
 URL: https://issues.apache.org/jira/browse/HADOOP-9223
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Zesheng Wu
Priority: Minor
 Fix For: 2.0.0-alpha


The current Hadoop config items are mainly loaded from the *-site.xml files. 
In our production environment, we need a mechanism to specify config items 
through system properties, similar to the gflags used in systems built with 
C++; it is really handy.
The main purpose of this patch is to make Hadoop more convenient to work 
with, especially for testing or performance tuning, which otherwise always 
requires modifying the *-site.xml files.
If this patch is applied, people can start Hadoop programs like this: 
java -cp $class_path -Dhadoop.property.$name=$value $program



[jira] [Updated] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests

2013-01-18 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9222:
---

Status: Patch Available  (was: Open)




[jira] [Updated] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests

2013-01-18 Thread Vadim Bondarev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vadim Bondarev updated HADOOP-9222:
---

Attachment: HADOOP-9222-trunk-a.patch
HADOOP-9222-branch-2-a.patch
HADOOP-9222-branch-0.23-a.patch




[jira] [Created] (HADOOP-9222) Cover package with org.apache.hadoop.io.lz4 unit tests

2013-01-18 Thread Vadim Bondarev (JIRA)
Vadim Bondarev created HADOOP-9222:
--

 Summary: Cover package with org.apache.hadoop.io.lz4 unit tests
 Key: HADOOP-9222
 URL: https://issues.apache.org/jira/browse/HADOOP-9222
 Project: Hadoop Common
  Issue Type: Test
Affects Versions: 3.0.0, 2.0.3-alpha, 0.23.6
Reporter: Vadim Bondarev
 Attachments: HADOOP-9222-branch-0.23-a.patch, 
HADOOP-9222-branch-2-a.patch, HADOOP-9222-trunk-a.patch

Add a test class, TestLz4CompressorDecompressor, with methods covering 
Lz4Compressor and Lz4Decompressor.
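The round-trip pattern such a test would follow can be sketched without the native LZ4 codec by using java.util.zip as a stand-in; the real test would drive Lz4Compressor/Lz4Decompressor through the same compress-then-decompress cycle. The class and method names below are illustrative.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class RoundTripTest {
    // Compress a buffer, decompress it, and return the restored bytes;
    // the test then asserts they match the original input exactly.
    static byte[] roundTrip(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        // Oversized output buffer so a single deflate() call suffices
        // for small inputs.
        byte[] compressed = new byte[input.length * 2 + 64];
        int compressedLen = deflater.deflate(compressed);

        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] restored = new byte[input.length];
        try {
            inflater.inflate(restored);
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
        return restored;
    }

    public static void main(String[] args) {
        byte[] data = "hello hadoop lz4 coverage".getBytes(StandardCharsets.UTF_8);
        System.out.println(Arrays.equals(data, roundTrip(data)));
    }
}
```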
