[jira] [Updated] (HADOOP-7528) Maven build fails in Windows

2011-08-09 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7528:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks Alejandro!

> Maven build fails in Windows
> 
>
> Key: HADOOP-7528
> URL: https://issues.apache.org/jira/browse/HADOOP-7528
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
> Environment: Windows
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7528v1.patch
>
>
> Maven does not run in Windows for the following reasons:
> * The Enforcer plugin restricts the build to Unix
> * The antrun snippets that create the TAR are not Cygwin-friendly
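For context on the first bullet: a Unix-only restriction like this typically comes from a maven-enforcer-plugin OS rule. The following is an illustrative sketch of such a rule, not necessarily the exact configuration in the Hadoop POMs:

```xml
<!-- Illustrative only: an enforcer execution that restricts the
     build to the Unix OS family. Relaxing or removing the
     requireOS rule is what allows the build on Windows/Cygwin. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-os</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireOS>
            <family>unix</family>
          </requireOS>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```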

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7528) Maven build fails in Windows

2011-08-09 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081786#comment-13081786
 ] 

Tom White commented on HADOOP-7528:
---

+1

Tests pass and test-patch gives the following (no new tests since this is a 
build change):

{noformat}
-1 overall.  

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 system test framework.  The patch passed system test framework compile.
{noformat}

> Maven build fails in Windows
> 
>
> Key: HADOOP-7528
> URL: https://issues.apache.org/jira/browse/HADOOP-7528
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
> Environment: Windows
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7528v1.patch
>
>
> Maven does not run in Windows for the following reasons:
> * The Enforcer plugin restricts the build to Unix
> * The antrun snippets that create the TAR are not Cygwin-friendly





[jira] [Updated] (HADOOP-7525) Make arguments to test-patch optional

2011-08-08 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7525:
--

Attachment: HADOOP-7525.patch

Here's a patch for this.
* I checked and test-patch.sh already uses smart-apply-patch.sh
* I removed the curl option since it wasn't used.
* I also added a {{--dirty-workspace}} option that allows the workspace to have 
uncommitted changes in it. This is useful if you need to move some files around 
in SVN before applying your patch. It's also useful for testing changes to 
test-patch.sh itself.

Usage:
{noformat}
Usage: dev-support/test-patch.sh [options] patch-file | defect-number

Where:
  patch-file is a local patch file containing the changes to test
  defect-number is a JIRA defect number (e.g. 'HADOOP-1234') to test (Jenkins 
only)

Options:
--patch-dir=  The directory for working and output files (default 
'/tmp')
--basedir=The directory to apply the patch to (default current 
directory)
--mvn-cmd=The 'mvn' command to use (default $MAVEN_HOME/bin/mvn, 
or 'mvn')
--ps-cmd= The 'ps' command to use (default 'ps')
--awk-cmd=The 'awk' command to use (default 'awk')
--svn-cmd=The 'svn' command to use (default 'svn')
--grep-cmd=   The 'grep' command to use (default 'grep')
--patch-cmd=  The 'patch' command to use (default 'patch')
--findbugs-home= Findbugs home directory (default FINDBUGS_HOME 
environment variable)
--forrest-home=  Forrest home directory (default FORREST_HOME environment 
variable)
--dirty-workspace  Allow the local SVN workspace to have uncommitted changes

Jenkins-only options:
--jenkins  Run by Jenkins (runs tests and posts results to JIRA)
--support-dir=The directory to find support files in
--wget-cmd=   The 'wget' command to use (default 'wget')
--jira-cmd=   The 'jira' command to use (default 'jira')
--jira-password=   The password for the 'jira' command
--eclipse-home=  Eclipse home directory (default ECLIPSE_HOME environment 
variable)
{noformat}
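Internally, optional arguments like these are typically handled with a case statement that falls back to defaults when an option is absent. The following is a minimal, self-contained sketch of that idiom, not the actual test-patch.sh source (option and variable names are illustrative):

```shell
# Sketch of default-plus-override option parsing, in the style a
# script like test-patch.sh might use (not the actual source).
parse_args() {
  PATCH_DIR=/tmp          # default working directory
  MVN_CMD=mvn             # default 'mvn' command
  DIRTY_WORKSPACE=false   # workspace must be clean by default
  PATCH_FILE=
  for arg in "$@"; do
    case $arg in
      --patch-dir=*) PATCH_DIR=${arg#--patch-dir=} ;;
      --mvn-cmd=*)   MVN_CMD=${arg#--mvn-cmd=} ;;
      --dirty-workspace) DIRTY_WORKSPACE=true ;;
      *) PATCH_FILE=$arg ;;   # anything else is the patch file
    esac
  done
  echo "$PATCH_DIR $MVN_CMD $DIRTY_WORKSPACE $PATCH_FILE"
}

parse_args --patch-dir=/var/tmp --dirty-workspace HADOOP-7525.patch
```

Because every option has a default, a bare `parse_args HADOOP-1234.patch` also works, which is the point of this issue.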

> Make arguments to test-patch optional
> -
>
> Key: HADOOP-7525
> URL: https://issues.apache.org/jira/browse/HADOOP-7525
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7525.patch
>
>
> Currently you have to specify all the arguments to test-patch.sh, which makes 
> it cumbersome to use. We should make all arguments except the patch file 
> optional. 





[jira] [Assigned] (HADOOP-7525) Make arguments to test-patch optional

2011-08-07 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White reassigned HADOOP-7525:
-

Assignee: Tom White

> Make arguments to test-patch optional
> -
>
> Key: HADOOP-7525
> URL: https://issues.apache.org/jira/browse/HADOOP-7525
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: scripts
>Reporter: Tom White
>Assignee: Tom White
>
> Currently you have to specify all the arguments to test-patch.sh, which makes 
> it cumbersome to use. We should make all arguments except the patch file 
> optional. 





[jira] [Updated] (HADOOP-7523) Test org.apache.hadoop.fs.TestFilterFileSystem fails due to java.lang.NoSuchMethodException

2011-08-07 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7523:
--

  Resolution: Fixed
Assignee: John Lee
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, John!

> Test org.apache.hadoop.fs.TestFilterFileSystem fails due to 
> java.lang.NoSuchMethodException
> ---
>
> Key: HADOOP-7523
> URL: https://issues.apache.org/jira/browse/HADOOP-7523
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: John Lee
>Assignee: John Lee
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: HADOOP-7523.patch
>
>
> Test org.apache.hadoop.fs.TestFilterFileSystem fails due to 
> java.lang.NoSuchMethodException. Here is the error message:
> ---
> Test set: org.apache.hadoop.fs.TestFilterFileSystem
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec <<< 
> FAILURE!
> testFilterFileSystem(org.apache.hadoop.fs.TestFilterFileSystem)  Time 
> elapsed: 0.075 sec  <<< ERROR!
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.FilterFileSystem.copyToLocalFile(boolean, 
> org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path, boolean)
>   at java.lang.Class.getDeclaredMethod(Class.java:1937)
>   at 
> org.apache.hadoop.fs.TestFilterFileSystem.testFilterFileSystem(TestFilterFileSystem.java:157)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:232)
>   at junit.framework.TestSuite.run(TestSuite.java:227)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
>   at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
>   at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
>   at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
>   at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> This prevents a clean build.
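For context on why this surfaces as a NoSuchMethodException: {{Class.getDeclaredMethod}} finds only methods declared directly on the class, not inherited ones, so the test can use it to assert that FilterFileSystem actually overrides a given FileSystem method. A minimal, self-contained illustration (the Base/Child classes are hypothetical stand-ins, not Hadoop code):

```java
// getDeclaredMethod sees only methods declared on the class itself;
// an inherited-but-not-overridden method raises NoSuchMethodException.
public class DeclaredMethodDemo {
    static class Base {
        void copy(boolean flag) {}  // declared on Base only
        void delete() {}            // overridden by Child below
    }

    static class Child extends Base {
        @Override
        void delete() {}
    }

    public static void main(String[] args) throws Exception {
        // Overridden method: declared on Child, so this succeeds.
        Child.class.getDeclaredMethod("delete");
        try {
            // Inherited method: not declared on Child, so this throws,
            // which is how a test like TestFilterFileSystem detects a
            // missing override.
            Child.class.getDeclaredMethod("copy", boolean.class);
            throw new AssertionError("expected NoSuchMethodException");
        } catch (NoSuchMethodException expected) {
            System.out.println("copy(boolean) is not declared on Child");
        }
    }
}
```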





[jira] [Commented] (HADOOP-7523) Test org.apache.hadoop.fs.TestFilterFileSystem fails due to java.lang.NoSuchMethodException

2011-08-07 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13080736#comment-13080736
 ] 

Tom White commented on HADOOP-7523:
---

+1 This fixes the failing test and all other tests pass. Output from test-patch:

{noformat}
+1 overall.  

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 system test framework.  The patch passed system test framework compile.
{noformat}

> Test org.apache.hadoop.fs.TestFilterFileSystem fails due to 
> java.lang.NoSuchMethodException
> ---
>
> Key: HADOOP-7523
> URL: https://issues.apache.org/jira/browse/HADOOP-7523
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: John Lee
>Priority: Blocker
> Fix For: 0.23.0
>
> Attachments: HADOOP-7523.patch
>
>
> Test org.apache.hadoop.fs.TestFilterFileSystem fails due to 
> java.lang.NoSuchMethodException. Here is the error message:
> ---
> Test set: org.apache.hadoop.fs.TestFilterFileSystem
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec <<< 
> FAILURE!
> testFilterFileSystem(org.apache.hadoop.fs.TestFilterFileSystem)  Time 
> elapsed: 0.075 sec  <<< ERROR!
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.fs.FilterFileSystem.copyToLocalFile(boolean, 
> org.apache.hadoop.fs.Path, org.apache.hadoop.fs.Path, boolean)
>   at java.lang.Class.getDeclaredMethod(Class.java:1937)
>   at 
> org.apache.hadoop.fs.TestFilterFileSystem.testFilterFileSystem(TestFilterFileSystem.java:157)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at junit.framework.TestCase.runTest(TestCase.java:168)
>   at junit.framework.TestCase.runBare(TestCase.java:134)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:232)
>   at junit.framework.TestSuite.run(TestSuite.java:227)
>   at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at 
> org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:59)
>   at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.executeTestSet(AbstractDirectoryTestSuite.java:120)
>   at 
> org.apache.maven.surefire.suite.AbstractDirectoryTestSuite.execute(AbstractDirectoryTestSuite.java:145)
>   at org.apache.maven.surefire.Surefire.run(Surefire.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.apache.maven.surefire.booter.SurefireBooter.runSuitesInProcess(SurefireBooter.java:290)
>   at 
> org.apache.maven.surefire.booter.SurefireBooter.main(SurefireBooter.java:1017)
> This prevents a clean build.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-08-07 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13080726#comment-13080726
 ] 

Tom White commented on HADOOP-6671:
---

BTW I've just opened HADOOP-7525 to simplify the script.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish Hadoop artifacts to the Maven repo successfully 
> (HADOOP-6382).
> Drawbacks of the current approach:
> * It uses Ivy (ivy.xml) for dependency management
> * It uses maven-ant-tasks to publish artifacts to the Maven repository
> * POM files are not generated dynamically 
> To address this, I propose we use Maven to build hadoop-common, which would 
> help us manage dependencies, publish artifacts, and have one single XML 
> file (the POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop-common.





[jira] [Created] (HADOOP-7525) Make arguments to test-patch optional

2011-08-07 Thread Tom White (JIRA)
Make arguments to test-patch optional
-

 Key: HADOOP-7525
 URL: https://issues.apache.org/jira/browse/HADOOP-7525
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tom White


Currently you have to specify all the arguments to test-patch.sh, which makes 
it cumbersome to use. We should make all arguments except the patch file 
optional. 





[jira] [Commented] (HADOOP-7520) hadoop-main fails to deploy

2011-08-05 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13080067#comment-13080067
 ] 

Tom White commented on HADOOP-7520:
---

+1 This fixes the problem for me.

> hadoop-main fails to deploy
> ---
>
> Key: HADOOP-7520
> URL: https://issues.apache.org/jira/browse/HADOOP-7520
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7520.patch
>
>
> When doing a Maven deployment, hadoop-main (trunk/pom.xml) fails to deploy 
> because it does not have the distribution management information.





[jira] [Commented] (HADOOP-7516) Mavenized test-patch.sh must skip tests

2011-08-05 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13080040#comment-13080040
 ] 

Tom White commented on HADOOP-7516:
---

The problem was that 'root' and 'doclet' didn't exist, so the 'cd' failed and 
the whole project was built (including running tests). This was fixed as part 
of HADOOP-7515 to build only hadoop-project and hadoop-annotations, so I think 
this issue can be closed now.
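The failure mode described here is a common shell pitfall: when an unguarded `cd` fails, the script stays in its current directory and the next command runs against the wrong tree (here, building and testing the whole project). A guarded sketch of the idiom (directory names illustrative, not the actual test-patch.sh source):

```shell
# If 'cd' into a module directory fails, abort instead of silently
# building the whole project from the current directory.
build_module() {
  cd "$1" || { echo "skip: $1 missing" >&2; return 1; }
  echo "building in $1"
  # mvn install -DskipTests   # the real build step would go here
}

build_module no-such-module || echo "aborted cleanly"
build_module /tmp
```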

> Mavenized test-patch.sh must skip tests
> ---
>
> Key: HADOOP-7516
> URL: https://issues.apache.org/jira/browse/HADOOP-7516
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.23.0
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
> Fix For: 0.23.0
>
> Attachments: HADOOP-7516.patch
>
>
> test-patch.sh calls mvn install with -DskipTests. Tests need to be skipped.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7517) hadoop common build fails creating docs

2011-08-05 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1308#comment-1308
 ] 

Tom White commented on HADOOP-7517:
---

It looks like Forrest isn't installed on the build machine. 

> hadoop common build fails creating docs
> ---
>
> Key: HADOOP-7517
> URL: https://issues.apache.org/jira/browse/HADOOP-7517
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>
> post hadoop-6671 merge 
> executing the following command fails on creating docs 
> $MAVEN_HOME/bin/mvn clean verify checkstyle:checkstyle findbugs:findbugs 
> -DskipTests -Pbintar -Psrc -Pnative -Pdocs
> {noformat}
> Main:
> [mkdir] Created dir: 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src
>  [copy] Copying 33 files to 
> /home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1:33.807s
> [INFO] Finished at: Fri Aug 05 08:50:43 UTC 2011
> [INFO] Final Memory: 35M/462M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.6:run (site) on project 
> hadoop-common: An Ant BuildException has occured: Execute failed: 
> java.io.IOException: Cannot run program 
> "/home/hudson/tools/forrest/latest/bin/forrest" (in directory 
> "/home/jenkins/jenkins-slave/workspace/Hadoop-Common-trunk-maven/trunk/hadoop-common/target/docs-src"):
>  java.io.IOException: error=2, No such file or directory -> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> [INFO] Scanning for projects...
> {noformat}





[jira] [Resolved] (HADOOP-7515) test-patch reports the wrong number of javadoc warnings

2011-08-05 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White resolved HADOOP-7515.
---

  Resolution: Fixed
Hadoop Flags: [Reviewed]

I've just committed this.

> test-patch reports the wrong number of javadoc warnings
> ---
>
> Key: HADOOP-7515
> URL: https://issues.apache.org/jira/browse/HADOOP-7515
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7515.patch, HADOOP-7515.patch
>
>






[jira] [Commented] (HADOOP-7499) Add method for doing a sanity check on hostnames in NetUtils

2011-08-04 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13079693#comment-13079693
 ] 

Tom White commented on HADOOP-7499:
---

Thanks Jeffrey. I've created a fix for this in HADOOP-7515.

> Add method for doing a sanity check on hostnames in NetUtils
> 
>
> Key: HADOOP-7499
> URL: https://issues.apache.org/jira/browse/HADOOP-7499
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.23.0
>Reporter: Jeffrey Naisbitt
>Assignee: Jeffrey Naisbitt
> Fix For: 0.23.0
>
> Attachments: HADOOP-7499.patch, patchJavadocWarnings.txt
>
>
> As part of MAPREDUCE-2489, we need a method in NetUtils to do a sanity check 
> on hostnames





[jira] [Updated] (HADOOP-7515) test-patch reports the wrong number of javadoc warnings

2011-08-04 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7515:
--

Attachment: HADOOP-7515.patch

Updated patch with Todd's suggestions.

> test-patch reports the wrong number of javadoc warnings
> ---
>
> Key: HADOOP-7515
> URL: https://issues.apache.org/jira/browse/HADOOP-7515
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7515.patch, HADOOP-7515.patch
>
>






[jira] [Updated] (HADOOP-7515) test-patch reports the wrong number of javadoc warnings

2011-08-04 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7515:
--

Attachment: HADOOP-7515.patch

This patch fixes the triple-counting issue. It also changes the expected 
number back to 6.

It also makes MAVEN_HOME optional, fixes the problem where the tests were 
mistakenly being run, and makes a few other minor changes. 

To test it, comment out line 677 ("checkout") to avoid it halting due to a 
modified workspace. 

> test-patch reports the wrong number of javadoc warnings
> ---
>
> Key: HADOOP-7515
> URL: https://issues.apache.org/jira/browse/HADOOP-7515
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7515.patch
>
>






[jira] [Created] (HADOOP-7515) test-patch reports the wrong number of javadoc warnings

2011-08-04 Thread Tom White (JIRA)
test-patch reports the wrong number of javadoc warnings
---

 Key: HADOOP-7515
 URL: https://issues.apache.org/jira/browse/HADOOP-7515
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Tom White
Assignee: Tom White
 Fix For: 0.23.0








[jira] [Updated] (HADOOP-7508) compiled nativelib is in wrong directory and it is not picked up by surefire setup

2011-08-03 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7508:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Alejandro.

> compiled nativelib is in wrong directory and it is not picked up by surefire 
> setup
> --
>
> Key: HADOOP-7508
> URL: https://issues.apache.org/jira/browse/HADOOP-7508
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7508.patch
>
>
> The location of the compiled native libraries differs from the one the 
> Surefire plugin (which runs the test cases) is configured to use.
> This makes test cases that use native libs fail to load them.





[jira] [Commented] (HADOOP-7508) compiled nativelib is in wrong directory and it is not picked up by surefire setup

2011-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078973#comment-13078973
 ] 

Tom White commented on HADOOP-7508:
---

+1

> compiled nativelib is in wrong directory and it is not picked up by surefire 
> setup
> --
>
> Key: HADOOP-7508
> URL: https://issues.apache.org/jira/browse/HADOOP-7508
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7508.patch
>
>
> The location of the compiled native libraries differs from the one the 
> Surefire plugin (which runs the test cases) is configured to use.
> This makes test cases that use native libs fail to load them.





[jira] [Commented] (HADOOP-7499) Add method for doing a sanity check on hostnames in NetUtils

2011-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078952#comment-13078952
 ] 

Tom White commented on HADOOP-7499:
---

I ran the same command. Can you attach the output warning files please?

> Add method for doing a sanity check on hostnames in NetUtils
> 
>
> Key: HADOOP-7499
> URL: https://issues.apache.org/jira/browse/HADOOP-7499
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.23.0
>Reporter: Jeffrey Naisbitt
>Assignee: Jeffrey Naisbitt
> Fix For: 0.23.0
>
> Attachments: HADOOP-7499.patch
>
>
> As part of MAPREDUCE-2489, we need a method in NetUtils to do a sanity check 
> on hostnames





[jira] [Updated] (HADOOP-7481) Wire AOP test in Mavenized Hadoop common

2011-08-03 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7481:
--

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-7412

> Wire AOP test in Mavenized Hadoop common
> 
>
> Key: HADOOP-7481
> URL: https://issues.apache.org/jira/browse/HADOOP-7481
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> We need to add a Maven profile that activates the AOP injection and runs the 
> necessary AOP-ed tests. I believe there should be a Maven plugin for doing 
> that.





[jira] [Updated] (HADOOP-7411) Convert remaining Ant based build to Maven

2011-08-03 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7411:
--

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-7412

> Convert remaining Ant based build to Maven
> --
>
> Key: HADOOP-7411
> URL: https://issues.apache.org/jira/browse/HADOOP-7411
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> Once Mavenization is complete, do a second iteration to remove the antrun 
> calls; this may require writing some Mojos.
> The tricky things are native compilation and symlink handling (for native 
> libs) when creating the packaging.





[jira] [Updated] (HADOOP-7410) Mavenize common RPM/DEB

2011-08-03 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7410:
--

Issue Type: Sub-task  (was: Task)
Parent: HADOOP-7412

> Mavenize common RPM/DEB
> ---
>
> Key: HADOOP-7410
> URL: https://issues.apache.org/jira/browse/HADOOP-7410
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>
> Mavenize RPM/DEB generation





[jira] [Updated] (HADOOP-7501) publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo

2011-08-03 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7501:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Alejandro!

> publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo
> ---
>
> Key: HADOOP-7501
> URL: https://issues.apache.org/jira/browse/HADOOP-7501
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7501.patch, HADOOP-7501b.patch
>
>
> A *distributionManagement* section must be added to the hadoop-project POM 
> with the SNAPSHOTs section, then 'mvn deploy' will push the artifacts to it.
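A {{distributionManagement}} section of roughly the following shape is what the description calls for; the repository id and URL below are the conventional Apache snapshot-repository values, shown as an illustration rather than the exact committed patch:

```xml
<!-- Illustrative sketch, not the exact committed change. -->
<distributionManagement>
  <snapshotRepository>
    <id>apache.snapshots.https</id>
    <name>Apache Development Snapshot Repository</name>
    <url>https://repository.apache.org/content/repositories/snapshots</url>
  </snapshotRepository>
</distributionManagement>
```

With this in the hadoop-project POM (and inherited by the modules), 'mvn deploy' pushes SNAPSHOT artifacts to that repository.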





[jira] [Commented] (HADOOP-7502) Use canonical (IDE friendly) generated-sources directory for generated sources

2011-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078823#comment-13078823
 ] 

Tom White commented on HADOOP-7502:
---

I tried this with Eclipse (using {{mvn eclipse:eclipse}}) but unfortunately it 
still didn't pick up the generated test source tree. I didn't try with 
m2eclipse though (https://issues.sonatype.org/browse/MNGECLIPSE-2387 might be 
relevant here). 

> Use canonical (IDE friendly) generated-sources directory for generated sources
> --
>
> Key: HADOOP-7502
> URL: https://issues.apache.org/jira/browse/HADOOP-7502
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Luke Lu
>Assignee: Luke Lu
> Fix For: 0.23.0
>
> Attachments: hadoop-7502-v1.patch
>
>






[jira] [Commented] (HADOOP-7499) Add method for doing a sanity check on hostnames in NetUtils

2011-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078819#comment-13078819
 ] 

Tom White commented on HADOOP-7499:
---

> I didn't see any new problems in the javadoc output, so I'm not sure where 
> the -1 is coming from there. I didn't see anything related to this patch.

I tried running test-patch.sh on your patch but I couldn't reproduce this (it 
reported no warnings). Can you have a look in the generated 
patchJavadocWarnings.txt file for the warnings?

> Add method for doing a sanity check on hostnames in NetUtils
> 
>
> Key: HADOOP-7499
> URL: https://issues.apache.org/jira/browse/HADOOP-7499
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.23.0
>Reporter: Jeffrey Naisbitt
>Assignee: Jeffrey Naisbitt
> Fix For: 0.23.0
>
> Attachments: HADOOP-7499.patch
>
>
> As part of MAPREDUCE-2489, we need a method in NetUtils to do a sanity check 
> on hostnames





[jira] [Commented] (HADOOP-7501) publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo

2011-08-03 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078817#comment-13078817
 ] 

Tom White commented on HADOOP-7501:
---

That looks like a good fix to me.

> publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo
> ---
>
> Key: HADOOP-7501
> URL: https://issues.apache.org/jira/browse/HADOOP-7501
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7501.patch, HADOOP-7501b.patch
>
>
> A *distributionManagement* section must be added to the hadoop-project POM 
> with the SNAPSHOTs section, then 'mvn deploy' will push the artifacts to it.





[jira] [Updated] (HADOOP-7501) publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo

2011-08-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7501:
--

Attachment: HADOOP-7501.patch

I used this patch to successfully publish artifacts to the Apache Snapshot 
repo. However, it was not possible to publish hadoop-assemblies since it 
doesn't inherit from hadoop-project; I'm not sure if that's a problem.

I verified that I could build HDFS and MapReduce with Ant pulling jars from the 
Snapshot repo (ant veryclean jar).

> publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo
> ---
>
> Key: HADOOP-7501
> URL: https://issues.apache.org/jira/browse/HADOOP-7501
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7501.patch
>
>
> A *distributionManagement* section must be added to the hadoop-project POM 
> with the SNAPSHOTs section, then 'mvn deploy' will push the artifacts to it.





[jira] [Assigned] (HADOOP-7501) publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo

2011-08-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White reassigned HADOOP-7501:
-

Assignee: Tom White

> publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo
> ---
>
> Key: HADOOP-7501
> URL: https://issues.apache.org/jira/browse/HADOOP-7501
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.23.0
>Reporter: Alejandro Abdelnur
>Assignee: Tom White
> Fix For: 0.23.0
>
> Attachments: HADOOP-7501.patch
>
>
> A *distributionManagement* section must be added to the hadoop-project POM 
> with the SNAPSHOTs section, then 'mvn deploy' will push the artifacts to it.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-08-02 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078562#comment-13078562
 ] 

Tom White commented on HADOOP-6671:
---

It looks like you might not be running from the top-level (i.e. the directory 
containing hadoop-common, hdfs, mapreduce etc). Can you try from there? Thanks, 
Tom.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-08-02 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078533#comment-13078533
 ] 

Tom White commented on HADOOP-6671:
---

Hi Nicholas,

Try building common with {{mvn clean install -DskipTests}}, then HDFS with 
{{ant veryclean compile -Dresolvers=internal}}.

I'll see if I can get the hadoop-annotations jar published to the Apache 
snapshot repo so the first step isn't necessary.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Updated] (HADOOP-7500) Write a script to migrate patches to Maven layout

2011-08-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7500:
--

Attachment: mavenize-common.sed

Here's a simple sed script to fix patches. Usage:

{noformat}
sed -E -f mavenize-common.sed < old.patch > new.patch
{noformat}

Note that some paths may not be converted by this script: these will need to 
be changed manually.
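To illustrate the kind of rewrite such a script performs, here is a runnable one-liner. The rule below is a hypothetical example, not taken from mavenize-common.sed, and the old-to-new path mapping shown is an assumption:

```shell
# Hypothetical example of a single rewrite rule of the kind mavenize-common.sed
# might contain: map an old-layout diff header path (src/java/...) to the new
# Maven layout (hadoop-common/src/main/java/...). The path mapping is assumed.
echo '--- src/java/org/apache/hadoop/fs/FileSystem.java' \
  | sed -E 's|^(--- )src/java/|\1hadoop-common/src/main/java/|'
# prints: --- hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
```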

> Write a script to migrate patches to Maven layout
> -
>
> Key: HADOOP-7500
> URL: https://issues.apache.org/jira/browse/HADOOP-7500
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: mavenize-common.sed
>
>
> HADOOP-6671 changed the source directory layout. It would be useful to have a 
> script to fix patches that were written with the old layout.





[jira] [Created] (HADOOP-7500) Write a script to migrate patches to Maven layout

2011-08-02 Thread Tom White (JIRA)
Write a script to migrate patches to Maven layout
-

 Key: HADOOP-7500
 URL: https://issues.apache.org/jira/browse/HADOOP-7500
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Tom White
Assignee: Tom White


HADOOP-6671 changed the source directory layout. It would be useful to have a 
script to fix patches that were written with the old layout.





[jira] [Updated] (HADOOP-6671) To use maven for hadoop common builds

2011-08-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-6671:
--

Fix Version/s: 0.23.0

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Updated] (HADOOP-6671) To use maven for hadoop common builds

2011-08-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-6671:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've committed this. Thanks, Alejandro!

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-07-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13073098#comment-13073098
 ] 

Tom White commented on HADOOP-6671:
---

+1

I've tried out all the aspects of building common using Maven and it works 
well. I successfully managed to do cross-project builds for HDFS (HDFS-2196) 
and MapReduce (MAPREDUCE-2741). I created Jenkins jobs for building the tarball 
(https://builds.apache.org/job/Hadoop-Common-trunk-maven/) and test-patch 
(https://builds.apache.org/view/G-L/view/Hadoop/job/PreCommit-HADOOP-Build-maven/)
 and am happy that these can be switched over when this code goes in. (Note 
that they are not running at the moment since the Jenkins Hadoop machines are 
down.)

I've added the Maven equivalents to 
http://wiki.apache.org/hadoop/HowToContribute so it's easy for folks to see how 
to do the common operations (they are in the BUILDING.txt file in this patch 
too).


> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, HADOOP-6671-AD.patch, 
> HADOOP-6671-AD.sh, HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-07-29 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13072935#comment-13072935
 ] 

Tom White commented on HADOOP-6671:
---

There's a small issue with moving hadoop-common/src/test/bin to dev-support, 
since it is used as an svn:externals definition from HDFS and MapReduce. I 
suggest that we keep the bin directory in addition to creating the new 
dev-support directory: the test-patch.sh script has been modified to work 
with Maven, so we need both. When all three projects have been Mavenized 
we'll be able to remove the bin directory completely (and the svn:externals).

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-AA.patch, HADOOP-6671-AB.patch, 
> HADOOP-6671-AC.patch, HADOOP-6671-AC.sh, 
> HADOOP-6671-cross-project-HDFS.patch, HADOOP-6671-e.patch, 
> HADOOP-6671-f.patch, HADOOP-6671-g.patch, HADOOP-6671-h.patch, 
> HADOOP-6671-i.patch, HADOOP-6671-j.patch, HADOOP-6671-k.sh, 
> HADOOP-6671-l.patch, HADOOP-6671-m.patch, HADOOP-6671-n.patch, 
> HADOOP-6671-o.patch, HADOOP-6671-p.patch, HADOOP-6671-q.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-AA.sh, mvn-layout-AB.sh, 
> mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout-k.sh, mvn-layout-l.sh, 
> mvn-layout-m.sh, mvn-layout-n.sh, mvn-layout-o.sh, mvn-layout-p.sh, 
> mvn-layout-q.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Commented] (HADOOP-7111) Several TFile tests failing when native libraries are present

2011-07-20 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13068605#comment-13068605
 ] 

Tom White commented on HADOOP-7111:
---

+1

The TODO you added should be removed along with the assertNull on the following 
line, since the Assert.fail is sufficient.

> Several TFile tests failing when native libraries are present
> -
>
> Key: HADOOP-7111
> URL: https://issues.apache.org/jira/browse/HADOOP-7111
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Todd Lipcon
>Assignee: Aaron T. Myers
>Priority: Critical
> Fix For: 0.23.0
>
> Attachments: hadoop-7111.0.patch
>
>
> When running tests with native libraries present, TestTFileByteArrays and 
> TestTFileJClassComparatorByteArrays fail on trunk. They don't seem to fail in 
> 0.20 with native libraries.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-07-18 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13067178#comment-13067178
 ] 

Tom White commented on HADOOP-6671:
---

* On naming, I think the "hadoop-" prefix in module names is unnecessary. I 
would prefer common/common-main, etc.
* I updated the pre-commit job for Maven to test that "bad" patch in 
HADOOP-7413. See the end of 
https://builds.apache.org/view/G-L/view/Hadoop/job/PreCommit-HADOOP-Build-maven/11/console.
 The only missing -1 is for the javadoc, which failed due to missing artifacts 
and can be fixed by adding a line calling "mvn install -DskipTests" in the 
section labelled "Pre-build trunk to verify trunk stability and javac warnings".
* We need to make sure that cross-project builds still work. When I tried 
"mvn install" and then building HDFS, I got an error that can be fixed with 
HADOOP-6671-cross-project-HDFS.patch, which I posted a while back. We should 
commit this, and do the same for MapReduce.
* Have you checked that you can run Hadoop from a tarball built using Maven? 
(BTW https://builds.apache.org/job/Hadoop-Common-trunk-maven/ is building 
nightly tarballs using Maven.)
* What needs doing to make the switchover as smooth as possible for 
developers? We should update http://wiki.apache.org/hadoop/HowToContribute 
(perhaps make a copy so it's available before the switch). Anything else?




> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671-f.patch, HADOOP-6671-g.patch, 
> HADOOP-6671-h.patch, HADOOP-6671-i.patch, HADOOP-6671-j.patch, 
> HADOOP-6671-k.sh, HADOOP-6671-l.patch, HADOOP-6671-m.patch, 
> HADOOP-6671-n.patch, HADOOP-6671-o.patch, HADOOP-6671-p.patch, 
> HADOOP-6671-q.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout-f.sh, 
> mvn-layout-k.sh, mvn-layout-l.sh, mvn-layout-m.sh, mvn-layout-n.sh, 
> mvn-layout-o.sh, mvn-layout-p.sh, mvn-layout-q.sh, mvn-layout.sh, 
> mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Updated] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-07-15 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7413:
--

Attachment: HADOOP-7413-bad.patch

Patch for testing test-patch error paths.

> Create Jenkins build for Maven patch testing
> 
>
> Key: HADOOP-7413
> URL: https://issues.apache.org/jira/browse/HADOOP-7413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7413-bad.patch, HADOOP-7413.patch, 
> HADOOP-7413.patch
>
>
> We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
> for the Maven build. Until this is live it would be triggered manually and 
> wouldn't post comments to JIRA.





[jira] [Updated] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-07-14 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7413:
--

Attachment: HADOOP-7413.patch

Another dummy patch.

> Create Jenkins build for Maven patch testing
> 
>
> Key: HADOOP-7413
> URL: https://issues.apache.org/jira/browse/HADOOP-7413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7413.patch, HADOOP-7413.patch
>
>
> We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
> for the Maven build. Until this is live it would be triggered manually and 
> wouldn't post comments to JIRA.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-07-13 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13064986#comment-13064986
 ] 

Tom White commented on HADOOP-6671:
---

I updated https://builds.apache.org/job/Hadoop-Common-trunk-maven/ to use patch 
'n'.

A couple of quick comments:
* I agree with Giri that common/common-main (or even common/main) is a better 
convention than common-main/common.
* Have you considered using the Apache parent POM? 
http://svn.apache.org/repos/asf/maven/pom/tags/maven-parent-9/pom.xml




> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671-f.patch, HADOOP-6671-g.patch, 
> HADOOP-6671-h.patch, HADOOP-6671-i.patch, HADOOP-6671-j.patch, 
> HADOOP-6671-k.sh, HADOOP-6671-l.patch, HADOOP-6671-m.patch, 
> HADOOP-6671-n.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, common-mvn-layout-i.sh, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout-f.sh, 
> mvn-layout-k.sh, mvn-layout-l.sh, mvn-layout-m.sh, mvn-layout-n.sh, 
> mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.





[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-06-24 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13054418#comment-13054418
 ] 

Tom White commented on HADOOP-7206:
---

TestCodec passes for me both when snappy is not installed and when snappy is 
installed and native compilation is enabled. +1

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: Alejandro Abdelnur
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> HADOOP-7206revertplusnew.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Commented] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-06-21 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052846#comment-13052846
 ] 

Tom White commented on HADOOP-7413:
---

This is just a test patch for the new Jenkins job to try out.

> Create Jenkins build for Maven patch testing
> 
>
> Key: HADOOP-7413
> URL: https://issues.apache.org/jira/browse/HADOOP-7413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7413.patch
>
>
> We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
> for the Maven build. Until this is live it would be triggered manually and 
> wouldn't post comments to JIRA.





[jira] [Updated] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-06-21 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7413:
--

Status: Patch Available  (was: Open)

> Create Jenkins build for Maven patch testing
> 
>
> Key: HADOOP-7413
> URL: https://issues.apache.org/jira/browse/HADOOP-7413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7413.patch
>
>
> We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
> for the Maven build. Until this is live it would be triggered manually and 
> wouldn't post comments to JIRA.





[jira] [Updated] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-06-21 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7413:
--

Attachment: HADOOP-7413.patch

> Create Jenkins build for Maven patch testing
> 
>
> Key: HADOOP-7413
> URL: https://issues.apache.org/jira/browse/HADOOP-7413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7413.patch
>
>
> We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
> for the Maven build. Until this is live it would be triggered manually and 
> wouldn't post comments to JIRA.





[jira] [Created] (HADOOP-7413) Create Jenkins build for Maven patch testing

2011-06-21 Thread Tom White (JIRA)
Create Jenkins build for Maven patch testing


 Key: HADOOP-7413
 URL: https://issues.apache.org/jira/browse/HADOOP-7413
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Tom White
Assignee: Tom White


We need an equivalent of https://builds.apache.org/job/PreCommit-HADOOP-Build 
for the Maven build. Until this is live it would be triggered manually and 
wouldn't post comments to JIRA.





[jira] [Updated] (HADOOP-7408) Add javadoc for SnappyCodec

2011-06-20 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7408:
--

Attachment: HADOOP-7408.patch

Looks good. I've updated your patch to annotate SnappyCodec as public evolving 
(like the other codecs in the package), and to provide a link to the snappy 
homepage from this class.

> Add javadoc for SnappyCodec
> ---
>
> Key: HADOOP-7408
> URL: https://issues.apache.org/jira/browse/HADOOP-7408
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: T Jake Luciani
>Assignee: T Jake Luciani
>Priority: Trivial
> Fix For: 0.23.0
>
> Attachments: HADOOP-7408.patch, v1-HADOOP-7408-add-snappy-javadoc.txt
>
>
> HADOOP-7206 failed to include a javadoc for public methods.





[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-06-20 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052161#comment-13052161
 ] 

Tom White commented on HADOOP-7206:
---

Nicholas, I agree that javadoc is needed. Thanks for pointing it out.

T Jake, would you like to create a new patch which adds javadoc? I think a new 
JIRA would be fine.

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-06-20 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052112#comment-13052112
 ] 

Tom White commented on HADOOP-7206:
---

I've committed the fix to HADOOP-7407 after manually verifying it. 

> BTW, who has reviewed the patch?

I reviewed the patch. I marked it as "Reviewed" when committing it.

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Updated] (HADOOP-7407) Snappy integration breaks HDFS build.

2011-06-20 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7407:
--

   Resolution: Fixed
Fix Version/s: 0.23.0
 Assignee: Alejandro Abdelnur
 Release Note: 
I've just committed this. Thanks Alejandro.

I manually checked that HDFS and MapReduce both build. Sorry for not doing this 
earlier.
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

> Snappy integration breaks HDFS build.
> -
>
> Key: HADOOP-7407
> URL: https://issues.apache.org/jira/browse/HADOOP-7407
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ivan Kelly
>Assignee: Alejandro Abdelnur
>Priority: Critical
> Fix For: 0.23.0
>
> Attachments: HADOOP-7407.patch
>
>
> The common/ivy/hadoop-common-template.xml submitted with 7206 has a typo 
> which breaks anything that depends on the hadoop-common maven package.
> Instead of {{java-snappy}}, you should have {{snappy-java}}.
> [ivy:resolve] downloading 
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-common/0.23.0-SNAPSHOT/hadoop-common-0.23.0-20110620.163810-177.jar
>  ...
> [ivy:resolve] ...
> [ivy:resolve] ..
> [ivy:resolve] ...
> [ivy:resolve] ...
> [ivy:resolve]  (1631kB)
> [ivy:resolve] .. (0kB)
> [ivy:resolve] [SUCCESSFUL ] 
> org.apache.hadoop#hadoop-common;0.23.0-SNAPSHOT!hadoop-common.jar (8441ms)
> [ivy:resolve] 
> [ivy:resolve] :: problems summary ::
> [ivy:resolve]  WARNINGS
> [ivy:resolve] module not found: 
> org.xerial.snappy#java-snappy;1.0.3-rc2
> [ivy:resolve]  apache-snapshot: tried
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.pom
> [ivy:resolve]   -- artifact 
> org.xerial.snappy#java-snappy;1.0.3-rc2!java-snappy.jar:
> [ivy:resolve]   
> https://repository.apache.org/content/repositories/snapshots/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.jar
> [ivy:resolve]  maven2: tried
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.pom
> [ivy:resolve]   -- artifact 
> org.xerial.snappy#java-snappy;1.0.3-rc2!java-snappy.jar:
> [ivy:resolve]   
> http://repo1.maven.org/maven2/org/xerial/snappy/java-snappy/1.0.3-rc2/java-snappy-1.0.3-rc2.jar
> [ivy:resolve] ::
> [ivy:resolve] ::  UNRESOLVED DEPENDENCIES ::
> [ivy:resolve] ::
> [ivy:resolve] :: org.xerial.snappy#java-snappy;1.0.3-rc2: not 
> found
> [ivy:resolve] ::
> [ivy:resolve] 





[jira] [Updated] (HADOOP-7206) Integrate Snappy compression

2011-06-20 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7206:
--

   Resolution: Fixed
Fix Version/s: 0.23.0
 Assignee: T Jake Luciani  (was: issei yoshida)
 Release Note:   (was: This patch brings snappy compression to Hadoop.

It requires snappy v1.0.2 or higher,
because it uses the C interface rather than the C++ interface, and loads the
snappy native library dynamically via dlopen.

Its native library is part of libhadoop.)
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

I've just committed this. Thanks T Jake and Issei.

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-06-17 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051423#comment-13051423
 ] 

Tom White commented on HADOOP-7206:
---

I noticed that the compression overhead in this patch is {{(bufferSize >> 3) + 
128 + 3}} which is less than the maximum possible blowup that Snappy allows for 
(http://code.google.com/p/snappy/source/browse/trunk/snappy.cc#55). Should this 
be changed to {{bufferSize / 6 + 32}}?
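For illustration, the two overhead formulas can be compared numerically. This is a hedged sketch: the 256 KB buffer size is an assumed example value, not something from the patch, and the bound {{n/6 + 32}} is Snappy's documented worst-case expansion.

```java
// Compares the overhead the patch reserves, (bufferSize >> 3) + 128 + 3,
// with the overhead implied by Snappy's documented worst case,
// maxCompressedLength(n) = 32 + n + n/6, i.e. an overhead of n/6 + 32.
public class SnappyOverhead {

    // Overhead the patch under discussion reserves for a given buffer size.
    static int patchOverhead(int bufferSize) {
        return (bufferSize >> 3) + 128 + 3;
    }

    // Overhead required by Snappy's worst-case bound: bufferSize / 6 + 32.
    static int snappyBoundOverhead(int bufferSize) {
        return bufferSize / 6 + 32;
    }

    public static void main(String[] args) {
        int n = 256 * 1024; // illustrative buffer size, not from the patch
        System.out.println(patchOverhead(n));       // 32899
        System.out.println(snappyBoundOverhead(n)); // 43722
        // The patch reserves less than the worst case can produce.
        System.out.println(patchOverhead(n) < snappyBoundOverhead(n)); // true
    }
}
```

For any buffer size above roughly 2 KB the patch's formula reserves less than the bound, which is why incompressible input could overflow the output buffer.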

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: issei yoshida
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google released Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-06-14 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13049548#comment-13049548
 ] 

Tom White commented on HADOOP-6671:
---

I updated https://builds.apache.org/job/Hadoop-Common-trunk-maven/ to generate 
Clover reports.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671-f.patch, HADOOP-6671-g.patch, 
> HADOOP-6671-h.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout-f.sh, mvn-layout.sh, 
> mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file (POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop common.





[jira] [Updated] (HADOOP-7389) Use of TestingGroups by tests causes subsequent tests to fail

2011-06-14 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7389:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Aaron.

> Use of TestingGroups by tests causes subsequent tests to fail
> -
>
> Key: HADOOP-7389
> URL: https://issues.apache.org/jira/browse/HADOOP-7389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.23.0
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
> Fix For: 0.23.0
>
> Attachments: hadoop-7389.0.patch, hadoop-7389.1.patch
>
>
> As mentioned in HADOOP-6671, 
> {{UserGroupInformation.createUserForTesting(...)}} manipulates static state 
> which can cause test cases which are run after a call to this function to 
> fail.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-06-13 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13048870#comment-13048870
 ] 

Tom White commented on HADOOP-6671:
---

Per-test forking in Ant (and Maven) creates a new JVM per TestCase class (not 
per method). When I move testGetServerSideGroups() to be the last method in 
TestUserGroupInformation, it consistently fails in Ant, which suggests that it 
relies on static state (as Todd suggested).

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671-f.patch, HADOOP-6671-g.patch, 
> HADOOP-6671.patch, HADOOP-6671b.patch, HADOOP-6671c.patch, 
> HADOOP-6671d.patch, build.png, hadoop-commons-maven.patch, mvn-layout-e.sh, 
> mvn-layout-f.sh, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file (POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop common.





[jira] [Commented] (HADOOP-7384) Allow test-patch to be more flexible about patch format

2011-06-13 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13048836#comment-13048836
 ] 

Tom White commented on HADOOP-7384:
---

+1 This looks like a useful change.

> Allow test-patch to be more flexible about patch format
> ---
>
> Key: HADOOP-7384
> URL: https://issues.apache.org/jira/browse/HADOOP-7384
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 0.23.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-7384.txt
>
>
> Right now the test-patch process only accepts patches that are generated as 
> "-p0" relative to common/, hdfs/, or mapreduce/. This has always been 
> annoying for git users where the default patch format is -p1. It's also now 
> annoying for SVN users who may generate a patch relative to trunk/ instead of 
> the subproject subdirectory. We should auto-detect the correct patch level.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-06-12 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13048467#comment-13048467
 ] 

Tom White commented on HADOOP-6671:
---

Eric, 

I noticed the same test failures when running on Jenkins 
(https://builds.apache.org/job/Hadoop-Common-trunk-maven/8/), but I can't 
reproduce locally (on Mac), or on Jenkins with Ant (see 
https://builds.apache.org/job/Hadoop-Common-trunk/lastCompletedBuild/testReport/org.apache.hadoop.security/TestUserGroupInformation/testGetServerSideGroups/).
 Seems like some kind of environment issue.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout.sh, mvn-layout.sh, 
> mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file (POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop common.





[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-06-10 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047427#comment-13047427
 ] 

Tom White commented on HADOOP-6671:
---

A couple of other things I noticed:

We need to use a custom doclet for Javadoc (to exclude private classes). 
Something like

{code}
<doclet>org.apache.hadoop.classification.tools.ExcludePrivateAnnotationsStandardDoclet</doclet>
<docletArtifact>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>0.23.0-SNAPSHOT</version>
</docletArtifact>
<useStandardDocletOptions>true</useStandardDocletOptions>
{code}

An equivalent of the releaseaudit target is also needed in Maven.




> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout.sh, mvn-layout.sh, 
> mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file (POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop common.



[jira] [Commented] (HADOOP-7372) Remove ref of 20.3 release from branch-0.20 CHANGES.txt

2011-06-09 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047010#comment-13047010
 ] 

Tom White commented on HADOOP-7372:
---

+1

> Remove ref of 20.3 release from branch-0.20 CHANGES.txt
> ---
>
> Key: HADOOP-7372
> URL: https://issues.apache.org/jira/browse/HADOOP-7372
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Eli Collins
>Assignee: Eli Collins
> Fix For: 0.20.3
>
> Attachments: hadoop-7372-1.patch
>
>
> CHANGES.txt on branch-0.20 claims there was a 0.20.3 release on 1/5. There 
> has not been a 0.20.3 release.
> {noformat}
> Release 0.20.4 - Unreleased
> ...
> Release 0.20.3 - 2011-1-5
> {noformat}
> We should update this to indicate 0.20. is unreleased.



[jira] [Updated] (HADOOP-6671) To use maven for hadoop common builds

2011-06-09 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-6671:
--

Attachment: HADOOP-6671-cross-project-HDFS.patch

I've set up a Jenkins job to build common artifacts using Maven: 
https://builds.apache.org/job/Hadoop-Common-trunk-maven/. 

It's building the same artifacts as 
https://builds.apache.org/job/Hadoop-Common-trunk/, including documentation and 
native libraries, and reports for compiler warnings, tests, FindBugs, and 
Checkstyle. The only missing report is for Clover which needs adding to the 
Maven build.

Currently two tests are failing 
(https://builds.apache.org/job/Hadoop-Common-trunk-maven/8/) - I'm not sure 
why, as they pass for me locally using Maven, and on Hudson using Ant.

I also tried a cross-project build using Maven for common and Ant for HDFS. I 
needed the attached patch to get the HDFS build to work - these are changes 
that are needed anyway, but that we had been getting away without under Ivy. 
MapReduce will need similar changes.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-6671-cross-project-HDFS.patch, 
> HADOOP-6671-e.patch, HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout-e.sh, mvn-layout.sh, mvn-layout.sh, 
> mvn-layout2.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file (POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing hadoop common.



[jira] [Commented] (HADOOP-7362) Create a Lock annotation to document locking (hierarchies) for methods

2011-06-08 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046051#comment-13046051
 ] 

Tom White commented on HADOOP-7362:
---

You're right that GuardedBy is not suitable for documenting locking order. How 
about introducing a LockingOrder annotation that takes an array of values?
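A minimal sketch of what such an annotation could look like. The annotation name follows the suggestion above; the {{BlockOps}} class, the lock names, and the RUNTIME retention are hypothetical choices for illustration, not anything from Hadoop.

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical annotation: lists the locks that must be acquired, in order,
// around the body of the annotated method.
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface LockingOrder {
    String[] value(); // lock names, in required acquisition order
}

public class BlockOps {
    private final Object namesystemLock = new Object();
    private final Object blockLock = new Object();

    // Documents that namesystemLock must be taken before blockLock.
    @LockingOrder({"namesystemLock", "blockLock"})
    void process() {
        synchronized (namesystemLock) {
            synchronized (blockLock) {
                // work under both locks, acquired in the documented order
            }
        }
    }

    public static void main(String[] args) throws Exception {
        LockingOrder order =
            BlockOps.class.getDeclaredMethod("process").getAnnotation(LockingOrder.class);
        System.out.println(String.join(" -> ", order.value()));
    }
}
```

With RUNTIME retention a tool (or a FindBugs-style detector) could read the declared order reflectively; CLASS retention would suffice for purely static analysis.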

> Create a Lock annotation to document locking (hierarchies) for methods
> --
>
> Key: HADOOP-7362
> URL: https://issues.apache.org/jira/browse/HADOOP-7362
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> It will be useful to have better developer docs via a Lock annotation to 
> document locking (hierarchies) for methods.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7323) Add capability to resolve compression codec based on codec name

2011-06-07 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7323:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Alejandro!

> Add capability to resolve compression codec based on codec name
> ---
>
> Key: HADOOP-7323
> URL: https://issues.apache.org/jira/browse/HADOOP-7323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-7323.patch, HADOOP-7323.patch, HADOOP-7323.patch, 
> HADOOP-7323b.patch
>
>
> When setting up a compression codec in an MR job the full class name of the 
> codec must be used.
> To ease usability, compression codecs should be resolved by their codec name 
> (i.e. 'gzip', 'deflate', 'zlib', 'bzip2') instead of their full codec class name.
> Besides ease of use for Hadoop users, who would use the codec alias instead of 
> the full codec class name, it could simplify how HBase resolves and loads the 
> codecs.
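The alias-to-class resolution the issue describes can be sketched as follows. The registry map and the {{resolve}} helper are illustrative only (not Hadoop's actual implementation); the codec class names are the real Hadoop ones, but the alias-to-class pairing is an assumption.

```java
import java.util.Locale;
import java.util.Map;

// Illustrative alias registry; Hadoop's real resolution logic differs.
public class CodecAliases {
    private static final Map<String, String> ALIASES = Map.of(
        "gzip",    "org.apache.hadoop.io.compress.GzipCodec",
        "deflate", "org.apache.hadoop.io.compress.DefaultCodec",
        "zlib",    "org.apache.hadoop.io.compress.DefaultCodec",
        "bzip2",   "org.apache.hadoop.io.compress.BZip2Codec");

    // Resolve an alias; anything unknown is assumed to already be a
    // fully qualified codec class name and is passed through unchanged.
    static String resolve(String nameOrClass) {
        return ALIASES.getOrDefault(nameOrClass.toLowerCase(Locale.ROOT), nameOrClass);
    }

    public static void main(String[] args) {
        System.out.println(resolve("gzip"));
        System.out.println(resolve("com.example.MyCodec")); // passed through
    }
}
```

The pass-through fallback keeps existing configurations that use full class names working unchanged.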



[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-07 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Attachment: HADOOP-7350.patch

I moved it to take advantage of the ServiceLoader's caching of providers. This 
new patch does that but moves the iteration back to the getCodecClasses() 
method. (This is now like the example in the ServiceLoader documentation.)
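The pattern described above can be sketched as follows. This is a hypothetical, self-contained illustration (using `Runnable` as a stand-in for `CompressionCodec` so it compiles without Hadoop on the classpath): the `ServiceLoader` is created once so its internal provider cache is reused, while iteration happens inside each `getCodecClasses()` call, mirroring the example in the `ServiceLoader` javadoc.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class CodecDiscovery {

    // Created once; ServiceLoader caches provider instances as they are loaded.
    private static final ServiceLoader<Runnable> LOADER =
        ServiceLoader.load(Runnable.class); // stand-in for CompressionCodec

    public static List<Class<?>> getCodecClasses() {
        List<Class<?>> result = new ArrayList<>();
        for (Runnable codec : LOADER) {     // iterate on every call
            result.add(codec.getClass());
        }
        return result;
    }

    public static void main(String[] args) {
        // With no providers registered under META-INF/services for Runnable,
        // the list is empty.
        System.out.println(getCodecClasses().size());
    }
}
```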


> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch, 
> HADOOP-7350.patch, HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7362) Create a Lock annotation to document locking (hierarchies) for methods

2011-06-07 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045521#comment-13045521
 ] 

Tom White commented on HADOOP-7362:
---

The annotations from "Java Concurrency in Practice" may be appropriate here: 
http://jcip.net/annotations/doc/net/jcip/annotations/package-summary.html, in 
particular http://jcip.net/annotations/doc/net/jcip/annotations/GuardedBy.html

FindBugs has support for these annotations; I'm not sure about JCarder.
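A minimal sketch of how a {{@GuardedBy}}-style annotation documents which lock protects a field, as discussed above. The annotation is declared locally here so the example is self-contained; the real one lives in net.jcip.annotations, and the {{Counter}} class is purely illustrative.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.CLASS)
@Target({ElementType.FIELD, ElementType.METHOD})
@interface GuardedBy { String value(); }

public class Counter {
    private final Object lock = new Object();

    @GuardedBy("lock")          // documents which monitor protects this field
    private long count;

    public long increment() {
        synchronized (lock) {   // honour the documented lock
            return ++count;
        }
    }

    public static void main(String[] args) {
        System.out.println(new Counter().increment()); // prints 1
    }
}
```

Tools such as FindBugs can then flag accesses to {{count}} made without holding {{lock}}.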

> Create a Lock annotation to document locking (hierarchies) for methods
> --
>
> Key: HADOOP-7362
> URL: https://issues.apache.org/jira/browse/HADOOP-7362
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> It will be useful to have better developer docs via a Lock annotation to 
> document locking (hierarchies) for methods.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-06 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Attachment: HADOOP-7350.patch

Sorry - this should be the right patch now.

> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch, 
> HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7323) Add capability to resolve compression codec based on codec name

2011-06-06 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7323:
--

Fix Version/s: (was: 0.22.0)
 Hadoop Flags: [Reviewed]
   Status: Patch Available  (was: Open)

> Add capability to resolve compression codec based on codec name
> ---
>
> Key: HADOOP-7323
> URL: https://issues.apache.org/jira/browse/HADOOP-7323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-7323.patch, HADOOP-7323.patch, HADOOP-7323.patch, 
> HADOOP-7323b.patch
>
>
> When setting up a compression codec in an MR job, the full class name of the 
> codec must be used.
> To ease usability, compression codecs should be resolved by their codec name 
> (i.e. 'gzip', 'deflate', 'zlib', 'bzip2') instead of their full codec class name.
> Besides ease of use for Hadoop users, who would use the codec alias instead of 
> the full codec class name, it could simplify how HBase resolves and loads the 
> codecs.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Attachment: HADOOP-7350.patch

Slight adjustment to load codec classes only once using a ServiceLoader.

I'll address the HDFS documentation change in another JIRA.

> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-02 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Attachment: HADOOP-7350.patch

{quote}
- We should probably remove the codecs from core-default.xml now that they're 
loaded via ServiceLoader
{quote}

Done - see new patch.

{quote}
- Is there a way to inject a new codec programmatically through the 
ServiceLoader interface? If so, we could entirely deprecate 
io.compression.codecs. If not, maybe we should rename it to something like 
io.compression.extra.codecs and specify that it's only necessary if you have a 
codec that doesn't expose itself through ServiceLoader?
{quote}

I don't think we need to deprecate or rename io.compression.codecs - it's just 
used to specify _additional_ codecs to the ones that are loaded through a 
ServiceLoader. Note that duplicates are ignored, so there's no problem with 
users' older configs having codecs that could be loaded through ServiceLoader.
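The merge-and-deduplicate behaviour described above can be sketched like this. It is a hypothetical illustration, not Hadoop's actual code: classes discovered via ServiceLoader and classes listed in io.compression.codecs go into one set, so a codec named in an older config that is also found on the classpath is simply ignored the second time.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class CodecMerge {

    // Merge discovered and configured codec class names, ignoring duplicates.
    public static Set<String> merge(Iterable<String> fromServiceLoader,
                                    Iterable<String> fromConfig) {
        Set<String> classes = new LinkedHashSet<>();   // keeps first-seen order
        for (String c : fromServiceLoader) classes.add(c);
        for (String c : fromConfig) classes.add(c);    // duplicate adds are no-ops
        return classes;
    }

    public static void main(String[] args) {
        Set<String> merged = merge(
            List.of("GzipCodec", "DefaultCodec"),      // via ServiceLoader
            List.of("GzipCodec", "LzoCodec"));         // via io.compression.codecs
        System.out.println(merged); // [GzipCodec, DefaultCodec, LzoCodec]
    }
}
```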

{quote}
- hdfs-default.xml has an item dfs.image.compression.codec that needs to be 
updated
{quote}

This doesn't need to be updated, although with HADOOP-7323 (and a corresponding 
HDFS change) it could be changed to "default".

> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7350.patch, HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7355) Add audience and stability annotations to HttpServer class

2011-06-02 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042971#comment-13042971
 ] 

Tom White commented on HADOOP-7355:
---

+1

> Add audience and stability annotations to HttpServer class
> --
>
> Key: HADOOP-7355
> URL: https://issues.apache.org/jira/browse/HADOOP-7355
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: stack
>Assignee: stack
> Attachments: 7355-v2.txt, 7355.txt
>
>
> HttpServer has at least one subclasser in HBase.  Flag this class w/ 
> annotations that make this plain so we avoid regressions like HADOOP-7351

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-01 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Attachment: HADOOP-7350.patch

This patch modifies CompressionCodecFactory.getCodecClasses() to use a service 
loader in addition to reading class names from io.compression.codecs. 

> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
> Attachments: HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-01 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7350:
--

Assignee: Tom White
  Status: Patch Available  (was: Open)

> Use ServiceLoader to discover compression codec classes
> ---
>
> Key: HADOOP-7350
> URL: https://issues.apache.org/jira/browse/HADOOP-7350
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf, io
>Reporter: Tom White
>Assignee: Tom White
> Attachments: HADOOP-7350.patch
>
>
> By using a ServiceLoader users wouldn't have to add codec classes to 
> io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
> since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2011-06-01 Thread Tom White (JIRA)
Use ServiceLoader to discover compression codec classes
---

 Key: HADOOP-7350
 URL: https://issues.apache.org/jira/browse/HADOOP-7350
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf, io
Reporter: Tom White


By using a ServiceLoader users wouldn't have to add codec classes to 
io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
since they would be automatically picked up from the classpath.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7323) Add capability to resolve compression codec based on codec name

2011-06-01 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13042454#comment-13042454
 ] 

Tom White commented on HADOOP-7323:
---

Thinking about this more, overloading getCodecByClassName() may be misleading, 
so it might be better to add a new method called getCodecByName() which returns 
codecs based on class name or alias. There are only a couple of callers of 
getCodecByClassName() (in HDFS) so it doesn't make much difference in terms of 
changing code to use the new method.

To take advantage of the new method expressions of the form

{code}
conf.getClassByName(name).asSubclass(CompressionCodec.class)
{code}

should be replaced with

{code}
CompressionCodecFactory.getCodecByName(name)
{code}

This mainly applies in the MapReduce project.

We should also add a getCodecClassByName() method at the same time, since 
sometimes only the class is needed.
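The proposed lookup could behave roughly as follows. This is a hypothetical sketch: the alias table, the deflate mapping, and the {{getCodecClassNameByName}} helper are illustrative, though the gzip codec class name matches Hadoop's {{org.apache.hadoop.io.compress.GzipCodec}}. The method first tries the alias map, then falls back to treating the argument as a full class name.

```java
import java.util.HashMap;
import java.util.Map;

public class CodecResolver {

    // Illustrative alias table; a real implementation would build this
    // from the registered codecs themselves.
    private static final Map<String, String> ALIASES = new HashMap<>();
    static {
        ALIASES.put("gzip", "org.apache.hadoop.io.compress.GzipCodec");
        ALIASES.put("deflate", "org.apache.hadoop.io.compress.DefaultCodec");
    }

    public static String getCodecClassNameByName(String name) {
        String alias = ALIASES.get(name.toLowerCase());
        return alias != null ? alias : name;  // fall back to full class name
    }

    public static void main(String[] args) {
        System.out.println(getCodecClassNameByName("gzip"));
        // prints org.apache.hadoop.io.compress.GzipCodec
    }
}
```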

> Add capability to resolve compression codec based on codec name
> ---
>
> Key: HADOOP-7323
> URL: https://issues.apache.org/jira/browse/HADOOP-7323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.0
>
> Attachments: HADOOP-7323.patch, HADOOP-7323b.patch
>
>
> When setting up a compression codec in an MR job, the full class name of the 
> codec must be used.
> To ease usability, compression codecs should be resolved by their codec name 
> (i.e. 'gzip', 'deflate', 'zlib', 'bzip2') instead of their full codec class name.
> Besides ease of use for Hadoop users, who would use the codec alias instead of 
> the full codec class name, it could simplify how HBase resolves and loads the 
> codecs.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7334) test-patch should check for hard tabs

2011-05-26 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13040050#comment-13040050
 ] 

Tom White commented on HADOOP-7334:
---

> Do you think we have a need to do something like checkstyle in the future?

We have long had a checkstyle target in ant, and plots of checkstyle violations 
over time (see https://builds.apache.org/job/Hadoop-Common-trunk/), but 
unfortunately we haven't enforced the rule that the number may not increase 
(unlike javadoc or findbugs warnings for example).

Another way of implementing this JIRA would be to enable such a rule (perhaps 
with a weaker set of checkstyle rules than the current set, e.g. dropping the 
80-characters-per-line rule).

> test-patch should check for hard tabs
> -
>
> Key: HADOOP-7334
> URL: https://issues.apache.org/jira/browse/HADOOP-7334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-7334.txt, hadoop-7334.txt
>
>
> Our coding guidelines say that hard tabs are disallowed in the Hadoop code, 
> but they sometimes sneak in (there are about 280 in the common codebase at 
> the moment).
> We should run a simple check for this in the test-patch process so it's 
> harder for them to sneak in.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-05-26 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13039986#comment-13039986
 ] 

Tom White commented on HADOOP-6671:
---

The Jenkins scripts in http://svn.apache.org/repos/asf/hadoop/nightly/ will 
need updating too. To test the changes we could have a branch containing the 
Mavenized tree (i.e. with the patch from this issue applied), and a copy of the 
Jenkins nightly build job that uses a Maven version of the nightly script. For 
patch submission we can test the script manually rather than hooking it up to 
Jenkins across the board. We'd only commit this change when we are happy that 
the Jenkins jobs are working properly.

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
> Attachments: HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh, 
> mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7323) Add capability to resolve compression codec based on codec name

2011-05-25 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13039384#comment-13039384
 ] 

Tom White commented on HADOOP-7323:
---

+1

> Add capability to resolve compression codec based on codec name
> ---
>
> Key: HADOOP-7323
> URL: https://issues.apache.org/jira/browse/HADOOP-7323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.0
>
> Attachments: HADOOP-7323.patch, HADOOP-7323b.patch
>
>
> When setting up a compression codec in an MR job, the full class name of the 
> codec must be used.
> To ease usability, compression codecs should be resolved by their codec name 
> (i.e. 'gzip', 'deflate', 'zlib', 'bzip2') instead of their full codec class name.
> Besides ease of use for Hadoop users, who would use the codec alias instead of 
> the full codec class name, it could simplify how HBase resolves and loads the 
> codecs.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6671) To use maven for hadoop common builds

2011-05-24 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038720#comment-13038720
 ] 

Tom White commented on HADOOP-6671:
---

We currently publish jdiff documentation as a part of a release (e.g. 
http://hadoop.apache.org/common/docs/r0.20.2/jdiff/changes.html). There's also 
HADOOP-7035 which refines this to publish changes categorized by API stability 
and compatibility (there is an example at 
http://people.apache.org/~tomwhite/HADOOP-7035/common/).

HADOOP-7035 will include documenting the process for generating jdiff for a 
release, so I don't think that we need to get it integrated in Maven as a part 
of this issue. (If needed at a later point we could hook it into Maven by 
calling out to the script.) Does that sound reasonable?

> To use maven for hadoop common builds
> -
>
> Key: HADOOP-6671
> URL: https://issues.apache.org/jira/browse/HADOOP-6671
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Giridharan Kesavan
> Attachments: HADOOP-6671.patch, HADOOP-6671b.patch, 
> HADOOP-6671c.patch, HADOOP-6671d.patch, build.png, 
> hadoop-commons-maven.patch, mvn-layout.sh, mvn-layout.sh, mvn-layout2.sh
>
>
> We are now able to publish hadoop artifacts to the maven repo successfully [ 
> Hadoop-6382]
> Drawbacks with the current approach:
> * Use ivy for dependency management with ivy.xml
> * Use maven-ant-task for artifact publishing to the maven repository
> * pom files are not generated dynamically 
> To address this I propose we use maven to build hadoop-common, which would 
> help us to manage dependencies, publish artifacts and have one single xml 
> file(POM) for dependency management and artifact publishing.
> I would like to have a branch created to work on mavenizing  hadoop common.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7323) Add capability to resolve compression codec based on codec name

2011-05-23 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13038326#comment-13038326
 ] 

Tom White commented on HADOOP-7323:
---

This looks good. The only nit I noticed is that the javadoc on codecsByName in 
CompressionCodecFactory is incorrect.

> Add capability to resolve compression codec based on codec name
> ---
>
> Key: HADOOP-7323
> URL: https://issues.apache.org/jira/browse/HADOOP-7323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.21.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 0.22.0
>
> Attachments: HADOOP-7323.patch
>
>
> When setting up a compression codec in an MR job, the full class name of the 
> codec must be used.
> To ease usability, compression codecs should be resolved by their codec name 
> (i.e. 'gzip', 'deflate', 'zlib', 'bzip2') instead of their full codec class name.
> Besides ease of use for Hadoop users, who would use the codec alias instead of 
> the full codec class name, it could simplify how HBase resolves and loads the 
> codecs.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-7283) Include 32-bit and 64-bit native libraries in Jenkins tarball builds

2011-05-19 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White resolved HADOOP-7283.
---

Resolution: Fixed
  Assignee: Tom White

The build at 
https://builds.apache.org/hudson/view/G-L/view/Hadoop/job/Hadoop-22-Build/ is 
now producing tarballs with the correct native libraries.

> Include 32-bit and 64-bit native libraries in Jenkins tarball builds
> 
>
> Key: HADOOP-7283
> URL: https://issues.apache.org/jira/browse/HADOOP-7283
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Tom White
>Assignee: Tom White
>Priority: Blocker
> Fix For: 0.22.0
>
>
> The job at 
> https://builds.apache.org/hudson/view/G-L/view/Hadoop/job/Hadoop-22-Build/ is 
> building tarballs, but they do not currently include both 32-bit and 64-bit 
> native libraries. We should update/duplicate 
> hadoop-nighly/hudsonBuildHadoopRelease.sh to support post-split builds.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6846) Scripts for building Hadoop 0.22.0 release

2011-05-18 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-6846:
--

  Resolution: Fixed
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

> Scripts for building Hadoop 0.22.0 release
> --
>
> Key: HADOOP-6846
> URL: https://issues.apache.org/jira/browse/HADOOP-6846
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.22.0
>
> Attachments: HADOOP-6846.patch, release-scripts.tar.gz
>
>


--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7215) RPC clients must connect over a network interface corresponding to the host name in the client's kerberos principal key

2011-05-18 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035557#comment-13035557
 ] 

Tom White commented on HADOOP-7215:
---

Can this be closed? Looks like it's been committed.

> RPC clients must connect over a network interface corresponding to the host 
> name in the client's kerberos principal key
> ---
>
> Key: HADOOP-7215
> URL: https://issues.apache.org/jira/browse/HADOOP-7215
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Suresh Srinivas
>Assignee: Suresh Srinivas
>Priority: Blocker
> Fix For: 0.20.203.0, 0.23.0
>
> Attachments: HADOOP-7215.1.trunk.patch, HADOOP-7215.2.trunk.patch, 
> HADOOP-7215.2xx.patch, HADOOP-7215.3.trunk.patch, HADOOP-7215.debug.patch, 
> HADOOP-7215.debug.patch
>
>
> HDFS-7104 introduced a change where RPC server matches client's hostname with 
> the hostname specified in the client's Kerberos principal name. RPC client 
> binds the socket to a random local address, which might not match the 
> hostname specified in the principal name. This results authorization failure 
> of the client at the server.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7269) S3 Native should allow customizable file meta-data (headers)

2011-05-18 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13035545#comment-13035545
 ] 

Tom White commented on HADOOP-7269:
---

This looks good. Can you run Jets3tNativeS3FileSystemContractTest to check that 
it works against real S3? You can set credentials in src/test/core-site.xml.

> S3 Native should allow customizable file meta-data (headers)
> 
>
> Key: HADOOP-7269
> URL: https://issues.apache.org/jira/browse/HADOOP-7269
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Nicholas Telford
>Assignee: Nicholas Telford
>Priority: Minor
> Attachments: HADOOP-7269-S3-metadata-001.diff, 
> HADOOP-7269-S3-metadata-002.diff, HADOOP-7269-S3-metadata-003.diff
>
>
> The S3 Native FileSystem currently writes all files with a set of default 
> headers:
>  * Content-Type: binary/octet-stream
>  * Content-Length: 
>  * Content-MD5: 
> This is a good start, however many applications would benefit from the 
> ability to customize (for example) the Content-Type and Expires headers for 
> the file. Ideally the implementation should be abstract enough to customize 
> all of the available S3 headers and provide a facility for other FileSystems 
> to specify optional file metadata.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-7296) The FsPermission(FsPermission) constructor does not use the sticky bit

2011-05-17 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7296:
--

  Resolution: Fixed
Assignee: Siddharth Seth
Hadoop Flags: [Reviewed]
  Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Siddharth!

> The FsPermission(FsPermission) constructor does not use the sticky bit
> --
>
> Key: HADOOP-7296
> URL: https://issues.apache.org/jira/browse/HADOOP-7296
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0, 0.22.0, 0.23.0
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
>Priority: Minor
> Fix For: 0.22.0
>
> Attachments: HADOOP7296.patch, HADOOP7296_2.patch
>
>
> The FsPermission(FsPermission) constructor copies u, g, o from the supplied 
> FsPermission object but ignores the sticky bit.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-05-17 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034829#comment-13034829
 ] 

Tom White commented on HADOOP-7206:
---

I actually think that, long-term, Snappy compression belongs in Hadoop, along 
with the other compression codecs. The hadoop-snappy project is a useful 
stopgap until we get regular releases going again, and it allows other projects 
like HBase to use Snappy in the meantime.

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
> Attachments: HADOOP-7206.patch
>
>
> Google release Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-6757) NullPointerException for hadoop clients launched from streaming tasks

2011-05-16 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034547#comment-13034547
 ] 

Tom White commented on HADOOP-6757:
---

I think this may be fixed by MAPREDUCE-2372.

> NullPointerException for hadoop clients launched from streaming tasks
> -
>
> Key: HADOOP-6757
> URL: https://issues.apache.org/jira/browse/HADOOP-6757
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Reporter: Amar Kamat
>Assignee: Amar Kamat
> Attachments: BZ-3620565-v1.0.patch, HADOOP-6757-v1.0.patch
>
>
> TaskRunner sets HADOOP_ROOT_LOGGER to info,TLA while launching the child 
> tasks. TLA implicitly assumes that task-id information will be made 
> available via the 'hadoop.tasklog.taskid' parameter. 'hadoop.tasklog.taskid' 
> is passed to the child task by the TaskRunner via HADOOP_CLIENT_OPTS. When 
> the streaming task launches a hadoop client (say hadoop job -list), the 
> HADOOP_ROOT_LOGGER of the hadoop client is set to 'info,TLA' but 
> hadoop.tasklog.taskid is not set, resulting in an NPE.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-7283) Include 32-bit and 64-bit native libraries in Jenkins tarball builds

2011-05-12 Thread Tom White (JIRA)
Include 32-bit and 64-bit native libraries in Jenkins tarball builds


 Key: HADOOP-7283
 URL: https://issues.apache.org/jira/browse/HADOOP-7283
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Tom White
Priority: Blocker
 Fix For: 0.22.0


The job at 
https://builds.apache.org/hudson/view/G-L/view/Hadoop/job/Hadoop-22-Build/ is 
building tarballs, but they do not currently include both 32-bit and 64-bit 
native libraries. We should update/duplicate 
hadoop-nighly/hudsonBuildHadoopRelease.sh to support post-split builds.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6763) Remove verbose logging from the Groups class

2011-05-11 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-6763:
--

Fix Version/s: 0.22.0

> Remove verbose logging from the Groups class
> 
>
> Key: HADOOP-6763
> URL: https://issues.apache.org/jira/browse/HADOOP-6763
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.21.0
>Reporter: Owen O'Malley
>Assignee: Boris Shkolnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-6598-BP20-Fix.patch, HADOOP-6598-BP20.patch, 
> HADOOP-6598.patch, HADOOP-6763.patch
>
>
> {quote}
> 2010-02-25 08:30:52,269 INFO  security.Groups (Groups.java:(60)) - Group 
> mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30
> ...
> 2010-02-25 08:30:57,872 INFO  security.Groups (Groups.java:getGroups(76)) - 
> Returning cached groups for 'oom'
> {quote}
> should both be demoted to debug level.
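The requested change is just a log-level demotion: emit the same messages at debug rather than info, ideally guarded so the message string is not built when debug is disabled. A minimal self-contained sketch using java.util.logging (standing in for the commons-logging API the Groups class actually uses, where FINE corresponds to debug):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GroupsLoggingDemo {
    private static final Logger LOG = Logger.getLogger("security.Groups");

    // Before: chatty INFO lines on every cache hit.
    // After: the same message at FINE, skipped entirely unless debugging.
    static void logCacheHit(String user) {
        if (LOG.isLoggable(Level.FINE)) { // avoid string building when disabled
            LOG.fine("Returning cached groups for '" + user + "'");
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);  // default: message suppressed
        logCacheHit("oom");
        LOG.setLevel(Level.FINE);  // only emitted when debug is on
        logCacheHit("oom");
    }
}
```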



[jira] [Commented] (HADOOP-6763) Remove verbose logging from the Groups class

2011-05-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032193#comment-13032193
 ] 

Tom White commented on HADOOP-6763:
---

Yes, this should have fix version 0.22.

> Remove verbose logging from the Groups class
> 
>
> Key: HADOOP-6763
> URL: https://issues.apache.org/jira/browse/HADOOP-6763
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.21.0
>Reporter: Owen O'Malley
>Assignee: Boris Shkolnik
> Attachments: HADOOP-6598-BP20-Fix.patch, HADOOP-6598-BP20.patch, 
> HADOOP-6598.patch, HADOOP-6763.patch
>
>
> {quote}
> 2010-02-25 08:30:52,269 INFO  security.Groups (Groups.java:(60)) - Group 
> mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; 
> cacheTimeout=30
> ...
> 2010-02-25 08:30:57,872 INFO  security.Groups (Groups.java:getGroups(76)) - 
> Returning cached groups for 'oom'
> {quote}
> should both be demoted to debug level.



[jira] [Commented] (HADOOP-6846) Scripts for building Hadoop 0.22.0 release

2011-05-11 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032117#comment-13032117
 ] 

Tom White commented on HADOOP-6846:
---

I grant the patch for inclusion.

> Scripts for building Hadoop 0.22.0 release
> --
>
> Key: HADOOP-6846
> URL: https://issues.apache.org/jira/browse/HADOOP-6846
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.22.0
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.22.0
>
> Attachments: HADOOP-6846.patch, release-scripts.tar.gz
>
>




[jira] [Updated] (HADOOP-7068) Ivy resolve force mode should be turned off by default

2011-05-10 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7068:
--

   Resolution: Fixed
Fix Version/s: 0.22.0
 Hadoop Flags: [Reviewed]
   Status: Resolved  (was: Patch Available)

I've just committed this. Thanks, Luke!

> Ivy resolve force mode should be turned off by default
> --
>
> Key: HADOOP-7068
> URL: https://issues.apache.org/jira/browse/HADOOP-7068
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Luke Lu
>Assignee: Luke Lu
> Fix For: 0.22.0
>
> Attachments: hadoop-7068-trunk-v1.patch, hadoop-7068-trunk-v2.patch
>
>
> The problem was introduced by HADOOP-6486, which has caused a lot of 
> mysterious artifact issues (being unable to downgrade or do parallel dev 
> without wiping out both the m2 and ivy caches, etc.), wasting countless 
> hours of many developers' time tracking down the issue.



[jira] [Commented] (HADOOP-7068) Ivy resolve force mode should be turned off by default

2011-05-10 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13031347#comment-13031347
 ] 

Tom White commented on HADOOP-7068:
---

+1. I agree that this is very confusing and makes it almost impossible to do 
parallel builds on Jenkins boxes. I'm going to go ahead and commit this, 
HDFS-1544, and MAPREDUCE-, unless there are objections.

> Ivy resolve force mode should be turned off by default
> --
>
> Key: HADOOP-7068
> URL: https://issues.apache.org/jira/browse/HADOOP-7068
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Luke Lu
>Assignee: Luke Lu
> Attachments: hadoop-7068-trunk-v1.patch, hadoop-7068-trunk-v2.patch
>
>
> The problem was introduced by HADOOP-6486, which has caused a lot of 
> mysterious artifact issues (being unable to downgrade or do parallel dev 
> without wiping out both the m2 and ivy caches, etc.), wasting countless 
> hours of many developers' time tracking down the issue.



[jira] [Commented] (HADOOP-7184) Remove deprecated local.cache.size from core-default.xml

2011-04-27 Thread Tom White (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13026100#comment-13026100
 ] 

Tom White commented on HADOOP-7184:
---

+1

> Remove deprecated local.cache.size from core-default.xml
> 
>
> Key: HADOOP-7184
> URL: https://issues.apache.org/jira/browse/HADOOP-7184
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, filecache
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 0.22.0
>
> Attachments: hadoop-7184.txt
>
>
> MAPREDUCE-2379 documents the new name of this parameter 
> (mapreduce.tasktracker.cache.local.size) in mapred-default.xml where it 
> belongs.



[jira] [Updated] (HADOOP-7245) FsConfig should use constants in CommonConfigurationKeys

2011-04-27 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7245:
--

Status: Patch Available  (was: Open)

> FsConfig should use constants in CommonConfigurationKeys
> 
>
> Key: HADOOP-7245
> URL: https://issues.apache.org/jira/browse/HADOOP-7245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.22.0
>
> Attachments: HADOOP-7245.patch
>
>
> In particular, FsConfig should use fs.defaultFS instead of the deprecated 
> fs.default.name.
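The pattern behind this issue is keeping the deprecated key working while steering code to the new constant. This standalone sketch mimics the behavior with a plain map; in Hadoop itself the Configuration class maintains the deprecation table and CommonConfigurationKeysPublic defines the constant (only the key strings "fs.defaultFS" and "fs.default.name" below are taken from the issue):

```java
import java.util.HashMap;
import java.util.Map;

public class FsConfigKeyDemo {
    // New-style constant; the deprecated spelling maps onto it.
    static final String FS_DEFAULT_NAME_KEY = "fs.defaultFS";

    // Deprecated key -> canonical key, mimicking Configuration's table.
    static final Map<String, String> DEPRECATED =
            Map.of("fs.default.name", FS_DEFAULT_NAME_KEY);

    final Map<String, String> props = new HashMap<>();

    void set(String key, String value) {
        // Writes through a deprecated key land on the canonical one.
        props.put(DEPRECATED.getOrDefault(key, key), value);
    }

    String get(String key) {
        return props.get(DEPRECATED.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        FsConfigKeyDemo conf = new FsConfigKeyDemo();
        conf.set("fs.default.name", "hdfs://namenode:8020"); // old key still works
        System.out.println(conf.get(FS_DEFAULT_NAME_KEY));
    }
}
```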



[jira] [Updated] (HADOOP-7245) FsConfig should use constants in CommonConfigurationKeys

2011-04-27 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7245:
--

Attachment: HADOOP-7245.patch

This patch changes FsConfig to use the constants defined in 
CommonConfigurationKeys and CommonConfigurationKeysPublic.

> FsConfig should use constants in CommonConfigurationKeys
> 
>
> Key: HADOOP-7245
> URL: https://issues.apache.org/jira/browse/HADOOP-7245
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tom White
>Assignee: Tom White
> Fix For: 0.22.0
>
> Attachments: HADOOP-7245.patch
>
>
> In particular, FsConfig should use fs.defaultFS instead of the deprecated 
> fs.default.name.



[jira] [Created] (HADOOP-7245) FsConfig should use constants in CommonConfigurationKeys

2011-04-27 Thread Tom White (JIRA)
FsConfig should use constants in CommonConfigurationKeys


 Key: HADOOP-7245
 URL: https://issues.apache.org/jira/browse/HADOOP-7245
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Tom White
 Fix For: 0.22.0


In particular, FsConfig should use fs.defaultFS instead of the deprecated 
fs.default.name.



[jira] [Updated] (HADOOP-7244) Documentation change for updated configuration keys

2011-04-26 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7244:
--

Status: Patch Available  (was: Open)

> Documentation change for updated configuration keys
> ---
>
> Key: HADOOP-7244
> URL: https://issues.apache.org/jira/browse/HADOOP-7244
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom White
>Assignee: Tom White
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: HADOOP-7244.patch
>
>
> Common counterpart of HDFS-671.



[jira] [Updated] (HADOOP-7244) Documentation change for updated configuration keys

2011-04-26 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7244:
--

Attachment: HADOOP-7244.patch

Here's a patch which fixes the deprecated configuration keys in the forrest 
docs.

> Documentation change for updated configuration keys
> ---
>
> Key: HADOOP-7244
> URL: https://issues.apache.org/jira/browse/HADOOP-7244
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Tom White
>Assignee: Tom White
>Priority: Blocker
> Fix For: 0.22.0
>
> Attachments: HADOOP-7244.patch
>
>
> Common counterpart of HDFS-671.



[jira] [Created] (HADOOP-7244) Documentation change for updated configuration keys

2011-04-26 Thread Tom White (JIRA)
Documentation change for updated configuration keys
---

 Key: HADOOP-7244
 URL: https://issues.apache.org/jira/browse/HADOOP-7244
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Tom White
Assignee: Tom White
Priority: Blocker
 Fix For: 0.22.0


Common counterpart of HDFS-671.



[jira] [Updated] (HADOOP-7183) WritableComparator.get should not cache comparator objects

2011-04-25 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7183:
--

Attachment: HADOOP-7183.patch

Removed test.

> WritableComparator.get should not cache comparator objects
> --
>
> Key: HADOOP-7183
> URL: https://issues.apache.org/jira/browse/HADOOP-7183
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.22.0
>Reporter: Todd Lipcon
>Assignee: Tom White
>Priority: Blocker
> Fix For: 0.20.3, 0.21.1, 0.22.0
>
> Attachments: HADOOP-7183.patch, HADOOP-7183.patch
>
>
> HADOOP-6881 modified WritableComparator.get such that the constructed 
> WritableComparator gets saved back into the static map. This is fine for 
> stateless comparators, but some comparators have per-instance state, and thus 
> this becomes thread-unsafe and causes errors in the shuffle where multiple 
> threads are doing comparisons. An example of a Comparator with per-instance 
> state is WritableComparator itself.
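The hazard described above is generic: caching a single instance of a comparator that carries per-call scratch state shares that state across shuffle threads. A self-contained sketch of the fix direction (hand out a fresh instance per call instead of caching); the class names are illustrative, not the actual WritableComparator code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class ComparatorRegistryDemo {
    // A comparator with per-instance scratch state, analogous to
    // WritableComparator's reusable key buffers.
    static class StatefulComparator {
        int scratch; // mutated on every compare; unsafe to share across threads
        int compare(int a, int b) {
            scratch = a - b;
            return Integer.compare(a, b);
        }
    }

    // Unsafe pattern (caching the constructed instance): every caller
    // receives the same object, so scratch state is shared.
    static final Map<String, StatefulComparator> CACHE = new ConcurrentHashMap<>();

    // Safe pattern: register factories and construct per call.
    static final Map<String, Supplier<StatefulComparator>> FACTORIES =
            Map.of("demo", StatefulComparator::new);

    static StatefulComparator get(String key) {
        return FACTORIES.get(key).get(); // new object per call, no shared state
    }

    public static void main(String[] args) {
        CACHE.computeIfAbsent("demo", k -> new StatefulComparator());
        System.out.println(CACHE.get("demo") == CACHE.get("demo")); // shared
        System.out.println(get("demo") == get("demo"));             // isolated
    }
}
```

Stateless comparators are safe to cache; the problem is that the cache cannot tell the two kinds apart.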



[jira] [Resolved] (HADOOP-6953) start-{dfs,mapred}.sh scripts fail if HADOOP_HOME is not set

2011-04-25 Thread Tom White (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White resolved HADOOP-6953.
---

   Resolution: Duplicate
Fix Version/s: (was: 0.21.1)

> start-{dfs,mapred}.sh scripts fail if HADOOP_HOME is not set
> 
>
> Key: HADOOP-6953
> URL: https://issues.apache.org/jira/browse/HADOOP-6953
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 0.21.0
>Reporter: Tom White
>Assignee: Tom White
>Priority: Blocker
> Fix For: 0.22.0
>
>
> If the HADOOP_HOME environment variable is not set then the start and stop 
> scripts for HDFS and MapReduce fail with "Hadoop common not found.". The 
> start-all.sh and stop-all.sh scripts are not affected.



<    1   2   3   4   5   6   7   8   9   10   >