Build failed in Jenkins: Hadoop-Common-trunk #542

2012-09-24 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/542/changes

Changes:

[omalley] Fix the length of the secret key.

[vinodkv] YARN-53. Added the missing getGroups API to ResourceManager. 
Contributed by Bo Wang.

[harsh] HADOOP-8158. Interrupting hadoop fs -put from the command line causes a 
LeaseExpiredException. Contributed by Daryn Sharp. (harsh)

[harsh] HADOOP-8151. Error handling in snappy decompressor throws invalid 
exceptions. Contributed by Matt Foley. (harsh)

[harsh] HADOOP-8588. SerializationFactory shouldn't throw a 
NullPointerException if the serializations list is empty. Contributed by Sho 
Shimauchi. (harsh)

--
[...truncated 26982 lines...]
[DEBUG]   (s) debug = false
[DEBUG]   (s) effort = Default
[DEBUG]   (s) failOnError = true
[DEBUG]   (s) findbugsXmlOutput = false
[DEBUG]   (s) findbugsXmlOutputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
[DEBUG]   (s) fork = true
[DEBUG]   (s) includeTests = false
[DEBUG]   (s) localRepository =id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (s) maxHeap = 512
[DEBUG]   (s) nested = false
[DEBUG]   (s) outputDirectory = 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/site
[DEBUG]   (s) outputEncoding = UTF-8
[DEBUG]   (s) pluginArtifacts = 
[org.codehaus.mojo:findbugs-maven-plugin:maven-plugin:2.3.2:, 
com.google.code.findbugs:bcel:jar:1.3.9:compile, 
org.codehaus.gmaven:gmaven-mojo:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-api:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-api:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-1.5:jar:1.3:compile, 
org.codehaus.gmaven.feature:gmaven-feature-support:jar:1.3:compile, 
org.codehaus.groovy:groovy-all-minimal:jar:1.5.8:compile, 
org.apache.ant:ant:jar:1.7.1:compile, 
org.apache.ant:ant-launcher:jar:1.7.1:compile, jline:jline:jar:0.9.94:compile, 
org.codehaus.plexus:plexus-interpolation:jar:1.1:compile, 
org.codehaus.gmaven:gmaven-plugin:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-loader:jar:1.3:compile, 
org.codehaus.gmaven.runtime:gmaven-runtime-support:jar:1.3:compile, 
org.sonatype.gshell:gshell-io:jar:2.0:compile, 
com.thoughtworks.qdox:qdox:jar:1.10:compile, 
org.apache.maven.shared:file-management:jar:1.2.1:compile, 
org.apache.maven.shared:maven-shared-io:jar:1.1:compile, 
commons-lang:commons-lang:jar:2.4:compile, 
org.slf4j:slf4j-api:jar:1.5.10:compile, 
org.sonatype.gossip:gossip:jar:1.2:compile, 
org.apache.maven.reporting:maven-reporting-impl:jar:2.1:compile, 
commons-validator:commons-validator:jar:1.2.0:compile, 
commons-beanutils:commons-beanutils:jar:1.7.0:compile, 
commons-digester:commons-digester:jar:1.6:compile, 
commons-logging:commons-logging:jar:1.0.4:compile, oro:oro:jar:2.0.8:compile, 
xml-apis:xml-apis:jar:1.0.b2:compile, 
org.codehaus.groovy:groovy-all:jar:1.7.4:compile, 
org.apache.maven.reporting:maven-reporting-api:jar:3.0:compile, 
org.apache.maven.doxia:doxia-core:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-logging-api:jar:1.1.3:compile, 
xerces:xercesImpl:jar:2.9.1:compile, 
commons-httpclient:commons-httpclient:jar:3.1:compile, 
commons-codec:commons-codec:jar:1.2:compile, 
org.apache.maven.doxia:doxia-sink-api:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-decoration-model:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-site-renderer:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-xhtml:jar:1.1.3:compile, 
org.apache.maven.doxia:doxia-module-fml:jar:1.1.3:compile, 
org.codehaus.plexus:plexus-i18n:jar:1.0-beta-7:compile, 
org.codehaus.plexus:plexus-velocity:jar:1.1.7:compile, 
org.apache.velocity:velocity:jar:1.5:compile, 
commons-collections:commons-collections:jar:3.2:compile, 
org.apache.maven.shared:maven-doxia-tools:jar:1.2.1:compile, 
commons-io:commons-io:jar:1.4:compile, 
com.google.code.findbugs:findbugs-ant:jar:1.3.9:compile, 
com.google.code.findbugs:findbugs:jar:1.3.9:compile, 
com.google.code.findbugs:jsr305:jar:1.3.9:compile, 
com.google.code.findbugs:jFormatString:jar:1.3.9:compile, 
com.google.code.findbugs:annotations:jar:1.3.9:compile, 
dom4j:dom4j:jar:1.6.1:compile, jaxen:jaxen:jar:1.1.1:compile, 
jdom:jdom:jar:1.0:compile, xom:xom:jar:1.0:compile, 
xerces:xmlParserAPIs:jar:2.6.2:compile, xalan:xalan:jar:2.6.0:compile, 
com.ibm.icu:icu4j:jar:2.6.1:compile, asm:asm:jar:3.1:compile, 
asm:asm-analysis:jar:3.1:compile, asm:asm-commons:jar:3.1:compile, 
asm:asm-util:jar:3.1:compile, asm:asm-tree:jar:3.1:compile, 
asm:asm-xml:jar:3.1:compile, jgoodies:plastic:jar:1.2.0:compile, 
org.codehaus.plexus:plexus-resources:jar:1.0-alpha-4:compile, 
org.codehaus.plexus:plexus-utils:jar:1.5.1:compile]
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG]   (s) relaxed 

Should use FileSystem.setPermission() rather than File.setWritable() to change file access permissions.

2012-09-24 Thread Yanbo Liang
Hi all,

In the current Hadoop test cases, when we want to corrupt or disable a
directory or file, we use File.setWritable(false), as in
TestStorageRestore.java and TestCheckpoint.java.

But as we all know, the implementation of setWritable() in the Java API is
system-dependent, and there are known JDK bugs in it.

In https://issues.apache.org/jira/browse/HADOOP-4824, the patch removed the
use of setWritable() in 0.18.

We could instead use FileSystem.setPermission() to change a file's access
permissions; it is backed by native methods in the Hadoop source code that
invoke the operating system API.
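
As an illustration, here is a minimal sketch (under assumed paths and
permission bits, not taken from TestStorageRestore or TestCheckpoint) of how
a test could disable and then restore write access on a directory via
FileSystem.setPermission():

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.permission.FsAction;
    import org.apache.hadoop.fs.permission.FsPermission;

    public class DisableDirExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        Path dir = new Path("/tmp/name-dir");  // hypothetical test directory

        // Instead of new File("/tmp/name-dir").setWritable(false):
        // drop write access to simulate a failed storage directory.
        fs.setPermission(dir, new FsPermission(FsAction.READ_EXECUTE,
            FsAction.READ_EXECUTE, FsAction.READ_EXECUTE));

        // Restore full access afterwards so later tests are not affected.
        fs.setPermission(dir, new FsPermission(FsAction.ALL,
            FsAction.ALL, FsAction.ALL));
      }
    }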

So should we change setWritable() to setPermission()?

Thanks,
Yanbo


[jira] [Created] (HADOOP-8836) UGI should throw exception in case winutils.exe cannot be loaded

2012-09-24 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8836:
--

 Summary: UGI should throw exception in case winutils.exe cannot be 
loaded
 Key: HADOOP-8836
 URL: https://issues.apache.org/jira/browse/HADOOP-8836
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bikas Saha
Assignee: Bikas Saha


In upstream projects like Hive, it's hard to see why getting user group
information failed, because the API swallows the exception. One such case is
when winutils.exe is not present where Hadoop expects it to be.
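
To illustrate the difference, here is a generic sketch; it is not the actual
UserGroupInformation code, and the class and method names are hypothetical:

    import java.io.IOException;
    import java.util.Collections;
    import java.util.List;

    public class GroupLookupSketch {
      // Current-style behavior: the root cause is swallowed, so callers such
      // as Hive only see an empty group list with no hint that winutils.exe
      // was missing.
      static List<String> groupsSwallowing(String user) {
        try {
          return nativeLookup(user);
        } catch (IOException e) {
          return Collections.emptyList();
        }
      }

      // Proposed behavior: rethrow with context so the failure is visible
      // to the caller.
      static List<String> groupsThrowing(String user) throws IOException {
        try {
          return nativeLookup(user);
        } catch (IOException e) {
          throw new IOException("Failed to get groups for user " + user, e);
        }
      }

      // Hypothetical stand-in for the platform-specific lookup (e.g. via
      // winutils.exe on Windows).
      static List<String> nativeLookup(String user) throws IOException {
        throw new IOException("winutils.exe not found");
      }
    }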

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-8187) Improve the discovery of the jvm library during the build process

2012-09-24 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-8187.
--

Resolution: Duplicate

 Improve the discovery of the jvm library during the build process
 -

 Key: HADOOP-8187
 URL: https://issues.apache.org/jira/browse/HADOOP-8187
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Devaraj Das

 Improve the discovery of the jvm library during the build of native 
 libraries/libhdfs/fuse-dfs, etc. A couple of different ways are currently 
 used (discussed in HADOOP-6924). We should clean this part up and also 
 consider builds of native stuff on OSX.



[jira] [Reopened] (HADOOP-8187) Improve the discovery of the jvm library during the build process

2012-09-24 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reopened HADOOP-8187:
--


A description of how to improve this is in the linked JIRAs.

 Improve the discovery of the jvm library during the build process
 -

 Key: HADOOP-8187
 URL: https://issues.apache.org/jira/browse/HADOOP-8187
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Devaraj Das

 Improve the discovery of the jvm library during the build of native 
 libraries/libhdfs/fuse-dfs, etc. A couple of different ways are currently 
 used (discussed in HADOOP-6924). We should clean this part up and also 
 consider builds of native stuff on OSX.



[jira] [Created] (HADOOP-8837) add component Yarn to @InterfaceAudience.LimitedPrivate classes

2012-09-24 Thread Brandon Li (JIRA)
Brandon Li created HADOOP-8837:
--

 Summary: add component Yarn to @InterfaceAudience.LimitedPrivate 
classes
 Key: HADOOP-8837
 URL: https://issues.apache.org/jira/browse/HADOOP-8837
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Brandon Li
Priority: Trivial


With YARN becoming a new Hadoop subcomponent, some classes (e.g., HttpServer)
annotated with @InterfaceAudience.LimitedPrivate need to include YARN as one
of the audiences.
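
As a sketch, the change amounts to adding "YARN" to the annotation's audience
list. The existing audience values and the stability annotation shown below
are assumptions, not the actual annotations on HttpServer:

    import org.apache.hadoop.classification.InterfaceAudience;
    import org.apache.hadoop.classification.InterfaceStability;

    // Before (assumed): @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce"})
    // After: YARN is added as an allowed audience.
    @InterfaceAudience.LimitedPrivate({"HDFS", "MapReduce", "YARN"})
    @InterfaceStability.Evolving
    public class HttpServer {
      // ... class body unchanged ...
    }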



[jira] [Resolved] (HADOOP-2318) All C++ builds should use the autoconf tools

2012-09-24 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-2318.
--

Resolution: Fixed

As Brian Bockelman pointed out more than a year ago, this is way out of date.  
We now use Maven and CMake in trunk.

Even in branch-1, the complaints listed here are way out of date.  libhdfs 
compiles with autotools in branch-1, and the 64-bit compile does work for 
libhdfs in branch-1.

Please reopen this with a different description (and possibly target-version) 
if there's still something to address.

 All C++ builds should use the autoconf tools
 

 Key: HADOOP-2318
 URL: https://issues.apache.org/jira/browse/HADOOP-2318
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Nigel Daley
Assignee: Giridharan Kesavan
Priority: Minor
 Attachments: hadoop-2318.patch


 Currently we have -Dcompile.native and -Dcompile.c++ build flags.  In 
 addition, builds for pipes and libhadoop use autoconf tools, but libhdfs does 
 not, nor does the 64-bit libhdfs compile work.
 All these builds should use autoconf tools, support 64bit compilation, and 
 should occur when a single flag is present (-Dcompile.c++ seems like the 
 better choice).



Re: [REVIEW] Hadoop-1.1.0-RC4

2012-09-24 Thread Matt Foley
Hi all,
Since the Review RC went out, I've been asked to include the following
fixes:

Three significant bugs:
HDFS-3461 hftp uses http to the Namenode's https port, which doesn't work
HDFS-3846 NN deadlock
HDFS-3596 decrease the incidence of corrupted logs after disk full
conditions
and a related enhancement:
HDFS-3521 deal with edit log corruption if it occurs

Four unit test fixes (obviously not critical, but good to have):
HDFS-3698
HDFS-3966
MAPREDUCE-4673
MAPREDUCE-4675

and four performance enhancements:
HADOOP-8617, HDFS-496, MAPREDUCE-782 performance of CRC calculations
HDFS-2751 Datanode drops OS cache behind reads even for short reads
MAPREDUCE-1906 default minimum heartbeat
MAPREDUCE-3289 use fadvise

I will spin a new RC with these fixes, and hope to offer it for vote by
Wednesday.
Thanks,
--Matt


On Wed, Sep 19, 2012 at 1:09 PM, Matt Foley ma...@apache.org wrote:

 The RC has been uploaded to Nexus / maven.
 --Matt


 On Mon, Sep 17, 2012 at 6:46 PM, Matt Foley ma...@apache.org wrote:

 Hi,
 Please review this release candidate for Hadoop-1.1.0.  As suggested
 before, I am posting it for review, and will start a vote on it if it
 passes the next week with no serious issues being found.

 Tarballs, rpms, and debs are available at
 http://people.apache.org/~mattf/hadoop-1.1.0-rc4/
 The release notes are also at the top level of that directory.

 Nexus seems a bit wedged currently.  I will try again to push to Nexus
 tomorrow morning.

 Thanks,
 --Matt





Re: [REVIEW] Hadoop-1.1.0-RC4

2012-09-24 Thread Uma Maheswara Rao G
Hi Matt,

It would be good to include HDFS-3701 also. I have set its fix version to 1.1.0 as well.

Regards,
Uma

On Tue, Sep 25, 2012 at 7:29 AM, Matt Foley ma...@apache.org wrote:
 Hi all,
 Since the Review RC went out, I've been asked to include the following
 fixes:

 Three significant bugs:
 HDFS-3461 hftp uses http to the Namenode's https port, which doesn't work
 HDFS-3846 NN deadlock
 HDFS-3596 decrease the incidence of corrupted logs after disk full
 conditions
 and a related enhancement:
 HDFS-3521 deal with edit log corruption if it occurs

 Four unit test fixes (obviously not critical, but good to have):
 HDFS-3698
 HDFS-3966
 MAPREDUCE-4673
 MAPREDUCE-4675

 and four performance enhancements:
 HADOOP-8617, HDFS-496, MAPREDUCE-782 performance of CRC calculations
 HDFS-2751 Datanode drops OS cache behind reads even for short reads
 MAPREDUCE-1906 default minimum heartbeat
 MAPREDUCE-3289 use fadvise

 I will spin a new RC with these fixes, and hope to offer it for vote by
 Wednesday.
 Thanks,
 --Matt


 On Wed, Sep 19, 2012 at 1:09 PM, Matt Foley ma...@apache.org wrote:

 The RC has been uploaded to Nexus / maven.
 --Matt


 On Mon, Sep 17, 2012 at 6:46 PM, Matt Foley ma...@apache.org wrote:

 Hi,
 Please review this release candidate for Hadoop-1.1.0.  As suggested
 before, I am posting it for review, and will start a vote on it if it
 passes the next week with no serious issues being found.

 Tarballs, rpms, and debs are available at
 http://people.apache.org/~mattf/hadoop-1.1.0-rc4/
 The release notes are also at the top level of that directory.

 Nexus seems a bit wedged currently.  I will try again to push to Nexus
 tomorrow morning.

 Thanks,
 --Matt





Re: [REVIEW] Hadoop-1.1.0-RC4

2012-09-24 Thread Matt Foley
Okay.

On Mon, Sep 24, 2012 at 7:05 PM, Uma Maheswara Rao G
hadoop@gmail.com wrote:

 Hi Matt,

 It would be good to include HDFS-3701 also. I have set its fix version to 1.1.0 as well.

 Regards,
 Uma

 On Tue, Sep 25, 2012 at 7:29 AM, Matt Foley ma...@apache.org wrote:
  Hi all,
  Since the Review RC went out, I've been asked to include the following
  fixes:
 
  Three significant bugs:
  HDFS-3461 hftp uses http to the Namenode's https port, which doesn't work
  HDFS-3846 NN deadlock
  HDFS-3596 decrease the incidence of corrupted logs after disk full
  conditions
  and a related enhancement:
  HDFS-3521 deal with edit log corruption if it occurs
 
  Four unit test fixes (obviously not critical, but good to have):
  HDFS-3698
  HDFS-3966
  MAPREDUCE-4673
  MAPREDUCE-4675
 
  and four performance enhancements:
  HADOOP-8617, HDFS-496, MAPREDUCE-782 performance of CRC calculations
  HDFS-2751 Datanode drops OS cache behind reads even for short reads
  MAPREDUCE-1906 default minimum heartbeat
  MAPREDUCE-3289 use fadvise
 
  I will spin a new RC with these fixes, and hope to offer it for vote by
  Wednesday.
  Thanks,
  --Matt
 
 
  On Wed, Sep 19, 2012 at 1:09 PM, Matt Foley ma...@apache.org wrote:
 
  The RC has been uploaded to Nexus / maven.
  --Matt
 
 
  On Mon, Sep 17, 2012 at 6:46 PM, Matt Foley ma...@apache.org wrote:
 
  Hi,
  Please review this release candidate for Hadoop-1.1.0.  As suggested
  before, I am posting it for review, and will start a vote on it if it
  passes the next week with no serious issues being found.
 
  Tarballs, rpms, and debs are available at
  http://people.apache.org/~mattf/hadoop-1.1.0-rc4/
  The release notes are also at the top level of that directory.
 
  Nexus seems a bit wedged currently.  I will try again to push to Nexus
  tomorrow morning.
 
  Thanks,
  --Matt