[ https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289756#comment-13289756 ]

Tsz Wo (Nicholas), SZE commented on HADOOP-8368:
------------------------------------------------

The builds failed and the tests could not be executed; see [build #2593|https://builds.apache.org/job/PreCommit-HDFS-Build/2593/console] for an example.
After the patch was reverted, the tests could be run again; see [build #2594|https://builds.apache.org/job/PreCommit-HDFS-Build/2594/console].

This patch broke the build but still received a false positive from test-patch.sh because of a bug in that script (HADOOP-8483), as mentioned previously.
The following is from the console output of [build #1062|https://builds.apache.org/job/PreCommit-HADOOP-Build/1062/console].
{noformat}
main:
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 14.494s
[INFO] Finished at: Wed May 30 23:51:25 UTC 2012
[INFO] Final Memory: 21M/259M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (make) on project hadoop-common: An Ant BuildException has occured: Execute failed: java.io.IOException: Cannot run program "cmake" (in directory "/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/trunk/hadoop-common-project/hadoop-common/target/native"): java.io.IOException: error=2, No such file or directory -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[INFO] Build failures were ignored.
{noformat}

                
> Use CMake rather than autotools to build native code
> ----------------------------------------------------
>
>                 Key: HADOOP-8368
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8368
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>             Fix For: 2.0.1-alpha
>
>         Attachments: HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
> HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, 
> HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368.001.patch, 
> HADOOP-8368.005.patch, HADOOP-8368.006.patch, HADOOP-8368.007.patch, 
> HADOOP-8368.008.patch, HADOOP-8368.009.patch, HADOOP-8368.010.patch, 
> HADOOP-8368.012.half.patch, HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
> HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
> HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
> HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
> HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
> HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
> HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch
>
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing cygwin.  Even on Linux 
> variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
> the Dash shell, rather than the Bash shell as it is in other Linux versions.  
> It is currently impossible to build the native code under Ubuntu 12.04 
> because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple (see the sketch after this description).
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using openSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum version of CMake that a particular 
> CMakeLists.txt will accept (via cmake_minimum_required).  In addition, CMake 
> maintains strict 
> backwards compatibility between different versions.  This prevents build bugs 
> due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.
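
For reference, the kind of CMakeLists.txt file the description refers to can be quite small.  The following is only an illustrative sketch, not the actual hadoop-common native build file: the project, target, source, and library names are placeholders, and the version number is arbitrary.  It shows a cmake_minimum_required declaration and a required-library check, the two mechanisms mentioned in points 2 and 3 above.
{noformat}
# Hypothetical, minimal CMakeLists.txt sketch; all names are placeholders,
# not the real hadoop-common native build definitions.
cmake_minimum_required(VERSION 2.6)       # refuse to configure with an older CMake
project(native-example C)

# Fail at configure time if a required library or header is missing,
# instead of failing later with an obscure compile or link error.
find_library(ZLIB_LIBRARY NAMES z)
find_path(ZLIB_INCLUDE_DIR NAMES zlib.h)
if(NOT ZLIB_LIBRARY OR NOT ZLIB_INCLUDE_DIR)
  message(FATAL_ERROR "zlib is required but was not found")
endif()

include_directories(${ZLIB_INCLUDE_DIR})
add_library(example SHARED example.c)     # placeholder source file
target_link_libraries(example ${ZLIB_LIBRARY})
{noformat}
Building is then just "cmake <source-dir>" followed by "make".  The maven-antrun-plugin "make" execution shown in the console output above is the step that invokes cmake, which is why a missing cmake binary on the Jenkins slave fails the build.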
