-1. Based on Bigtop testing against Hadoop 2.0.2 and Hadoop 1.1.0, Sqoop and Hive workflows fail to execute.

Here's how to reproduce:

1. On your Linux box, hook up the two Bigtop repos to your system:
    http://bigtop01.cloudera.org:8080/view/Bigtop-trunk/job/Bigtop-trunk-Repository/
    http://bigtop01.cloudera.org:8080/job/Bigtop-git/
E.g., on Ubuntu Lucid you'd run:
    # curl http://bigtop01.cloudera.org:8080/job/Bigtop-git/label=lucid/lastSuccessfulBuild/artifact/output/bigtop.list > /etc/apt/sources.list.d/bigtop1.list
    # curl http://bigtop01.cloudera.org:8080/view/Bigtop-trunk/job/Bigtop-trunk-Repository/label=lucid/lastSuccessfulBuild/artifact/repo/bigtop.list > /etc/apt/sources.list.d/bigtop2.list
    # apt-get update
On CentOS 5 you'd run:
    # curl http://bigtop01.cloudera.org:8080/job/Bigtop-git/label=centos5/lastSuccessfulBuild/artifact/output/bigtop.repo > /etc/yum.repos.d/bigtop1.repo
    # curl http://bigtop01.cloudera.org:8080/view/Bigtop-trunk/job/Bigtop-trunk-Repository/label=centos5/lastSuccessfulBuild/artifact/repo/bigtop.repo > /etc/yum.repos.d/bigtop2.repo
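(On the CentOS side you'd also refresh the yum metadata after adding the repo files, the analogue of the apt-get update above; the exact commands here are my assumption rather than part of the original recipe.)
    # yum clean all
    # yum makecache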

2. Install Hadoop, Sqoop, and Oozie in pseudo-distributed mode:
   Ubuntu# apt-get install -y hadoop-conf-pseudo sqoop oozie
   RedHat# yum install -y sqoop hadoop-conf-pseudo oozie

3. Init and start the services:
   # service hadoop-hdfs-namenode init
   # service oozie init
   # sudo -u hdfs hadoop fs -chmod -R 777 /
   # for i in hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-yarn-resourcemanager hadoop-yarn-nodemanager hadoop-mapreduce-historyserver ; do service $i start ; done
   # service oozie restart
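   (A quick sanity check, not part of the original steps: confirm each daemon actually came up before moving on.)
   # for i in hadoop-hdfs-namenode hadoop-hdfs-datanode hadoop-yarn-resourcemanager hadoop-yarn-nodemanager hadoop-mapreduce-historyserver ; do service $i status ; done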

4. Make sure that Oozie is up and running
    # oozie admin -version -oozie http://localhost:11000/oozie
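    If the server is up this prints its build version; checking the system status is another option (the output line below is what I'd expect from the CLI, not a verbatim capture from this run):
    # oozie admin -status -oozie http://localhost:11000/oozie
    System mode: NORMAL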

5. Install examples and try running a workflow
    # cd /tmp
    # tar xzvf /usr/share/doc/oozie*/oozie-examples.tar.gz
    # hadoop fs -mkdir -p /user/oozie/share/lib/sqoop
    # hadoop fs -mkdir -p /user/root
    # hadoop fs -put examples /user/root/examples
    # hadoop fs -put /usr/lib/sqoop/*.jar /usr/lib/sqoop/lib/*.jar /user/oozie/share/lib/sqoop
    # oozie job -DnameNode=hdfs://localhost:8020 -DjobTracker=localhost:8032 -config examples/apps/sqoop/job.properties -run -oozie http://localhost:11000/oozie
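    The job id printed by -run can then be used to watch the workflow, e.g. (standard Oozie CLI usage, substitute the actual id):
    # oozie job -info <job-id> -oozie http://localhost:11000/oozie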

At this point the workflow fails with the following:

eption: cache file (mapreduce.job.cache.files) scheme: "hdfs", host: "localhost", port: 8020, file: "/user/oozie/share/lib/sqoop/hsqldb-1.8.0.10.jar", conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost:8020/tmp/hadoop-yarn/staging/root/.staging/job_1353549235835_0004/libjars/hsqldb-1.8.0.10.jar
        at org.apache.hadoop.mapreduce.v2.util.MRApps.parseDistributedCacheArtifacts(MRApps.java:338)
        at org.apache.hadoop.mapreduce.v2.util.MRApps.setupDistributedCache(MRApps.java:273)
        at org.apache.hadoop.mapred.YARNRunner.createApplicationSubmissionContext(YARNRunner.java:419)
        at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:288)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:391)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
        at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1236)
        at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:141)
        at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:202)
        at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:465)
        at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:403)
        at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
        at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
        at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
        at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
        at org.apache.oozie.action.hadoop.SqoopMain.runSqoopJob(SqoopMain.java:205)
        at org.apache.oozie.action.hadoop.SqoopMain.run(SqoopMain.java:174)
        at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:37)
        at org.apache.oozie.action.hadoop.SqoopMain.main(SqoopMain.java:47)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:473)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:400)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:335)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:157)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:152)

Intercepting System.exit(1)

<<< Invocation of Main class completed <<<
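My reading of the error (an interpretation, not something I've traced through the code) is that the same hsqldb jar gets shipped to the distributed cache twice: once from the sharelib populated in step 5 and once via the libjars staged for the Sqoop job, and the Hadoop 2 cache check rejects the duplicate. A quick way to confirm the jar sits in both source locations (just a suggested check, not part of the repro):
    # hadoop fs -ls /user/oozie/share/lib/sqoop | grep hsqldb
    # ls /usr/lib/sqoop/lib/ | grep hsqldb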

Thanks,
Roman.

On Wed, Nov 21, 2012 at 12:00 PM, Mohammad Islam <misla...@yahoo.com> wrote:
> Dear Oozie community,
>
> The release candidate 0 for Oozie 3.3.0 is available.
>
> Oozie 3.3.0 has the following new features:
> 1. Bulk Monitoring API - Consolidated view of jobs
> 2. Eliminate redundancies in XML through the global section
> 3. Add formal parameters to XML for early validation
> 4. Visualize color-coded job DAG at runtime
> 5. Load HBase/HCat credentials in job conf
> 6. Support direct map-reduce job submission through the Oozie CLI without a workflow XML
> 7. Add support for multiple/configurable sharelibs for each action type
>
> In addition, it includes several improvements for performance and stability
> and several bug fixes. A detailed release log can be found at:
> http://people.apache.org/~kamrul/oozie-3.3.0-rc0/release-log.txt
>
> Keys used to sign the release are available at
> http://www.apache.org/dist/oozie/KEYS
>
> Please download, test, and try it out:
> http://people.apache.org/~kamrul/oozie-3.3.0-rc0/
>
> The release, md5 signature, gpg signature, and rat report can all be found
> at the above URL.
>
> The vote closes in 3 days.
>
> Regards,
>
> Mohammad
