Thanks, Yaniv, for the reply. The job now starts but completes with some
errors.

I wanted to know if the current implementation has the ability to trigger
the job from a Spark batch jar file, instead of reading from a GitHub repo?
Or should this be considered a feature request?

Please find the error logs below.

===> Executor 0000000002-efce1eff-b351-4bc3-9bb2-776af04d3b3b registered
===> a provider for group spark was created
===> launching task: 0000000002
===> ================= started action start =================
===> val data = 1 to 1000
===> val rdd = sc.parallelize(data)
===> val odd = rdd.filter(n => n%2 != 0).toDF("number")
===> ================= finished action start =================
===> complete task: 0000000002
===> launching task: 0000000003
===> ================= started action start =================
===> val highNoDf = AmaContext.getDataFrame("start", "odd").where("number > 100")
===> highNoDf.write.mode(SaveMode.Overwrite).json(s"${env.outputRootPath}/dtatframe_res")
===> org.apache.spark.SparkException: Job aborted.
===> Executor 0000000003-ada561dd-f38b-4741-9ca5-eeb20780bbcf registered
===> a provider for group spark was created
===> launching task: 0000000003
===> ================= started action step2 =================
===> val highNoDf = AmaContext.getDataFrame("start", "odd").where("number > 100")
===> highNoDf.write.mode(SaveMode.Overwrite).json(s"${env.outputRootPath}/dtatframe_res")
===> org.apache.spark.SparkException: Job aborted.
===> Executor 0000000003-c65bcfd5-215b-46dc-9bd8-b38d31f50bb1 registered
===> a provider for group spark was created
===> launching task: 0000000003
===> ================= started action step2 =================
===> val highNoDf = AmaContext.getDataFrame("start", "odd").where("number > 100")
===> highNoDf.write.mode(SaveMode.Overwrite).json(s"${env.outputRootPath}/dtatframe_res")
===> org.apache.spark.SparkException: Job aborted.
===> moving to err action null
2017-10-31 10:53:37.730:INFO:oejs.ServerConnector:Thread-73: Stopped ServerConnector@58ea606c{HTTP/1.1}{0.0.0.0:8000}
2017-10-31 10:53:37.732:INFO:oejsh.ContextHandler:Thread-73: Stopped o.e.j.s.ServletContextHandler@8c3b9d{/,file:/home/shad/apps/apache-amaterasu-0.2.0-incubating/dist/,UNAVAILABLE}
I1031 10:53:37.733896 36249 sched.cpp:2021] Asked to stop the driver
I1031 10:53:37.734133 36249 sched.cpp:1203] Stopping framework afd772c2-509b-4782-a4c6-4cd9a2985cc2-0001


W00t amaterasu job is finished!!!
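
In case it helps with debugging, below is a minimal standalone Spark
sketch of what the two actions appear to run, with plain Spark in place
of Amaterasu's AmaContext and a placeholder output path (both are my
assumptions, not the actual job definition):

import org.apache.spark.sql.{SaveMode, SparkSession}

object Step2Repro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("step2-repro").getOrCreate()
    import spark.implicits._

    // action "start": the odd-numbers DataFrame from the log above
    val odd = spark.sparkContext
      .parallelize(1 to 1000)
      .filter(n => n % 2 != 0)
      .toDF("number")

    // action "step2": the filter and JSON write that abort above
    val highNoDf = odd.where("number > 100")
    highNoDf.write.mode(SaveMode.Overwrite).json("/tmp/dtatframe_res")

    spark.stop()
  }
}

Running this directly via spark-submit might show whether the failure is
in the write itself or in how the "odd" DataFrame is handed between the
two Amaterasu actions.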

Thanks,
Shad

On Tue, Oct 31, 2017 at 5:02 AM, Yaniv Rodenski <ya...@shinto.io> wrote:

> Hi Shad,
>
> Sorry about that, there was indeed an issue with the job definition.
> Should be fine now.
>
> Cheers,
> Yaniv
>
> On Mon, Oct 30, 2017 at 9:02 PM, Shad Amez <shad.ame...@gmail.com> wrote:
>
> > Hi All,
> >
> > I started tinkering with the source code of Amaterasu and just wanted
> > to confirm whether I am missing any steps.
> >
> > Here are the steps that I followed:
> >
> > 1. Installed a single-node Mesos cluster (version 1.4) on Ubuntu 16.04
> > 2. Generated the Amaterasu tar file after building the project from source.
> > 3. Tested Mesos by executing the following command:
> >    mesos-execute --master=$MASTER --name="cluster-test" --command="sleep 5"
> > 4. Ran the command for deploying the Spark job using Amaterasu:
> >    ./ama-start.sh --repo="https://github.com/shintoio/amaterasu-job-sample.git" --branch="master" --env="test" --report="code"
> >
> > Following is the error log:
> >
> > repo: https://github.com/shintoio/amaterasu-job-sample.git
> > 2017-10-30 14:00:09.761:INFO::main: Logging initialized @430ms
> > 2017-10-30 14:00:09.829:INFO:oejs.Server:main: jetty-9.2.z-SNAPSHOT
> > 2017-10-30 14:00:09.864:INFO:oejsh.ContextHandler:main: Started o.e.j.s.ServletContextHandler@8c3b9d{/,file:/home/shad/apps/apache-amaterasu-0.2.0-incubating/dist/,AVAILABLE}
> > 2017-10-30 14:00:09.882:INFO:oejs.ServerConnector:main: Started ServerConnector@58ea606c{HTTP/1.1}{0.0.0.0:8000}
> > 2017-10-30 14:00:09.882:INFO:oejs.Server:main: Started @553ms
> > SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> > SLF4J: Defaulting to no-operation (NOP) logger implementation
> > SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
> > I1030 14:00:10.085204 16586 sched.cpp:232] Version: 1.4.0
> > I1030 14:00:10.088812 16630 sched.cpp:336] New master detected at master@127.0.1.1:5050
> > I1030 14:00:10.089205 16630 sched.cpp:352] No credentials provided. Attempting to register without authentication
> > I1030 14:00:10.090991 16624 sched.cpp:759] Framework registered with e72b9609-4b7f-4509-b1a4-bd2055d674aa-0002
> > Exception in thread "Thread-13" while scanning for the next token
> > found character '\t(TAB)' that cannot start any token. (Do not use \t(TAB) for indentation)
> >  in 'reader', line 2, column 1:
> >     "name":"test",
> >     ^
> >
> > at org.yaml.snakeyaml.scanner.ScannerImpl.fetchMoreTokens(ScannerImpl.java:420)
> > at org.yaml.snakeyaml.scanner.ScannerImpl.checkToken(ScannerImpl.java:226)
> > at org.yaml.snakeyaml.parser.ParserImpl$ParseImplicitDocumentStart.produce(ParserImpl.java:194)
> > at org.yaml.snakeyaml.parser.ParserImpl.peekEvent(ParserImpl.java:157)
> > at org.yaml.snakeyaml.parser.ParserImpl.checkEvent(ParserImpl.java:147)
> > at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:104)
> > at org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:122)
> > at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:505)
> > at org.yaml.snakeyaml.Yaml.load(Yaml.java:436)
> > at org.apache.amaterasu.leader.utilities.DataLoader$.yamlToMap(DataLoader.scala:87)
> > at org.apache.amaterasu.leader.utilities.DataLoader$$anonfun$3.apply(DataLoader.scala:68)
> > at org.apache.amaterasu.leader.utilities.DataLoader$$anonfun$3.apply(DataLoader.scala:68)
> > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> > at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> > at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> > at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> > at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> > at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
> > at org.apache.amaterasu.leader.utilities.DataLoader$.getExecutorData(DataLoader.scala:68)
> > at org.apache.amaterasu.leader.mesos.schedulers.JobScheduler$$anonfun$resourceOffers$1.apply(JobScheduler.scala:163)
> > at org.apache.amaterasu.leader.mesos.schedulers.JobScheduler$$anonfun$resourceOffers$1.apply(JobScheduler.scala:128)
> > at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> > at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> > at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> > at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> > at org.apache.amaterasu.leader.mesos.schedulers.JobScheduler.resourceOffers(JobScheduler.scala:128)
> > I1030 14:00:12.991056 16624 sched.cpp:2055] Asked to abort the driver
> > I1030 14:00:12.991195 16624 sched.cpp:1233] Aborting framework e72b9609-4b7f-4509-b1a4-bd2055d674aa-0002
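> >
> > Side note: the SnakeYAML error above points at tab indentation in the
> > job-definition YAML, which the YAML spec forbids. As a hypothetical
> > sanity check (this is not Amaterasu code; the object name and the
> > file-path argument are made up), the file can be parsed the same way
> > DataLoader.yamlToMap does before submitting the job:
> >
> > import org.yaml.snakeyaml.Yaml
> > import scala.io.Source
> >
> > object YamlCheck {
> >   def main(args: Array[String]): Unit = {
> >     // SnakeYAML throws the ScannerException seen above if the file
> >     // uses tabs for indentation
> >     val parsed: Any = new Yaml().load(Source.fromFile(args(0)).mkString)
> >     println(s"${args(0)} parsed OK: $parsed")
> >   }
> > }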
> >
> > Thanks,
> > Shad
> >
>
>
>
> --
> Yaniv Rodenski
>
