There is spark-jobserver (SJS), which is a REST interface for Spark and
Spark SQL.
You can deploy a jar file with your job implementations to spark-jobserver
and use its REST API to submit jobs in sync or async mode.
In async mode you need to poll SJS to get the job result;
the result might be actual data in JSON, or a path on S3/HDFS pointing to the data.
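
Below is a minimal sketch of that flow in Python with requests. The host/port,
app name, jar name, job class, and config key are assumptions, and the exact
response shape varies between job-server versions, so treat it as illustrative
only:

    import time
    import requests

    SJS = "http://localhost:8090"  # assumed job-server address

    # 1. Upload a jar containing the job implementations.
    with open("my-jobs.jar", "rb") as jar:          # hypothetical jar name
        requests.post(SJS + "/jars/myapp", data=jar)

    # 2a. Sync mode: the HTTP response itself carries the job result.
    r = requests.post(
        SJS + "/jobs",
        params={"appName": "myapp",
                "classPath": "com.example.MyJob",   # hypothetical job class
                "sync": "true"},
        data="input.string = a b c a b")            # job config passed in the body
    print(r.json())

    # 2b. Async mode: submit, then poll until the job finishes.
    r = requests.post(
        SJS + "/jobs",
        params={"appName": "myapp", "classPath": "com.example.MyJob"},
        data="input.string = a b c a b")
    body = r.json()
    job_id = body.get("jobId") or body["result"]["jobId"]   # shape varies by SJS version
    while True:
        job = requests.get(SJS + "/jobs/" + job_id).json()
        if job.get("status") in ("FINISHED", "ERROR"):
            print(job)   # actual data, or an S3/HDFS path, depending on the job
            break
        time.sleep(1)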

There are instructions on how to start job-server on AWS EMR and submit a
simple wordcount job using curl:
https://github.com/spark-jobserver/spark-jobserver/blob/master/doc/EMR.md

On Mon, Feb 29, 2016 at 12:54 PM, skaarthik oss <skaarthik....@gmail.com>
wrote:

> Check out http://toree.incubator.apache.org/. It might help with your
> need.
>
>
>
> *From:* moshir mikael [mailto:moshir.mik...@gmail.com]
> *Sent:* Monday, February 29, 2016 5:58 AM
> *To:* Alex Dzhagriev <dzh...@gmail.com>
> *Cc:* user <user@spark.apache.org>
> *Subject:* Re: Spark Integration Patterns
>
>
>
> Thanks, will check too, however: I just want to use Spark core RDDs and
> standard data sources.
>
>
>
> On Mon, Feb 29, 2016 at 2:54 PM, Alex Dzhagriev <dzh...@gmail.com> wrote:
>
> Hi Moshir,
>
>
>
> Regarding streaming, you can take a look at Spark Streaming, the
> micro-batching framework. If it satisfies your needs, it has a bunch of
> integrations: the source for the jobs could be Kafka, Flume, or Akka.
>
>
>
> Cheers, Alex.
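
A minimal sketch of the micro-batching approach Alex describes above, using
Spark Streaming with a Kafka direct stream (the KafkaUtils API from the Spark
1.x era, which needs the spark-streaming-kafka artifact on the classpath; the
broker address and topic name are assumptions):

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext
    from pyspark.streaming.kafka import KafkaUtils   # Spark 1.x streaming-kafka API

    sc = SparkContext(appName="KafkaWordCount")
    ssc = StreamingContext(sc, batchDuration=10)      # 10-second micro-batches

    # Direct stream from Kafka; broker and topic are assumptions.
    stream = KafkaUtils.createDirectStream(
        ssc, ["events"], {"metadata.broker.list": "broker:9092"})

    counts = (stream.map(lambda kv: kv[1])            # keep only the message value
                    .flatMap(lambda line: line.split())
                    .map(lambda word: (word, 1))
                    .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()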
>
>
>
> On Mon, Feb 29, 2016 at 2:48 PM, moshir mikael <moshir.mik...@gmail.com>
> wrote:
>
> Hi Alex,
>
> thanks for the link. Will check it.
>
> Does someone know of a more streamlined approach?
>
>
>
>
>
>
>
> On Mon, Feb 29, 2016 at 10:28 AM, Alex Dzhagriev <dzh...@gmail.com> wrote:
>
> Hi Moshir,
>
>
>
> I think you can use the REST API provided with Spark:
> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/rest/RestSubmissionServer.scala
>
>
>
> Unfortunately, I haven't found any documentation, but it looks fine.
>
> Thanks, Alex.
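
For reference, a minimal sketch of posting a submission to the
RestSubmissionServer Alex links above (the standalone master's submission
endpoint, typically on port 6066). The host, jar location, main class, and
Spark version string are assumptions, and the request schema may differ
across Spark versions:

    import requests

    MASTER = "http://spark-master:6066"   # assumed standalone master REST port

    payload = {
        "action": "CreateSubmissionRequest",
        "appResource": "hdfs:///apps/my-app.jar",    # hypothetical jar location
        "mainClass": "com.example.MyApp",            # hypothetical main class
        "appArgs": [],
        "clientSparkVersion": "1.6.0",
        "environmentVariables": {"SPARK_ENV_LOADED": "1"},
        "sparkProperties": {
            "spark.app.name": "MyApp",
            "spark.master": "spark://spark-master:7077",
            "spark.jars": "hdfs:///apps/my-app.jar",
            "spark.submit.deployMode": "cluster",
        },
    }

    # Submit the driver; the response contains a submissionId you can use
    # to check status at /v1/submissions/status/<submissionId>.
    r = requests.post(MASTER + "/v1/submissions/create", json=payload)
    print(r.json())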
>
>
>
> On Sun, Feb 28, 2016 at 3:25 PM, mms <moshir.mik...@gmail.com> wrote:
>
> Hi, I cannot find a simple example showing how a typical application can
> 'connect' to a remote Spark cluster and interact with it. Let's say I have
> a Python web application hosted somewhere *outside* the Spark cluster, with
> just Python installed on it. How can I talk to Spark without using a
> notebook, or using ssh to connect to a cluster master node? I know of
> spark-submit and spark-shell, however forking a process on a remote host to
> execute a shell script seems like a lot of effort. What are the recommended
> ways to connect and query Spark from a remote client? Thanks!
>
>
>
>
>
>
