Yeah I agree with Koert, it would be the lightest solution. I have
used it quite successfully and it just works.

There aren't many Spark specifics here; you can follow this example
https://github.com/jacobus/s4 on how to build your spray service.
The easy solution is then to have a SparkContext in your
HttpService: the context is initialized at bootstrap, computes the
RDDs you want to run queries on, and caches them. In your routes,
you query the cached RDDs.
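A minimal sketch of that setup, assuming spray 1.x (`SimpleRoutingApp`) and a Spark app launched in client mode; the input path and the `/count` route are hypothetical placeholders:

```scala
import akka.actor.ActorSystem
import org.apache.spark.{SparkConf, SparkContext}
import spray.routing.SimpleRoutingApp

object QueryService extends App with SimpleRoutingApp {
  implicit val system = ActorSystem("query-service")

  // Initialize the SparkContext once, at bootstrap.
  val sc = new SparkContext(new SparkConf().setAppName("query-service"))

  // Compute the RDD you want to query and cache it, so each
  // HTTP request hits memory instead of recomputing from scratch.
  val lines = sc.textFile("hdfs:///data/logs").cache()
  lines.count() // force materialization of the cache

  startServer(interface = "0.0.0.0", port = 8080) {
    path("count" / Segment) { term =>
      get {
        // Each request runs a Spark job against the cached RDD.
        complete(lines.filter(_.contains(term)).count().toString)
      }
    }
  }
}
```

Note that the SparkContext stays alive (and holds its executors) for the lifetime of the service.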

In my case I used spark+spray a bit differently for an always-running
service, as I didn't want to hold on to the cluster resources forever.
At bootstrap, the app starts a Spark job that fetches the data and
preprocesses/precomputes an optimized structure
(domain-specific indexes), which is collected locally and then reused
across requests directly from RAM;
the Spark context is stopped when the job is done. Only the service
continues to run.
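The precompute-then-stop pattern above could look roughly like this; the index shape (a simple word-count Map) and the paths are hypothetical, the point being that Spark is only used at bootstrap and its resources are released afterwards:

```scala
import akka.actor.ActorSystem
import org.apache.spark.{SparkConf, SparkContext}
import spray.routing.SimpleRoutingApp

object PrecomputeService extends App with SimpleRoutingApp {
  implicit val system = ActorSystem("precompute-service")

  // 1. Run a Spark job at bootstrap to build a domain-specific index.
  val sc = new SparkContext(new SparkConf().setAppName("precompute"))
  val index: Map[String, Long] =
    sc.textFile("hdfs:///data/corpus")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _)
      .collect()    // bring the precomputed structure to the driver
      .toMap

  // 2. Release the cluster resources; only the web service keeps running.
  sc.stop()

  // 3. Serve requests straight from the in-memory structure, no Spark involved.
  startServer(interface = "0.0.0.0", port = 8080) {
    path("freq" / Segment) { word =>
      get {
        complete(index.getOrElse(word, 0L).toString)
      }
    }
  }
}
```

This only works when the precomputed structure fits in the driver's memory, which is the trade-off behind the design.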


Eugen


2014-06-25 9:07 GMT+02:00 Jaonary Rabarisoa <jaon...@gmail.com>:

> Hi all,
>
> Thank you for the reply. Is there any example of Spark running in client
> mode with spray? I think I will choose this approach.
>
>
> On Tue, Jun 24, 2014 at 4:55 PM, Koert Kuipers <ko...@tresata.com> wrote:
>
>> Run your Spark app in client mode together with a spray REST service
>> that the frontend can talk to.
>>
>>
>> On Tue, Jun 24, 2014 at 3:12 AM, Jaonary Rabarisoa <jaon...@gmail.com>
>> wrote:
>>
>>> Hi all,
>>>
>>> So far, I run my Spark jobs with the spark-shell or spark-submit command.
>>> I'd like to go further, and I wonder how to use Spark as the backend of a web
>>> application. Specifically, I want a frontend application (built with nodejs)
>>> to communicate with Spark on the backend, so that every query from the
>>> frontend is routed to Spark, and the results from Spark are sent back to the
>>> frontend.
>>> Have any of you already experimented with this kind of architecture?
>>>
>>>
>>> Cheers,
>>>
>>>
>>> Jaonary
>>>
>>
>>
>
