Pradeep,

Thanks for the assistance! We'll be trying this out and I'll certainly let
you know if we have questions.


Thanks,
June Taylor
System Administrator, Minnesota Population Center
University of Minnesota

On Fri, Apr 15, 2016 at 6:50 AM, Pradeep Chhetri <
pradeep.chhetr...@gmail.com> wrote:

> Hi June,
>
> Here is the spark marathon configuration you were asking:
> https://gist.github.com/pradeepchhetri/df6b71580a9f107378ffebc789d805ac
>
> I have included the script to start MesosClusterDispatcher in the
> above gist as well.
>
> I would suggest using this Dockerfile as a reference for building the
> Spark Docker image:
> https://github.com/apache/spark/blob/master/external/docker/spark-mesos/Dockerfile
>
> I have modified my Dockerfile to read environment variables and fill in the
> configuration template. These environment variables are passed through Marathon.
>
> And of course, we are here to help you out.
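
A minimal sketch of that env-to-template step might look like the following. The variable names, file paths, and values here are assumptions for illustration, not taken from the gist:

```shell
# Sketch of an entrypoint step: Marathon passes environment variables,
# and the container fills in a spark-defaults.conf template with them.
set -eu

# A tiny template with placeholder tokens (names are hypothetical).
cat > /tmp/spark-defaults.conf.template <<'EOF'
spark.master              @SPARK_MASTER_URL@
spark.executor.memory     @SPARK_EXECUTOR_MEMORY@
EOF

# In Marathon these would come from the app definition's "env" map.
SPARK_MASTER_URL="mesos://10.0.1.1:5050"
SPARK_EXECUTOR_MEMORY="2g"

# Render the template into the real config file.
sed -e "s|@SPARK_MASTER_URL@|$SPARK_MASTER_URL|" \
    -e "s|@SPARK_EXECUTOR_MEMORY@|$SPARK_EXECUTOR_MEMORY|" \
    /tmp/spark-defaults.conf.template > /tmp/spark-defaults.conf

cat /tmp/spark-defaults.conf
```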
>
> On Thu, Apr 14, 2016 at 5:03 PM, June Taylor <j...@umn.edu> wrote:
>
>> Shuai,
>>
>> Thank you for your reply. Are you actually using this docker image in
>> Marathon successfully? If so, please share your JSON for the application,
>> as that would help me understand exactly what you suggest.
>>
>>
>> Thanks,
>> June Taylor
>> System Administrator, Minnesota Population Center
>> University of Minnesota
>>
>> On Thu, Apr 14, 2016 at 9:23 AM, Shuai Lin <linshuai2...@gmail.com>
>> wrote:
>>
>>> To run the dispatcher in Marathon, I would recommend using a Docker image
>>> like mesosphere/spark: https://hub.docker.com/r/mesosphere/spark/tags/
>>>
>>> One problem is how to access the dispatcher, since it may be launched on
>>> any one of the slaves. You can set up a service discovery mechanism like
>>> marathon-lb or mesos-dns for this purpose, but that may be a little overkill
>>> if you don't need it for anything else.
>>>
>>> One simple approach is to specify --net=host in the Marathon task for the
>>> dispatcher, and run haproxy on your master server so that it tries all
>>> the slaves:
>>>
>>> listen mesos-spark-dispatcher 0.0.0.0:7077
>>>     server node1 10.0.1.1:7077 check
>>>     server node2 10.0.1.2:7077 check
>>>     server node3 10.0.1.3:7077 check
>>>
>>>
>>> Then use "--master=mesos://yourmaster:7077" in your spark-submit command.
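
Along these lines, a minimal Marathon app definition for the dispatcher might look like the following sketch. The image tag, master address, resource sizes, and Spark install path are assumptions, not values from this thread:

```json
{
  "id": "/spark-mesos-dispatcher",
  "cmd": "/opt/spark/bin/spark-class org.apache.spark.deploy.mesos.MesosClusterDispatcher --master mesos://10.0.1.1:5050 --port 7077 --webui-port 8081 --name spark-dispatcher",
  "cpus": 1.0,
  "mem": 1024,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mesosphere/spark",
      "network": "HOST"
    }
  }
}
```

With "network": "HOST" the dispatcher binds port 7077 directly on whichever slave it lands on, which is what lets the haproxy config above reach it by trying each node.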
>>>
>>>
>>>
>>> On Thu, Apr 14, 2016 at 10:03 PM, June Taylor <j...@umn.edu> wrote:
>>>
>>>> Pradeep,
>>>>
>>>> Thank you for your reply. I have read that documentation, but it leaves
>>>> out a lot of key pieces. Have you actually run MesosClusterDispatcher on
>>>> Marathon? If so, can you please share your JSON configuration for the
>>>> application?
>>>>
>>>>
>>>> Thanks,
>>>> June Taylor
>>>> System Administrator, Minnesota Population Center
>>>> University of Minnesota
>>>>
>>>> On Wed, Apr 13, 2016 at 11:32 AM, Pradeep Chhetri <
>>>> pradeep.chhetr...@gmail.com> wrote:
>>>>
>>>>> In cluster mode, you need to first run *MesosClusterDispatcher*
>>>>> application on marathon (Read more about that here:
>>>>> http://spark.apache.org/docs/latest/running-on-mesos.html#cluster-mode
>>>>> )
>>>>>
>>>>> In both client and cluster mode, you need to specify the --master flag
>>>>> when submitting a job. The only difference is that in cluster mode you
>>>>> specify the URL of the dispatcher
>>>>> (mesos://<dispatcher_address>:<dispatcher_port>), while in client mode you
>>>>> specify the URL of the Mesos master
>>>>> (mesos://<mesos_master>:<mesos_master_port>)
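
As an illustration, the two submission styles differ only in the --master target (the host names and application file below are placeholders):

```
# Client mode: --master points at the Mesos master itself.
spark-submit --master mesos://mesos-master.example.com:5050 my_job.py

# Cluster mode: --master points at the MesosClusterDispatcher
# started via Marathon.
spark-submit --deploy-mode cluster \
    --master mesos://dispatcher.example.com:7077 my_job.py
```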
>>>>>
>>>>> On Wed, Apr 13, 2016 at 3:24 PM, June Taylor <j...@umn.edu> wrote:
>>>>>
>>>>>> I'm interested in what the "best practice" is for running pyspark
>>>>>> jobs against a mesos cluster.
>>>>>>
>>>>>> Right now, we're simply passing the --master mesos://host:5050 flag,
>>>>>> which appears to register a framework properly.
>>>>>>
>>>>>> However, I was told this isn't "cluster mode" - and I'm a bit
>>>>>> confused. What is the recommended method of doing this?
>>>>>>
>>>>>> Thanks,
>>>>>> June Taylor
>>>>>> System Administrator, Minnesota Population Center
>>>>>> University of Minnesota
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Regards,
>>>>> Pradeep Chhetri
>>>>>
>>>>
>>>>
>>>
>>
>
>
> --
> Regards,
> Pradeep Chhetri
>
