Hi everyone, a quick question in this context: what underlying persistent
storage are you using for this containerized environment? Thanks

On Thursday, March 10, 2016, yanlin wang <yanl...@me.com> wrote:

> How do you make a driver running inside a Docker container reachable from
> the Spark workers?
>
> Would you share your driver Docker image? I am trying to put only the
> driver in Docker, with Spark running on YARN outside the container, and I
> don't want to use --net=host.
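>
> One approach I am considering (just a sketch on my side; the ports, host
> IP, and image name below are made up) is to publish fixed driver ports and
> have Spark advertise the host's address:
>
>     docker run -p 45000:45000 -p 45001:45001 my-spark-driver \
>       spark-submit --master yarn \
>         --conf spark.driver.host=<host-ip> \
>         --conf spark.driver.port=45000 \
>         --conf spark.blockManager.port=45001 \
>         <app-jar>
>
> That way the workers would reach the driver at <host-ip>:45000 instead of
> the container's internal address, without --net=host.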
>
> Thx
> Yanlin
>
> On Mar 10, 2016, at 11:06 AM, Guillaume Eynard Bontemps <
> g.eynard.bonte...@gmail.com> wrote:
>
> Glad to hear it. Thanks, all, for sharing your solutions.
>
> On Thu, Mar 10, 2016 at 19:19, Eran Chinthaka Withana <
> eran.chinth...@gmail.com> wrote:
>
>> Phew, it worked. All I had to do was add export
>> SPARK_JAVA_OPTS="-Dspark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6"
>> before calling spark-submit. Guillaume, thanks for the pointer.
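>>
>> For anyone hitting the same issue, the complete workaround is just the
>> two lines below (the master URL, main class, and application jar are
>> placeholders for whatever your job already uses):
>>
>>     export SPARK_JAVA_OPTS="-Dspark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6"
>>     spark-submit --master mesos://<master-host>:5050 --class <main-class> <app-jar>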
>>
>> Timothy, thanks for looking into this. Looking forward to seeing a fix soon.
>>
>> Thanks,
>> Eran Chinthaka Withana
>>
>> On Thu, Mar 10, 2016 at 10:10 AM, Tim Chen <t...@mesosphere.io> wrote:
>>
>>> Hi Eran,
>>>
>>> I need to investigate, but perhaps that's true: we're passing all the
>>> options through SPARK_JAVA_OPTS rather than --conf.
>>>
>>> I'll take a look at the bug, but in the meantime please try the
>>> workaround and see if it fixes your problem.
>>>
>>> Tim
>>>
>>> On Thu, Mar 10, 2016 at 10:08 AM, Eran Chinthaka Withana <
>>> eran.chinth...@gmail.com> wrote:
>>>
>>>> Hi Timothy
>>>>
>>>>> What version of Spark are you guys running?
>>>>>
>>>>
>>>> I'm using Spark 1.6.0. You can see the Dockerfile I used here:
>>>> https://github.com/echinthaka/spark-mesos-docker/blob/master/docker/mesos-spark/Dockerfile
>>>>
>>>>
>>>>
>>>>> And also, did you set the working dir in your image to be Spark home?
>>>>>
>>>>
>>>> Yes, I did. You can see it here: https://goo.gl/8PxtV8
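>>>>
>>>> For anyone building a similar image, that amounts to a single Dockerfile
>>>> line (the path here is a placeholder; use whatever SPARK_HOME is inside
>>>> your image):
>>>>
>>>>     WORKDIR /opt/spark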
>>>>
>>>> Could it be because of
>>>> https://issues.apache.org/jira/browse/SPARK-13258, as Guillaume pointed
>>>> out above? As you can see, I'm passing the docker image URI to
>>>> spark-submit through --conf spark.mesos.executor.docker.image, as shown
>>>> below.
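>>>>
>>>> The full command is of the form (the master URL and application jar are
>>>> placeholders):
>>>>
>>>>     spark-submit --master mesos://<master-host>:5050 \
>>>>       --conf spark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6 \
>>>>       <app-jar>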
>>>>
>>>> Thanks,
>>>> Eran
>>>>
>>>>
>>>>
>>>
>>
>
