Re: Spark on Mesos: Spark issuing hundreds of SUBSCRIBE requests / second and crashing Mesos

2018-07-23 Thread Susan X. Huynh
Hi Nimi,

This sounds similar to a bug I have come across before. See:
https://jira.apache.org/jira/browse/SPARK-22342?focusedCommentId=16429950&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16429950

It turned out to be a bug in libmesos (the client library used to
communicate with Mesos): "using a failoverTimeout of 0 with Mesos native
scheduler client can result in infinite subscribe loop" (
https://issues.apache.org/jira/browse/MESOS-8171). It can be fixed by
upgrading to a version of libmesos that has the fix.
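
If upgrading libmesos right away isn't an option, one possible mitigation - a
sketch only, assuming you can move to Spark 2.3+ where the
spark.mesos.driver.failoverTimeout setting was added - is to register the
driver framework with a non-zero failover timeout, so the native client never
subscribes with the problematic value of 0:

    // Sketch, not a tested fix: avoid the failoverTimeout=0 case from MESOS-8171.
    // The value 60.0 (seconds) is illustrative only.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.mesos.driver.failoverTimeout", "60.0")

Upgrading libmesos remains the proper fix, since the looping logic lives in
the client library itself.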

Susan


On Fri, Jul 13, 2018 at 3:39 PM, Nimi W  wrote:

> I've come across an issue with Mesos 1.4.1 and Spark 2.2.1. We launch
> Spark tasks using the MesosClusterDispatcher in cluster mode. On a couple
> of occasions, we have noticed that when the Spark Driver crashes (due to
> various causes - human error, network error), sometimes, when the Driver is
> restarted, it issues hundreds of SUBSCRIBE requests to Mesos per second
> until the Mesos master node gets overwhelmed and crashes. It does this
> again to the next master node, over and over until it takes down all the
> master nodes. Usually the only thing that fixes it is manually stopping the
> driver and restarting it.
>
> Here is a snippet of the log of the Mesos master, which just logs the
> repeated SUBSCRIBE command:
> https://gist.github.com/nemosupremo/28ef4acfd7ec5bdcccee9789c021a97f
>
> Here is the output of the Spark framework:
> https://gist.github.com/nemosupremo/d098ef4def28ebf96c14d8f87aecd133
> which also just repeats 'Transport endpoint is not connected' over and over.
>
> Thanks for any insights
>
>
>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: Interest in adding ability to request GPU's to the spark client?

2018-07-23 Thread Susan X. Huynh
There's some discussion and a proposal for supporting GPUs in this Spark JIRA:
https://jira.apache.org/jira/browse/SPARK-24615 "Accelerator-aware task
scheduling for Spark"
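
To give a flavor of the direction, the sketch below shows the kind of
per-executor / per-task GPU request that the accelerator-aware scheduling work
is aiming at. These property names are illustrative of that proposal and are
not available in the Spark versions discussed in this thread:

    // Sketch only: request one GPU per executor and one GPU per task,
    // in the style proposed by SPARK-24615 (accelerator-aware scheduling).
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.executor.resource.gpu.amount", "1")
      .set("spark.task.resource.gpu.amount", "1")

The cluster manager (YARN 3.0+, Kubernetes, or Mesos) would still have to be
able to isolate and hand out the GPUs; the JIRA above tracks the scheduling
side in Spark.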

Susan

On Thu, Jul 12, 2018 at 11:17 AM, Mich Talebzadeh  wrote:

> I agree.
>
> Adding GPU capability to Spark in my opinion is a must for Advanced
> Analytics.
>
> HTH
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Thu, 12 Jul 2018 at 19:14, Maximiliano Felice <
> maximilianofel...@gmail.com> wrote:
>
>> Hi,
>>
>> I've been meaning to reply to this email for a while now, sorry for
>> taking so much time.
>>
>> I personally think that adding GPU resource management will allow us to
>> boost some ETL performance a lot. For the last year, I've worked on
>> transforming some Machine Learning pipelines from Python (Numpy/Pandas) to
>> Spark. Adding GPU capabilities to Spark would:
>>
>>
>>- Accelerate many matrix and batch computations we currently have in
>>Tensorflow
>>- Allow us to use spark for the whole pipeline (combined with
>>possibly better online serving)
>>- Let us trigger better Hyperparameter selection directly from Spark
>>
>>
>> There will be many more aspects of this that we could explore. What do
>> the rest of the list think?
>>
>> See you
>>
>> On Wed, May 16, 2018 at 2:58, Daniel Galvez ()
>> wrote:
>>
>>> Hi all,
>>>
>>> Is anyone here interested in adding the ability to request GPUs to
>>> Spark's client (i.e., spark-submit)? As of now, YARN 3.0's resource manager
>>> server has the ability to schedule GPUs as resources via cgroups, but the
>>> Spark client lacks the ability to request these.
>>>
>>> The ability to guarantee GPU resources would be practically useful for
>>> my organization. Right now, the only way to do that is to request the
>>> entire memory (or all CPUs) on a node, which is very kludgey and wastes
>>> resources, especially if your node has more than 1 GPU and your code was
>>> written such that an executor can use only one GPU at a time.
>>>
>>> I'm just not sure of a good way to make use of libraries like
>>> Databricks' Deep Learning pipelines
>>> <https://github.com/databricks/spark-deep-learning> for GPU-heavy
>>> computation otherwise, unless you are lucky enough to be in an organization
>>> which is able to virtualize compute nodes such that each node will have
>>> only one GPU. Of course, I realize that many Databricks customers are using
>>> Azure or AWS, which allow you to do this easily. Is this what people
>>> normally do in industry?
>>>
>>> This is something I am interested in working on, unless others out there
>>> have advice on why this is a bad idea.
>>>
>>> Unfortunately, I am not familiar enough with Mesos and Kubernetes right
>>> now to know how they schedule GPU resources and whether adding support for
>>> requesting GPUs from them to the spark-submit client would be simple.
>>>
>>> Daniel
>>>
>>> --
>>> Daniel Galvez
>>> http://danielgalvez.me
>>> https://github.com/galv
>>>
>>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: Spark on Mesos - Weird behavior

2018-07-23 Thread Susan X. Huynh
Hi Thodoris,

Maybe setting "spark.scheduler.minRegisteredResourcesRatio" to > 0 would
help? Default value is 0 with Mesos.

"The minimum ratio of registered resources (registered resources / total
expected resources) (resources are executors in yarn mode and Kubernetes
mode, CPU cores in standalone mode and Mesos coarse-grained mode
['spark.cores.max' value is total expected resources for Mesos
coarse-grained mode] ) to wait for before scheduling begins. Specified as a
double between 0.0 and 1.0. Regardless of whether the minimum ratio of
resources has been reached, the maximum amount of time it will wait before
scheduling begins is controlled by config
spark.scheduler.maxRegisteredResourcesWaitingTime." -
https://spark.apache.org/docs/latest/configuration.html
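
For example, with the numbers from your cluster (spark.cores.max=30,
spark.executor.cores=10, so 3 expected executors), a sketch of a configuration
that makes the scheduler wait for all of the expected cores - or until the
waiting-time cap expires - before starting tasks could look like this:

    // Sketch only: delay scheduling until all of spark.cores.max has registered,
    // with a fallback cap on how long to wait. Values are illustrative.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.cores.max", "30")
      .set("spark.executor.cores", "10")
      .set("spark.executor.memory", "2g")
      .set("spark.scheduler.minRegisteredResourcesRatio", "1.0")
      .set("spark.scheduler.maxRegisteredResourcesWaitingTime", "120s")

Note this only delays when scheduling begins; it does not guarantee that Mesos
will ever offer enough resources for the third executor.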

Susan

On Wed, Jul 11, 2018 at 7:22 AM, Pavel Plotnikov <
pavel.plotni...@team.wrike.com> wrote:

> Oh, sorry, I missed that you use Spark without dynamic allocation. Anyway,
> I don't know whether these parameters work without dynamic allocation.
>
> On Wed, Jul 11, 2018 at 5:11 PM Thodoris Zois  wrote:
>
>> Hello,
>>
>> Yeah you are right, but I think that works only if you use Spark dynamic
>> allocation. Am I wrong?
>>
>> -Thodoris
>>
>> On 11 Jul 2018, at 17:09, Pavel Plotnikov 
>> wrote:
>>
>> Hi, Thodoris
>> You can configure resources per executor and manipulate the number of
>> executors instead of using spark.cores.max. I think the
>> spark.dynamicAllocation.minExecutors
>> and spark.dynamicAllocation.maxExecutors configuration values can help
>> you.
>>
>> On Tue, Jul 10, 2018 at 5:07 PM Thodoris Zois  wrote:
>>
>>> Actually, after some experiments we figured out that spark.cores.max /
>>> spark.executor.cores is the upper bound on the number of executors. Spark
>>> apps will run even if only one executor can be launched.
>>>
>>> Is there any way to also specify the lower bound? It is a bit annoying
>>> that it seems we can't control the resource usage of an application. By
>>> the way, we are not using dynamic allocation.
>>>
>>> - Thodoris
>>>
>>>
>>> On 10 Jul 2018, at 14:35, Pavel Plotnikov wrote:
>>>
>>> Hello Thodoris!
>>> Have you checked this:
>>>  - does the Mesos cluster have available resources?
>>>  - does Spark have tasks waiting in the queue for longer than the
>>> spark.dynamicAllocation.schedulerBacklogTimeout configuration value?
>>>  - and then, have you checked that Mesos sends offers to the Spark app's
>>> Mesos framework with at least 10 cores and 2GB of RAM?
>>>
>>> If Mesos does not have offers with 10 cores available, but does have offers
>>> with 8 or 9 cores, you can use smaller executors (for example, with 4 cores
>>> and 1 GB of RAM) to better fit the available resources on the nodes.
>>>
>>> Cheers,
>>> Pavel
>>>
>>> On Mon, Jul 9, 2018 at 9:05 PM Thodoris Zois  wrote:
>>>
>>>> Hello list,
>>>>
>>>> We are running Apache Spark on a Mesos cluster and we see weird executor
>>>> behavior. When we submit an app with e.g. 10 cores and 2GB of memory and
>>>> max cores 30, we expect to see 3 executors running on the cluster.
>>>> However, sometimes there are only 2... Spark applications are not the
>>>> only ones that run on the cluster. I guess that Spark starts executors
>>>> on the available offers even if they do not satisfy our needs. Is there
>>>> any configuration that we can use in order to prevent Spark from starting
>>>> when there are no resource offers for the total number of executors?
>>>>
>>>> Thank you
>>>> - Thodoris
>>>>
>>>> -
>>>> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>>>>
>>>>
>>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: Advice on multiple streaming job

2018-05-06 Thread Susan X. Huynh
Hi Dhaval,

Not sure if you have considered this: the port 4040 sounds like a driver UI
port. By default it will try up to 4056, but you can increase that number
with "spark.port.maxRetries". (
https://spark.apache.org/docs/latest/configuration.html) Try setting it to
"32". This would help if the only conflict is among the driver UI ports
(like if you have > 16 drivers running on the same host).
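
A minimal sketch (the value 32 is just an example - pick something larger than
the number of drivers you expect to share a host):

    // Sketch only: let the driver UI probe more ports above 4040 before failing.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.port.maxRetries", "32")

Alternatively, spark.ui.port can be set explicitly per job, as Dhaval mentions,
or the UI can be disabled with spark.ui.enabled=false if it isn't needed.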

Susan

On Sun, May 6, 2018 at 12:32 AM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:

> Use a scheduler that abstracts the network away, with a CNI for instance or
> other mechanisms (Mesos, Kubernetes, YARN). The CNI will allow you to always
> bind to the same ports because each container will have its own IP. Some
> other solutions like Mesos and Marathon can work without CNI, with host IP
> binding, but will manage the ports for you, ensuring there isn't any
> conflict.
>
> On Sat, May 5, 2018 at 17:10, Dhaval Modi <dhavalmod...@gmail.com> wrote:
>
>> Hi All,
>>
>> Need advice on executing multiple streaming jobs.
>>
>> Problem: We have hundreds of streaming jobs. Every streaming job uses a
>> new port. Also, Spark automatically checks ports from 4040 to 4056, after
>> which it fails. One workaround is to provide the port explicitly.
>>
>> Is there a way to tackle this situation? Or am I missing anything?
>>
>> Thanking you in advance.
>>
>> Regards,
>> Dhaval Modi
>> dhavalmod...@gmail.com
>>
>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: [Mesos] How to Disable Blacklisting on Mesos?

2018-04-09 Thread Susan X. Huynh
Hi Han,

You may be seeing the same issue I described here:
https://issues.apache.org/jira/browse/SPARK-22342?focusedCommentId=16411780&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16411780
Do you see "TASK_LOST" in your driver logs? I got past that issue by
updating my version of libmesos (see my second comment in the ticket).

There's also this PR that is in progress:
https://github.com/apache/spark/pull/20640

Susan

On Sun, Apr 8, 2018 at 4:06 PM, hantuzun <m...@hantuzun.com> wrote:

> Hi all,
>
> Spark currently has blacklisting enabled on Mesos, no matter what:
> [SPARK-19755][Mesos] Blacklist is always active for
> MesosCoarseGrainedSchedulerBackend
>
> Blacklisting also prevents new drivers from running on our nodes where
> previous drivers had failed tasks.
>
> We've tried restarting Spark dispatcher before sending new tasks. Even
> creating new machines (with the same hostname) does not help.
>
> Looking at TaskSetBlacklist
> <https://github.com/apache/spark/blob/e18d6f5326e0d9ea03d31de5ce04cb84d3b8ab37/core/src/main/scala/org/apache/spark/scheduler/TaskSetBlacklist.scala#L66>,
> I don't understand how a fresh Spark job submitted from a fresh Spark
> Dispatcher starts saying all the nodes are blacklisted right away. How does
> Spark know about previous task failures?
>
> This issue severely interrupts us. How could we disable blacklisting on
> Spark 2.3.0? Creative ideas are welcome :)
>
> Best,
> Han
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: external shuffle service in mesos

2018-01-22 Thread Susan X. Huynh
Hi Igor,

You made a good point about the tradeoffs. I think the main thing you would
get with Marathon is the accounting for resources (the memory and cpus
specified in the config file). That allows Mesos to manage the resources
properly. I don't think the other tools mentioned would reserve resources
from Mesos.

If you want more information about production ops for Mesos, you might want
to ask in the Mesos mailing list. Or, you can check out the
https://dcos.io/community/ project.

Susan

On Sat, Jan 20, 2018 at 11:59 PM, igor.berman <igor.ber...@gmail.com> wrote:

> Hi Susan
>
> In general I can get what I need without Marathon, by configuring the
> external shuffle service with puppet/ansible/chef + maybe some alerts for
> checks.
>
> I mean, in companies that don't have strong DevOps teams and want to install
> services as simply as possible just by config, Marathon might be useful;
> however, if a company already has a strong puppet/ansible/chef infra, adding
> and managing Marathon (an additional component) is less clearly worthwhile.
>
> WDYT?
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: external shuffle service in mesos

2018-01-20 Thread Susan X. Huynh
Hi Igor,

The best way I know of is with Marathon.
* Placement constraint: you could combine constraints in Marathon. Like:
"constraints": [
["hostname", "UNIQUE"],
["hostname", "LIKE", "host1|host2|host3"]
]
https://groups.google.com/forum/#!topic/marathon-framework/hfLUw3TIw2I

* You would have to use a workaround to deal with a dynamically sized
cluster: set the number of instances to be greater than the expected
cluster size.
https://jira.mesosphere.com/browse/MARATHON-3791?focusedCommentId=79976&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-79976
As the commenter notes, it's not ideal, it's just a workaround.

Susan

On Sat, Jan 20, 2018 at 8:33 AM, igor.berman <igor.ber...@gmail.com> wrote:

> Hi,
> wanted to get some advice regarding managing the external shuffle service in
> Mesos environments.
>
> In the Spark documentation Marathon is mentioned, however there is very
> limited documentation. I've tried to search for some documentation and it
> seems not too difficult to configure it under Marathon (e.g.
> https://github.com/NBCUAS/dcos-spark-shuffle-service/blob/master/marathon/mesos-shuffle-service.json),
> however I see a few problems:
>
> There is no clear way to deploy an application in Mesos on every node,
> see https://jira.mesosphere.com/browse/MARATHON-3791
> * it's not possible to guarantee on which nodes the shuffle service
> application will be placed (it is possible to guarantee, with a Mesos unique
> constraint, that only 1 shuffle service instance will be placed on a node)
> * in a cluster that has dynamic nodes joining/leaving, the config of the
> shuffle service must be adjusted (specifically the number of instances)
>
> So any production ops advice will be welcome.
> Igor
>
>
>
> --
> Sent from: http://apache-spark-user-list.1001560.n3.nabble.com/
>
> -
> To unsubscribe e-mail: user-unsubscr...@spark.apache.org
>
>


-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com


Re: Spark job only starts tasks on a single node

2017-12-07 Thread Susan X. Huynh
Sounds strange. Maybe it has to do with the job itself? What kind of job is
it? Have you gotten it to run on more than one node before? What's in the
spark-submit command?
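
For reference, the settings that most directly shape how many executors get
created and where they land in Mesos coarse-grained mode look roughly like the
sketch below (the values are placeholders, not a guess at your actual command):

    // Sketch only: executor count is bounded by spark.cores.max / spark.executor.cores;
    // spark.mesos.constraints restricts which offers (agents) are accepted.
    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.cores.max", "24")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "8g")
      .set("spark.mesos.constraints", "rack:r1") // hypothetical attribute:value constraint

Seeing the actual values for these would help narrow down whether the single
node is simply the only one whose offers satisfy them.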

Susan

On Wed, Dec 6, 2017 at 11:21 AM, Ji Yan <ji...@drive.ai> wrote:

> I am sure that the other agents have plenty of resources, but I
> don't know why Spark only scheduled executors on one single node, up to
> that node's capacity (it is a different node every time I run it, btw).
>
> I checked the DEBUG log from the Spark driver and didn't see any mention of
> declines. But from the log, it looks like it has only accepted one offer
> from Mesos.
>
> Also, it looks like no special role is required on the Spark side!
>
> On Wed, Dec 6, 2017 at 5:57 AM, Art Rand <art.r...@gmail.com> wrote:
>
>> Hello Ji,
>>
>> Spark will launch Executors round-robin on offers, so when the resources
>> on an agent get broken into multiple resource offers it's possible that
>> many Executors get placed on a single agent. However, from your
>> description, it's not clear why your other agents do not get Executors
>> scheduled on them. It's possible that the offers from your other agents are
>> insufficient in some way. The Mesos MASTER log should show offers being
>> declined by your Spark Driver - do you see that? If you have DEBUG level
>> logging in your Spark driver you should also see offers being declined
>> <https://github.com/apache/spark/blob/193555f79cc73873613674a09a7c371688b6dbc7/resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala#L576>
>> there. Finally if your Spark framework isn't receiving any resource offers,
>> it could be because of the roles you have established on your agents or
>> quota set on other frameworks, have you set up any of that? Hope this helps!
>>
>> Art
>>
>> On Tue, Dec 5, 2017 at 10:45 PM, Ji Yan <ji...@drive.ai> wrote:
>>
>>> Hi all,
>>>
>>> I am running Spark 2.0 on Mesos 1.1. I was trying to split up my job
>>> across several nodes. I try to set the number of executors by the formula
>>> (spark.cores.max / spark.executor.cores). The behavior I saw was that
>>> Spark will try to fill up one Mesos node with as many executors as it can,
>>> then it stops going to other Mesos nodes even though it has not yet
>>> finished scheduling all the executors I asked for! This is super weird!
>>>
>>> Did anyone notice this behavior before? Any help appreciated!
>>>
>>> Ji
>>>
>>>
>>
>>
>
>



-- 
Susan X. Huynh
Software engineer, Data Agility
xhu...@mesosphere.com