To: user@spark.apache.org
Subject: How to avoid long-running jobs blocking short-running jobs
Hi,

I use a Spark cluster to run ETL jobs and to run analysis computations on the data after the ETL stage. The ETL jobs can keep running for several hours, but the analysis computations are short-running jobs.
---------- Forwarded message ----------
From: conner
Date: Sat, 03 Nov 2018 12:34:01 +0330
Subject: How to avoid long-running jobs blocking short-running jobs
Hi,
What does your Spark deployment architecture look like? Standalone? YARN?
Mesos? Kubernetes? Each of those comes with a resource manager (not a
middleware) that lets you implement the kind of isolation you want.
In any case, you can try the fair scheduler in any of those solutions.
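As a concrete illustration of the fair-scheduler suggestion, here is a minimal sketch of Spark's `fairscheduler.xml` pool file; the pool names `etl` and `analysis`, and the weight/minShare values, are made up for this example:

```xml
<?xml version="1.0"?>
<allocations>
  <!-- Pool for the long-running ETL jobs: low weight, no guaranteed cores -->
  <pool name="etl">
    <schedulingMode>FIFO</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
  <!-- Pool for short analysis jobs: higher weight and a minimum core share,
       so they get resources even while ETL is running -->
  <pool name="analysis">
    <schedulingMode>FAIR</schedulingMode>
    <weight>3</weight>
    <minShare>2</minShare>
  </pool>
</allocations>
```

You enable it with `spark.scheduler.mode=FAIR` and point `spark.scheduler.allocation.file` at this file, then select a pool per thread in the driver with `sc.setLocalProperty("spark.scheduler.pool", "analysis")` before submitting the analysis jobs. Note that fair-scheduler pools only arbitrate between jobs inside a single SparkContext/application; separating different applications is the resource manager's job.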
Best regards
On Sat, Nov 03, 2018 at 02:04:01AM -0700, conner wrote:
> My solution is to find a good way to divide the spark cluster resource
> into two.
What about YARN and its queue management system?
--
nicolas
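To sketch the YARN queue idea: with the CapacityScheduler you can split the cluster into queues and submit each workload to its own queue. The queue names and capacity percentages below are illustrative, not a recommendation:

```xml
<!-- capacity-scheduler.xml (fragment): two queues under root -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>etl,analysis</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.etl.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analysis.capacity</name>
  <value>30</value>
</property>
```

Each application then picks its queue at submission time, e.g. `spark-submit --master yarn --queue analysis ...`, so short analysis jobs are not starved by the long-running ETL applications sitting in the other queue.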
Hi,

I use a Spark cluster to run ETL jobs and to run analysis computations on the data after the ETL stage. The ETL jobs can keep running for several hours, but the analysis computations are short-running jobs which can finish in a few seconds. The dilemma I am trapped in is that my application runs in a single JVM