>
> On Thu, Dec 7, 2017 at 10:18 AM, Susan X. Huynh <xhu...@mesosphere.io>
> wrote:
>
>> Sounds strange. Maybe it has to do with the job itself? What kind of job
>> is it? Have you gotten it to run on more than one node before? What's in
>> the spark-submit command?
or
> quota set on other frameworks, have you set up any of that? Hope this helps!
>
> Art
>
> On Tue, Dec 5, 2017 at 10:45 PM, Ji Yan <ji...@drive.ai> wrote:
>
>> Hi all,
>>
>> I am running Spark 2.0 on Mesos 1.1. I was trying to split up my job onto
Hi all,
I am running Spark 2.0 on Mesos 1.1. I was trying to split up my job onto
several nodes. I tried to set the number of executors using the formula
(spark.cores.max / spark.executor.cores). The behavior I saw was that Spark
will try to fill up one Mesos node with as many executors as it can, then
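For reference, those two settings are normally passed straight to
spark-submit; a minimal sketch, where the master URL, numbers, and file name
are only placeholders:

  # caps the job at 24 cores total and 4 cores per executor,
  # i.e. at most 24 / 4 = 6 executors if the offers allow it
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.cores.max=24 \
    --conf spark.executor.cores=4 \
    --conf spark.executor.memory=8g \
    my_job.py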
Dear spark users,
When running Spark on Docker, the Spark executors by default always run as
root. Is there a way to change this to another user?
Thanks
Ji
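One approach that is sometimes used (only a sketch; the image and registry
names here are hypothetical) is to build the executor image with a non-root
USER and point the Mesos executors at it via spark.mesos.executor.docker.image:

  # hypothetical Dockerfile for the executor image:
  #   FROM my-spark-base          # whatever Spark image you use today
  #   RUN useradd -m sparkuser
  #   USER sparkuser
  # then reference that image when submitting:
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.mesos.executor.docker.image=my-registry/spark-nonroot:latest \
    my_job.py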
Dear spark users,
Is there any mechanism in Spark that does not guarantee idempotent
execution? For example, for stragglers, the framework might start another
copy of a task, assuming the straggler is slow, while the straggler is still
running. This can be annoying when, say, the task is writing to
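The relaunch-a-copy behavior described here is Spark's speculative
execution, which is off by default; a sketch of the settings that control it
(master URL, values, and file name are placeholders):

  # speculation re-launches tasks that look slow relative to the rest of
  # the stage; if task side effects are not idempotent, leave it disabled
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.speculation=true \
    --conf spark.speculation.quantile=0.75 \
    --conf spark.speculation.multiplier=1.5 \
    my_job.py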
Dear spark users,
From this site, https://spark.apache.org/docs/latest/tuning.html, which
offers recommendations on setting the level of parallelism:
> Clusters will not be fully utilized unless you set the level of parallelism
> for each operation high enough. Spark automatically sets the number
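The knob that guide is talking about can be passed like any other conf; a
minimal sketch, with 200 as an arbitrary example value (many RDD operations
also accept an explicit numPartitions argument per call):

  # default number of partitions for shuffle operations when the code does
  # not ask for a specific numPartitions
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.default.parallelism=200 \
    my_job.py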
termined at start time.
> It's not influenced by your job itself.
>
>
> On Thu, Feb 2, 2017 at 2:42 PM, Ji Yan <ji...@drive.ai> wrote:
>
>> I tried setting spark.executor.cores per executor, but Spark seems to be
>> spinning up as many executors as possible
about this here:
> https://docs.mesosphere.com/1.8/usage/service-guides/spark/job-scheduling/.
> That doc is for DC/OS, but the configuration is the same.
>
> On Thu, Feb 2, 2017 at 1:06 PM, Ji Yan <ji...@drive.ai> wrote:
>
>> I was mainly confused why this is t
s the memory per executor. If you have no executor
> w/ 200GB memory, then the driver will accept no offers.
>
> On Thu, Feb 2, 2017 at 1:01 PM, Ji Yan <ji...@drive.ai> wrote:
>
>> sorry, to clarify, i was using --executor-memory for memory,
>> and --total-executor-cores
ory, --total-executor-cores, and --executor-cores)
>
> On Thu, Feb 2, 2017 at 12:41 PM, Ji Yan <ji...@drive.ai> wrote:
>
>> I have done an experiment on this today. It shows that only CPUs are
>> tolerant of insufficient cluster size when a job starts. On my cluster, I
>>
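For reference, the three command-line flags discussed in this thread map
onto the corresponding spark.* properties; a sketch with placeholder values:

  # --executor-memory      -> spark.executor.memory  (memory per executor)
  # --total-executor-cores -> spark.cores.max         (cores for the whole job)
  # --executor-cores       -> spark.executor.cores    (cores per executor)
  spark-submit \
    --master mesos://mesos-master:5050 \
    --executor-memory 8g \
    --total-executor-cores 24 \
    --executor-cores 4 \
    my_job.py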
specifies per executor memory
requirement.
On Mon, Jan 30, 2017 at 11:34 AM, Michael Gummelt <mgumm...@mesosphere.io>
wrote:
>
>
> On Mon, Jan 30, 2017 at 9:47 AM, Ji Yan <ji...@drive.ai> wrote:
>
>> Tasks begin scheduling as soon as the first executor comes up
&
p-to-coming
>> spark on kubernetes), you have to specify the cores and memory of each
>> executor.
>>
>> It may not be supported in the future, because only Mesos has the
>> concept of offers, owing to its two-level scheduling model.
>>
>>
>> On Sat, Ja
Dear Spark Users,
Currently, is there a way to dynamically allocate resources to Spark on
Mesos? Within Spark we can specify the CPU cores and memory before running a
job. The way I understand it, the Spark job will not run if the CPU/memory
requirement is not met. This may lead to a decrease in overall
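Dynamic allocation is available on Mesos, with the caveat that the external
shuffle service has to be running on the agents; a sketch of the job-side
settings (values and file name are placeholders):

  # requires the external shuffle service to be running on each Mesos agent
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.minExecutors=1 \
    --conf spark.dynamicAllocation.maxExecutors=10 \
    my_job.py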
Dear Spark Users,
With the latest version of Spark and Mesos with GPU support, is there a way
to guarantee a Spark job a specified number of GPUs? Currently the Spark
job sets "spark.mesos.gpus.max" to ask for GPU resources; however, this is
an upper bound, which means that Spark will accept
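For reference, the setting in question is passed like any other conf and, as
described, acts as a ceiling rather than a reservation; a sketch (master
URL, value, and file name are placeholders):

  # upper bound on GPUs accepted from Mesos offers, not a guaranteed minimum
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.mesos.gpus.max=2 \
    my_job.py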
hen it is launched.
>
> Tim
>
>
> On Dec 30, 2016, at 1:23 PM, Ji Yan <ji...@drive.ai> wrote:
>
> Dear Spark Users,
>
> We are trying to launch Spark on Mesos from within a docker container. We
> have found that since the Spark executors need to talk back at
Dear Spark Users,
We are trying to launch Spark on Mesos from within a docker container. We
have found that since the Spark executors need to talk back to the Spark
driver, there is a need to do a lot of port mapping to make that happen. We
seem to have mapped the ports based on what we could find from
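One way to keep the number of mappings manageable is to pin the driver-side
ports to fixed values and advertise an externally reachable address; a
sketch, where the host name, port numbers, and file name are placeholders:

  # executors connect back to the driver on these ports, so fixing them
  # makes the docker port mappings predictable
  spark-submit \
    --master mesos://mesos-master:5050 \
    --conf spark.driver.host=driver-host.example.com \
    --conf spark.driver.port=40000 \
    --conf spark.blockManager.port=40001 \
    my_job.py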
Thanks Michael. Tim and I have touched base, and thankfully the issue has
already been resolved.
On Fri, Dec 30, 2016 at 9:20 AM, Michael Gummelt <mgumm...@mesosphere.io>
wrote:
> I've cc'd Tim and Kevin, who worked on GPU support.
>
> On Wed, Dec 28, 2016 at 11:22 AM, Ji Yan
Dear Spark Users,
Has anyone had successful experience running Spark on Mesos with GPU support?
We have a Mesos cluster that can see and offer NVIDIA GPU resources. With
Spark, it seems that the GPU support with Mesos
(https://github.com/apache/spark/pull/14644