[GitHub] spark pull request #21033: [SPARK-19320][MESOS][WIP]allow specifying a hard ...

2018-04-12 Thread yanji84
Github user yanji84 commented on a diff in the pull request:

https://github.com/apache/spark/pull/21033#discussion_r181231118
  
--- Diff: resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala ---
@@ -165,18 +165,47 @@ class MesosCoarseGrainedSchedulerBackendSuite extends SparkFunSuite
   }
 
 
-  test("mesos does not acquire more than spark.mesos.gpus.max") {
-    val maxGpus = 5
-    setBackend(Map("spark.mesos.gpus.max" -> maxGpus.toString))
+  test("mesos acquires spark.mesos.executor.gpus number of gpus per executor") {
+    setBackend(Map("spark.mesos.gpus.max" -> "5",
+                   "spark.mesos.executor.gpus" -> "2"))
 
     val executorMemory = backend.executorMemory(sc)
-    offerResources(List(Resources(executorMemory, 1, maxGpus + 1)))
+    offerResources(List(Resources(executorMemory, 1, 5)))
 
     val taskInfos = verifyTaskLaunched(driver, "o1")
     assert(taskInfos.length == 1)
 
     val gpus = backend.getResource(taskInfos.head.getResourcesList, "gpus")
-    assert(gpus == maxGpus)
+    assert(gpus == 2)
+  }
+
+
+  test("mesos declines offers that cannot satisfy spark.mesos.executor.gpus") {
+    setBackend(Map("spark.mesos.gpus.max" -> "5",
--- End diff --

Sounds good. Added the test.


---




[GitHub] spark pull request #21033: [SPARK-19320][MESOS][WIP]allow specifying a hard ...

2018-04-12 Thread tnachen
Github user tnachen commented on a diff in the pull request:

https://github.com/apache/spark/pull/21033#discussion_r181199842
  
--- Diff: resource-managers/mesos/src/test/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackendSuite.scala ---
@@ -165,18 +165,47 @@ class MesosCoarseGrainedSchedulerBackendSuite extends SparkFunSuite
   }
 
 
-  test("mesos does not acquire more than spark.mesos.gpus.max") {
-    val maxGpus = 5
-    setBackend(Map("spark.mesos.gpus.max" -> maxGpus.toString))
+  test("mesos acquires spark.mesos.executor.gpus number of gpus per executor") {
+    setBackend(Map("spark.mesos.gpus.max" -> "5",
+                   "spark.mesos.executor.gpus" -> "2"))
 
     val executorMemory = backend.executorMemory(sc)
-    offerResources(List(Resources(executorMemory, 1, maxGpus + 1)))
+    offerResources(List(Resources(executorMemory, 1, 5)))
 
     val taskInfos = verifyTaskLaunched(driver, "o1")
     assert(taskInfos.length == 1)
 
     val gpus = backend.getResource(taskInfos.head.getResourcesList, "gpus")
-    assert(gpus == maxGpus)
+    assert(gpus == 2)
+  }
+
+
+  test("mesos declines offers that cannot satisfy spark.mesos.executor.gpus") {
+    setBackend(Map("spark.mesos.gpus.max" -> "5",
--- End diff --

I think it's also worth testing the case where spark.mesos.gpus.max is set lower than spark.mesos.executor.gpus.
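
A minimal sketch of what that extra test could look like, reusing setBackend, offerResources and Resources from the diff above; verifyDeclinedOffer and createOfferId are assumed to match the decline-checking helpers this suite already uses elsewhere, and the values are illustrative:

  test("mesos declines offers when spark.mesos.gpus.max is below spark.mesos.executor.gpus") {
    setBackend(Map("spark.mesos.gpus.max" -> "1",
                   "spark.mesos.executor.gpus" -> "2"))

    val executorMemory = backend.executorMemory(sc)
    // The offer itself carries enough GPUs, but accepting it would exceed
    // spark.mesos.gpus.max, so the backend should decline and launch nothing.
    offerResources(List(Resources(executorMemory, 1, 2)))

    verifyDeclinedOffer(driver, createOfferId("o1"))
  }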


---




[GitHub] spark pull request #21033: [SPARK-19320][MESOS][WIP]allow specifying a hard ...

2018-04-10 Thread yanji84
GitHub user yanji84 opened a pull request:

https://github.com/apache/spark/pull/21033

[SPARK-19320][MESOS][WIP] Allow specifying a hard limit on the number of GPUs required by each Spark executor when running on Mesos

## What changes were proposed in this pull request?

Currently, Spark only allows specifying the application's overall GPU resources as an upper limit (spark.mesos.gpus.max). This change adds a new configuration parameter that sets a hard limit on the number of GPUs each executor must acquire, while still respecting the overall GPU resource constraint.
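
For illustration only, a minimal sketch of how the new setting might be used alongside the existing spark.mesos.gpus.max cap; the master URL, app name and numbers are placeholders, and spark.mesos.executor.gpus is the configuration key proposed (and still under review) in this change:

import org.apache.spark.{SparkConf, SparkContext}

object GpuHardLimitExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("mesos://zk://mesos-master:2181/mesos") // placeholder master URL
      .setAppName("gpu-hard-limit-example")
      // existing setting: upper bound on the total GPUs the application may acquire
      .set("spark.mesos.gpus.max", "8")
      // proposed setting: hard per-executor GPU requirement added by this change
      .set("spark.mesos.executor.gpus", "2")

    val sc = new SparkContext(conf)
    try {
      // run GPU-backed jobs here
    } finally {
      sc.stop()
    }
  }
}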

## How was this patch tested?

Unit Testing

Please review http://spark.apache.org/contributing.html before opening a 
pull request.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yanji84/spark ji/hard_limit_on_gpu

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/21033.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #21033


commit cec434a1eba6227814ba5a842ff8f41103217539
Author: Ji Yan 
Date:   2017-03-10T05:30:11Z

respect both gpu and maxgpu

commit c427e151dbf63815f25d20fe1b099a7b09e85f51
Author: Ji Yan 
Date:   2017-05-14T20:02:16Z

fix gpu offer

commit 1e61996c31ff3a01396738fd91adf69952fd3558
Author: Ji Yan 
Date:   2017-05-14T20:15:55Z

syntax fix

commit f24dbe17787acecd4c032e25d820cb59d8b6d491
Author: Ji Yan 
Date:   2017-05-15T00:30:50Z

pass all tests

commit f89e5ccae02667d4f55e7aeb1f805a9cfaee1558
Author: Ji Yan 
Date:   2018-04-10T18:37:14Z

remove redundant




---
