Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r122050475
--- Diff:
core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java ---
@@ -183,12 +185,38 @@ public void testChildProcLauncher() throws
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r122049171
--- Diff:
core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java ---
@@ -183,12 +185,38 @@ public void testChildProcLauncher() throws
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin I added a `waitFor` method to ChildSparkHandle, allowing it to wait
for the launched process or thread. Now the test waits for the thread to finish.
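The idea of waiting on either a launched child process or an in-process thread can be sketched roughly as below. This is an illustrative model only; the class and field names are hypothetical, not Spark's actual launcher code.

```java
// Hypothetical sketch: a handle that owns either a child process
// (process mode) or a thread (in-process mode), and blocks until
// whichever one it holds has terminated.
public class ChildAppHandle {
  private final Process childProc;   // non-null in process mode
  private final Thread childThread;  // non-null in thread mode

  public ChildAppHandle(Process childProc, Thread childThread) {
    this.childProc = childProc;
    this.childThread = childThread;
  }

  /** Waits for the launched process or thread to finish. */
  public void waitFor() throws InterruptedException {
    if (childProc != null) {
      childProc.waitFor();
    } else if (childThread != null) {
      childThread.join();
    }
  }
}
```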
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin I modified the test to sleep long enough for the SparkApp to run.
---
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r115573484
--- Diff:
core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java ---
@@ -183,6 +183,26 @@ public void testChildProcLauncher() throws
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin @tgravescs,
I removed the `envvars` argument from SparkApp, along with all the unwanted code that went with it.
It still provides autoShutdown in both thread mode and process mode for yarn cluster
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin, please review and suggest if you have any further changes.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin,
- I refactored unit tests
- Changed LAUNCHER_INTERNAL* variable access to package private
- Fixed some of the nits.
About passing sys.env, we want to capture set
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r107498853
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -201,6 +210,152 @@ class YarnClusterSuite
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r106080005
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/YarnClusterSuite.scala
---
@@ -252,20 +307,55 @@ class YarnClusterSuite
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@tgravescs, I added a few more tests to cover the `autoShutdown` option's impact on
launching it as a thread and as a process.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
The build failure seems unrelated.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin @tgravescs, I have updated the patch with the suggested changes.
- Removed the unused `connect` method from `LauncherBackend`.
- Documented the expected behavior for `SparkAppHandler
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin,
I have updated the patch to address the review comments. Sorry about the
confusion between props and environment variables; I have separated the
environment variables into their own argument in the SparkApp method. Also
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r99850931
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -17,38 +17,67 @@
package org.apache.spark.launcher
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r99849329
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -71,6 +100,9 @@ private[spark] abstract class LauncherBackend
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r99841742
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/AbstractSparkAppHandle.java ---
@@ -0,0 +1,132 @@
+/*
+ * Licensed to the Apache
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin @tgravescs, I have addressed the review comments. Please take another
look and let me know if any further changes are needed for this patch.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15627
@ueshin Sorry about this. The patch is available in
https://github.com/apache/spark/pull/15810.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/12203
@vanzin, @jerryshao
Sorry for breaking this functionality. I have a patch available with more
unit tests added, including a positive test case ensuring submission continues if
unique files
GitHub user kishorvpatil opened a pull request:
https://github.com/apache/spark/pull/15810
[SPARK-18357] Fix yarn files/archive broken issue and unit tests
## What changes were proposed in this pull request?
#15627 broke functionality with yarn --files and --archives; it does
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r86870416
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/ChildThreadAppHandle.java ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache
GitHub user kishorvpatil opened a pull request:
https://github.com/apache/spark/pull/15627
[SPARK-18099][YARN] Fail if same files added to distributed cache for
--files and --archives
## What changes were proposed in this pull request?
During spark-submit, if yarn dist
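The check described in the title can be sketched as a small standalone helper. This is a simplified illustration; the real change lives in the YARN `Client`, and the class and method names here are hypothetical.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistCacheCheck {
  // Simplified sketch: reject a submission when the same resource is
  // listed under both --files and --archives, since both would map to
  // the same distributed-cache entry.
  public static void checkNoDuplicates(List<String> files, List<String> archives) {
    Set<String> seen = new HashSet<>(files);
    for (String archive : archives) {
      if (seen.contains(archive)) {
        throw new IllegalArgumentException(
          "Same path added to --files and --archives: " + archive);
      }
    }
  }
}
```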
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@tgravescs @vanzin, thanks for all the comments.
I added the `SparkApp` trait, more documentation, and minor code fixes based on
your suggestions.
Please review and let me know if I need any
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r84178923
--- Diff:
launcher/src/main/java/org/apache/spark/launcher/SparkSubmitRunner.java ---
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r83744591
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -760,7 +787,7 @@ private[spark] class Client(
.foreach { case
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15009
@vanzin @tgravescs, Sorry for the delay. I have made changes to isolate
System properties while launching multiple applications in thread mode. I also
tested this by running multiple apps one
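Stripped down, the isolation idea is: instead of mutating the JVM-wide `System` properties, each in-process launch works on its own snapshot. A hypothetical sketch, not the actual patch:

```java
import java.util.Map;
import java.util.Properties;

public class IsolatedProps {
  // Hypothetical sketch: copy the JVM-wide properties, layer the
  // app-specific overrides on top, and hand the copy to the launched
  // app so concurrent in-process apps cannot clobber each other.
  public static Properties snapshot(Map<String, String> overrides) {
    Properties copy = new Properties();
    copy.putAll(System.getProperties());
    copy.putAll(overrides);
    return copy;
  }
}
```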
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/15069
@tgravescs fixed the test name
---
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15069#discussion_r78802398
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocator.scala ---
@@ -506,36 +505,41 @@ private[yarn] class YarnAllocator
GitHub user kishorvpatil opened a pull request:
https://github.com/apache/spark/pull/15069
[SPARK-17511] Yarn Dynamic Allocation: Avoid marking released container as
Failed
## What changes were proposed in this pull request?
Due to race conditions, the `assert
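The fix can be pictured with a small standalone model (hypothetical names; the real change sits in `YarnAllocator`'s completed-container handling):

```java
import java.util.HashSet;
import java.util.Set;

public class CompletedContainerModel {
  // Hypothetical model: containers we released on purpose are remembered,
  // so when their completion event arrives later (possibly with a nonzero
  // exit status), they are not counted as executor failures.
  private final Set<String> releasedContainers = new HashSet<>();
  private int failedExecutors = 0;

  public void release(String containerId) {
    releasedContainers.add(containerId);
  }

  public void onContainerCompleted(String containerId, int exitStatus) {
    if (releasedContainers.remove(containerId)) {
      return;  // released by us on purpose: not a failure
    }
    if (exitStatus != 0) {
      failedExecutors++;
    }
  }

  public int failedExecutorCount() {
    return failedExecutors;
  }
}
```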
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r78066827
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -29,26 +31,63 @@ import org.apache.spark.util.{ThreadUtils, Utils
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/15009#discussion_r78050727
--- Diff:
core/src/main/scala/org/apache/spark/launcher/LauncherBackend.scala ---
@@ -71,6 +110,9 @@ private[spark] abstract class LauncherBackend
GitHub user kishorvpatil opened a pull request:
https://github.com/apache/spark/pull/15009
[SPARK-17443] Stop Spark Application if launcher goes down and use
reflection
## What changes were proposed in this pull request?
1. `SparkLauncher` provides `stopIfInterrupted
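The "stop the application if the launcher goes down" behavior can be sketched as a watchdog. This is an illustrative model under assumed names, not the PR's implementation (which relies on the launcher's socket connection as the liveness signal):

```java
import java.util.function.BooleanSupplier;

public class LauncherWatchdog {
  // Illustrative sketch: a daemon thread polls a liveness check and runs
  // a stop callback once the launcher is no longer reachable.
  public static Thread watch(BooleanSupplier launcherAlive, Runnable stopApp) {
    Thread t = new Thread(() -> {
      while (launcherAlive.getAsBoolean()) {
        try {
          Thread.sleep(50);
        } catch (InterruptedException e) {
          return;  // interrupted: stop watching
        }
      }
      stopApp.run();  // launcher gone: shut the application down
    });
    t.setDaemon(true);
    t.start();
    return t;
  }
}
```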
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@nblintao , I am not sure why we had the check for the existence of logs,
especially when the reference to the logs is a static link for `stdout` and `stderr`.
Unless there is a scenario where
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@ajbozarth Thank you for the review. I removed the unused methods from the
javascript.
I agree that converting pages to `DataTables` is tricky, and listing some
standard way might be a huge
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@tgravescs, I removed the unwanted code and comments from the javascript.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@tgravescs , moved `formatBytes` to `utils.js`.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@ajbozarth , I fixed the code for standalone mode as well. Sorry for not
being fully aware of possible UI invocations. I will keep this in mind for
future patches.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@tgravescs, fixed documentation.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@tgravescs, I think I have addressed the comments mentioned above.
---
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13670#discussion_r70686785
--- Diff: docs/monitoring.md ---
@@ -288,7 +288,11 @@ where `[base-app-id]` is the YARN application ID.
/applications/[app-id
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13670#discussion_r70560042
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage-template.html
---
@@ -0,0 +1,102 @@
+
+
+
Github user kishorvpatil commented on a diff in the pull request:
https://github.com/apache/spark/pull/13670#discussion_r70560022
--- Diff:
core/src/main/resources/org/apache/spark/ui/static/executorspage.js ---
@@ -0,0 +1,410 @@
+/*
+ * Licensed to the Apache Software
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@ajbozarth, I have fixed the Safari exception. It happens because
`parser.baseURI` is `null`; I switched to `document.baseURI`. It looks fine in
testing. Let me know if you still see
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@ajbozarth, I do not see how I could produce the exception you see. Please
help me reproduce this so that I can address it.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@zhuoliu , I have removed the unwanted, unused javascript functions from
executorspage.js. There are no duplicate functions now between historyserver.js
and this script.
---
Github user kishorvpatil commented on the issue:
https://github.com/apache/spark/pull/13670
@ajbozarth , Thank you for looking into this patch. I have added a
screenshot with annotations. I hope that helps.
---
GitHub user kishorvpatil opened a pull request:
https://github.com/apache/spark/pull/13670
[SPARK-15951] Change Executors Page to use datatables to support sorting
columns and searching
1. Create the executorspage-template.html for displaying application
information in datatables