Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/22752
@vanzin can you check the updated changes? Thanks
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22752#discussion_r226429600
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/config.scala
---
@@ -64,4 +64,11 @@ private[spark] object config
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22752#discussion_r226416951
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/config.scala
---
@@ -64,4 +64,11 @@ private[spark] object config
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22752#discussion_r226014282
--- Diff: core/src/main/scala/org/apache/spark/deploy/history/config.scala
---
@@ -64,4 +64,11 @@ private[spark] object config
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22752#discussion_r226014153
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -449,7 +450,7 @@ private[history] class
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22752#discussion_r226013841
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -449,7 +450,7 @@ private[history] class
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/22752
[SPARK-24787][CORE] Revert hsync in EventLoggingListener and make
FsHistoryProvider to read lastBlockBeingWritten data for logs
## What changes were proposed in this pull request
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22623#discussion_r223504756
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -74,20 +74,28 @@ trait TestPrematureExit {
@volatile
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22623#discussion_r223504565
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -74,20 +74,27 @@ trait TestPrematureExit {
@volatile
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22623#discussion_r223453234
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -74,20 +74,26 @@ trait TestPrematureExit {
@volatile
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/22623
Yes, you are right, `exitedCleanly` is `false` even when the expected
exception is thrown.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/22623
This change avoids the expected exception being thrown from the
thread and getting printed in the test log
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22623#discussion_r223248407
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkSubmitSuite.scala ---
@@ -74,20 +74,26 @@ trait TestPrematureExit {
@volatile
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21996
@felixcheung can you check this PR? Please let me know if there is anything
that needs to be updated.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/22625
Thanks @felixcheung for looking into this.
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/22625#discussion_r222786641
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -98,6 +98,8 @@ class
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/22623
Thanks @srowen for looking into this.
> From `ThreadUtils.scala`:
```
case NonFatal(t) if !t.isInstanceOf[TimeoutException] =>
  throw new SparkExc
```
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/22625
[SPARK-25637][CORE] SparkException: Could not find CoarseGrainedScheduler
occurs during the application stop
## What changes were proposed in this pull request
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/22623
[SPARK-25636][CORE] spark-submit swallows the failure reason when there
## What changes were proposed in this pull request?
The cause of the error is wrapped with SparkException; now finding
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/21996#discussion_r207763630
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -98,17 +98,24 @@ private[spark] class SparkSubmit extends Logging
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/21996#discussion_r207702588
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -98,17 +98,24 @@ private[spark] class SparkSubmit extends Logging
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21996
> I'm not sure how the PR title is related to the change here?
From a user's perspective, when they don't see any output for status/kill
commands, they would probably ass
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/21996
[SPARK-24888][CORE] spark-submit --master spark://host:port --status
driver-id does not work
## What changes were proposed in this pull request?
In `SparkSubmit.scala` (`val
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/21979
[SPARK-25009][CORE]Standalone Cluster mode application submit is not working
## What changes were proposed in this pull request?
It seems 'doRunMain()' has been removed accidentally
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21202
@srowen this issue still exists and the PR needs to be merged.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21202
> This patch fails due to an unknown error code, -9.
I don't see any failures related to the change; can you trigger the test
ag
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/21202
[SPARK-24129] [K8S] Add option to pass --build-arg's to docker-image-tool.sh
## What changes were proposed in this pull request?
Adding `-b arg` option to take `--build-arg
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21088
For standalone cluster (`DriverRunner.scala:182`) there is a driver ID which
we can use here, and for Mesos cluster (`MesosRestServer.scala:107`) there is
a submission ID available
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21088
Thanks @jerryshao and @jiangxb1987 for looking into this.
I have updated it with K8s support and addressed the review comment; please
have a look
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/21071
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/21088
[SPARK-24003][CORE] Add support to provide spark.executor.extraJavaOptions
in terms of App Id and/or Executor Id's
## What changes were proposed in this pull request?
Added support
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21071
@gatorsmile we need to have this for K8S as well, will include it in SPIP.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21071
Thanks @rxin and @markhamstra for your comments, I will come up with SPIP
design draft and start the discussion
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/21071
Thanks @steveloughran and @rdblue for looking into this.
> this turns HTrace on always; do you think it should be optional
It operates on `NullScope`, which doesn't do anything
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/21071
[SPARK-21962][CORE] Distributed Tracing in Spark
## What changes were proposed in this pull request?
This PR integrates with HTrace, it sends traces for the application and
tasks
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/20754
@felixcheung Thanks for looking into this. This error doesn't seem to be
related to the PR, can you trigger the test again?
`This patch fails due to an unknown error code, -9
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/20754
[SPARK-23287][MESOS] Spark scheduler does not remove initial executor if
not one job submitted
## What changes were proposed in this pull request
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19616
@vanzin Thanks for the review, can you have a look into the updated PR?
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168364882
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -51,33 +52,16 @@ import
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168364718
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1104,14 +1117,39 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168364491
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1104,14 +1117,39 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168364257
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -784,6 +794,9 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168363855
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1104,14 +1117,39 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168363726
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1104,14 +1117,39 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168363672
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -656,7 +664,9 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19616#discussion_r168363561
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -69,6 +70,10 @@ private[spark] class Client
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19616
@vanzin Thanks for looking into this.
I wanted to verify some scenarios before removing WIP; feedback is welcome
anytime. Now I see there are some code conflicts, I will resolve
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19741#discussion_r151541900
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -268,8 +268,13 @@ private
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19741#discussion_r151533418
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -268,8 +268,13 @@ private
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19741#discussion_r151497034
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -268,8 +268,13 @@ private
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19741#discussion_r151303638
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -268,8 +268,13 @@ private
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19741
Thanks @jerryshao for looking into this.
> From my understanding, the above exception seems no harm to the Spark
application, just running into some threading corner case during s
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19749
[SPARK-22519][YARN] Remove unnecessary stagingDirPath null check in
ApplicationMaster.cleanupStagingDir()
## What changes were proposed in this pull request?
Removed the unnecessary
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19741
[SPARK-14228][CORE][YARN] Lost executor of RPC disassociated, and occurs
exception: Could not find CoarseGrainedScheduler or it has been stopped
## What changes were proposed in this pull
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19396
@jiangxb1987 Thanks for the comment; I made the change to throw an
exception and exit the worker.
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19616
[SPARK-22404][YARN][WIP] Provide an option to use unmanaged AM in
yarn-client mode
## What changes were proposed in this pull request?
Providing a new configuration "spark.ya
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/16801
I will identify a better solution to fix this issue and create a new PR;
closing this one.
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/16801
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19396
@jerryshao Please let me know if the above comment doesn't convince you;
I can change the PR to make the Worker go down on external shuffle service
start failure
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19385
Thanks @vanzin for looking into this.
> This only solves half the problem, right? What about cluster mode?
Yes, it solves the Mesos/Client mode. For Mesos/Cluster mode, I th
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19396
Thanks @jerryshao for the comment.
> IMO I think it might be better to throw an exception instead of not
starting shuffle service. Since user want to use external shuffle explici
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19396
[SPARK-22172][CORE] Worker hangs when the external shuffle service port is
already in use
## What changes were proposed in this pull request?
Handling the NonFatal exceptions while
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19385
[SPARK-11034] [LAUNCHER] [MESOS] Launcher: add support for monitoring Mesos
apps
## What changes were proposed in this pull request?
Added Launcher support for monitoring Mesos
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13143
@ArtRand I think this is still an issue and the PR needs to be merged; do you
have any observations on this PR?
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19141#discussion_r138970023
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -565,7 +565,6 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19141#discussion_r138402807
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -565,7 +565,6 @@ private[spark] class Client
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/19141#discussion_r138219530
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -565,7 +565,6 @@ private[spark] class Client
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/19141
Thanks @jerryshao for looking into this PR.
> Can you please describe your usage scenario and steps to reproduce your
issue, from my understanding. Did you configure your default
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/19141
[SPARK-21384] [YARN] Spark 2.2 + YARN without spark.yarn.jars /
spark.yarn.archive fails
## What changes were proposed in this pull request?
When the libraries temp directory(i.e
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18708
@vanzin I have updated the changes; can you check and validate them?
Thanks
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18708
Thanks @vanzin for checking this.
> Then shouldn't the fix be in the code that transforms the URI list into an
> argument for `-classpath`?
> I'm pretty sure the code you're
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18708
Thanks @HyukjinKwon for checking this and for the link.
> Are you saying "file:///C:/Users//.ivy2/jars/.jar" is not the correct
form of URI on Windows?
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18708
[SPARK-21339] [CORE] spark-shell --packages option does not add jars to
classpath on windows
## What changes were proposed in this pull request?
The --packages option jars are getting
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18357
@zsxwing Tests have passed, can you check this? Thanks
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/18357#discussion_r124945536
--- Diff:
core/src/main/scala/org/apache/spark/util/SparkUncaughtExceptionHandler.scala
---
@@ -26,27 +26,34 @@ import
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/18357#discussion_r124944716
--- Diff:
core/src/main/scala/org/apache/spark/util/SparkUncaughtExceptionHandler.scala
---
@@ -26,27 +26,34 @@ import
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/18358
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18384
@srowen, I am not able to reproduce this failure in my local env; this test
seems unrelated to the change and it also passed in the previous run. Can
you trigger the test once? Thanks
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18384
@srowen I removed spaces around imports, please check now.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18384
Thanks @srowen for taking a look at this.
There are two other classes where `addSuppressed` is being used:
https://github.com/apache/spark/blob/master/core/src/main/scala
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18384
[SPARK-21170] [CORE] Utils.tryWithSafeFinallyAndFailureCallbacks throws
IllegalArgumentException: Self-suppression not permitted
## What changes were proposed in this pull request
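For context, the "Self-suppression not permitted" error in the title comes from a JVM rule: a `Throwable` may not suppress itself. A minimal standalone Java sketch of that behavior (illustration only, not the Spark patch):

```java
// Demonstrates the JVM rule behind "Self-suppression not permitted":
// Throwable.addSuppressed(t) rejects t == this with IllegalArgumentException.
public class SelfSuppressionDemo {
    public static void main(String[] args) {
        Throwable t = new RuntimeException("task failed");
        try {
            t.addSuppressed(t); // a throwable may not suppress itself
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // "Self-suppression not permitted"
        }
    }
}
```

This is why a finally block that catches a failure and attaches the very same exception instance as suppressed blows up with `IllegalArgumentException` instead of propagating the original error.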
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18357
Can we change `SparkUncaughtExceptionHandler` to a class and create an
instance in each daemon's `main()`, with a constructor flag for whether to
kill the process on exceptions or not? Something like
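A hypothetical Java sketch of that idea; the class name, flag name, and exit code here are assumptions for illustration, not the actual Spark change:

```java
// Hypothetical sketch: an uncaught-exception handler installed from a
// daemon's main(), with a constructor flag controlling whether an
// uncaught exception kills the whole process.
public class DaemonExceptionHandler implements Thread.UncaughtExceptionHandler {
    private final boolean exitOnUncaughtException; // assumed flag name

    public DaemonExceptionHandler(boolean exitOnUncaughtException) {
        this.exitOnUncaughtException = exitOnUncaughtException;
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        System.err.println("Uncaught exception in " + t.getName() + ": " + e);
        if (exitOnUncaughtException) {
            System.exit(50); // illustrative; Spark uses a dedicated exit code
        }
    }

    public static void main(String[] args) throws Exception {
        // A Worker would pass true; a process that should survive passes false.
        Thread.setDefaultUncaughtExceptionHandler(new DaemonExceptionHandler(false));
        Thread worker = new Thread(() -> { throw new RuntimeException("boom"); });
        worker.start();
        worker.join();
        System.out.println("process still alive");
    }
}
```

With the flag set to `true`, the same unhandled exception would terminate the process instead of silently killing only the one thread.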
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18357
@zsxwing Thanks for looking into this, I see SparkUncaughtExceptionHandler
exits the process for all the exceptions/errors.
Can we modify SparkUncaughtExceptionHandler in such a way
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18357
> But can you double check that this change won't terminate a
still-functional worker?
The change doesn't impact/harm the fully functional Worker,
SparkUncaughtExceptionHand
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18358
I thought we could have a discussion about process-level handling if needed.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/18357
@jiangxb1987 Thanks for looking at this.
If any one of the threads in the Worker gets an unhandled exception, then
that thread gets terminated while the process (Worker) keeps running
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18358
[SPARK-21148] [CORE] Set SparkUncaughtExceptionHandler to the Master
## What changes were proposed in this pull request?
Adding the default UncaughtExceptionHandler to the Master
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/18357
[SPARK-21146] [CORE] Worker should handle and shutdown when any thread gets
UncaughtException
## What changes were proposed in this pull request?
Adding the default
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/16801
@jiangxb1987, please find the difference.
Current behaviour of the Spark Jobs page for a running application and the
History page:
400/400 (17 killed)
Behaviour
Github user devaraj-kavali commented on a diff in the pull request:
https://github.com/apache/spark/pull/16801#discussion_r122792258
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -234,7 +234,6 @@ class JobProgressListener(conf: SparkConf
Github user devaraj-kavali closed the pull request at:
https://github.com/apache/spark/pull/16705
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/16705
Thanks @tgravescs for checking this; it seems the PR has gone stale, I will
update it.
GitHub user devaraj-kavali opened a pull request:
https://github.com/apache/spark/pull/17726
[SPARK-17928] [Mesos] No driver.memoryOverhead setting for mesos cluster
mode
## What changes were proposed in this pull request?
Added a new configuration
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
Killed drivers show the same value as successfully completed drivers in the
Web UI; users would get surprised when they suddenly don't see the driver,
which
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
Thanks @mgummelt for looking into this.
I think we should have some mechanism to show the KILLED drivers instead of
making them disappear on the kill command. Here the finished set
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13143
Thanks @mgummelt for the feedback, will update the PR with the function
rewrite.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13326
@mgummelt /@tnachen, can you have a look into this?
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13143
@mgummelt /@tnachen, can you have a look into this?
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13072
Thanks @mgummelt for the confirmation. It throws a SparkException due to
the bug SPARK-15359 / https://github.com/apache/spark/pull/13143.
Github user devaraj-kavali commented on the issue:
https://github.com/apache/spark/pull/13077
@srowen I think it is still needed; @tnachen mentioned that it will be
closed as "it's no longer being updated" if the conflicts cannot be resolved.
I have updated the PR