Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/19968
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19968
I don't work on this problem now, so I am closing it.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/20865
[SPARK-23542] The exists action should be further optimized in the logical plan
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
@gatorsmile thanks.
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
@cloud-fan @srowen @jiangxb1987 I have changed the code and title;
please help me review. Thanks
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
@srowen I will re-describe the problem. I have a small table `ls` with one
row, and a big table `catalog_sales` with one hundred billion rows. In the
big table, the non-null value about
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
@SparkQA I think this error is not caused by my patch. Please ok to test.
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/20670
@srowen @wangyum please help me review, thanks.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/20670
add constraints
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
I run a sql: `select ls.cs_order_number from ls left semi join
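The query above is cut off in the archive; below is a hedged sketch of the kind of left semi join it appears to describe. Here `spark` is assumed to be an existing SparkSession, and the join condition is an assumption — only the projected column and the LEFT SEMI JOIN are visible in the comment.

```scala
// Hypothetical reproduction: ls is the one-row table, catalog_sales the
// very large one, per the description in the surrounding comments.
val result = spark.sql(
  """SELECT ls.cs_order_number
    |FROM ls
    |LEFT SEMI JOIN catalog_sales cs
    |  ON ls.cs_order_number = cs.cs_order_number
    |""".stripMargin)
```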
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19968
@srowen ok, I will update, thanks
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/19968#discussion_r159167935
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/Dispatcher.scala
---
@@ -100,6 +102,7 @@ private[netty] class Dispatcher(nettyEnv
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19951
@vanzin yeah, it is difficult to consider all the races. So I continued to
analyze the source code, and I think my other way solves the problem better
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19951
I have another way to fix this problem:
![change
coarsegrainedschedulerbackend](https://user-images.githubusercontent.com/9440626/33971859-9705974c-e0b5-11e7-95dd-499ff132e330.png)
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19951
What I mean is: `CoarseGrainedSchedulerBackend.stopExecutors()` is called, then
some executors exit. The driver does not need to treat these executors as
disconnected and send messages, otherwise
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19951
@tgravescs the job has ended, then the error log appears. I think this error
log creates the illusion that the task failed.
@vanzin Your analysis is right. And I think my
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19968
@srowen I closed https://github.com/apache/spark/pull/19965, and updated the
description
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/19965
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/19968
[SPARK-22770][CORE] When the driver is stopping, there is an error: Could not
find CoarseGrainedScheduler
## What changes were proposed in this pull request?
When the driver is stopping
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19967
ok
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/19967
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/19967
[SPARK-22770][CORE] When the driver is stopping, there is an error: Could not
find CoarseGrainedScheduler
## What changes were proposed in this pull request?
When the driver is stopping
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/19965
[SPARK-22769][CORE] When the driver is stopping, there is an error: RpcEnv already
stopped
## What changes were proposed in this pull request?
When the driver is stopping, there is an error
Github user KaiXinXiaoLei commented on the issue:
https://github.com/apache/spark/pull/19951
@devaraj-kavali @vanzin, using https://github.com/apache/spark/pull/19741,
I still see the problem "Could not find CoarseGrainedScheduler". I changed the
code, please revi
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/19945
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/19951
[SPARK-22760][CORE][YARN] When sc.stop() is called, set stopped to true
before removing executors
## What changes were proposed in this pull request?
When the number of executors
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/19945
[SPARK-14228][CORE][YARN] Lost executor of RPC disassociated, and an exception
occurs: Could not find CoarseGrainedScheduler or it has been stopped
## What changes were proposed in this pull
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/10900#issuecomment-219212923
@andrewor14 See https://github.com/apache/spark/pull/13115,
---
If your project is set up for it, you can reply to this email and have your
reply appear
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/10900
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/13115
[SPARK-12492] Using the spark-sql command to run a query, write the event of
SparkListenerJobStart
See https://github.com/apache/spark/pull/10900
## What changes were proposed in this pull
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/12059#issuecomment-211823757
@ajbozarth please help me check, thanks
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/10900#issuecomment-208319014
@zsxwing I changed it. Thanks.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/12059#issuecomment-203345585
![Uploading it's ok using this MR.png…]()
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/10900#issuecomment-203328686
Ok, I will change. Thanks.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/12059
[SPARK-14265] Get the attemptId of a stage and transfer it to the web UI
## What changes were proposed in this pull request?
if a stage fails and attempts to start again, the DAG visualization does
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/11935
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/11935
[SPARK-13102] There is a problem about clicking "+detail" in SQLPage
using IE 11
Run a query using the ThriftServer and open the web UI in IE 11; I click "+detail"
in SQLPage, but no
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/10900#issuecomment-200819923
@JoshRosen Can you check? Thanks.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/10900
[SPARK-12492] Using the spark-sql command to run a query, write the event of
SparkListenerJobStart
You can merge this pull request into a Git repository by running:
$ git pull https
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/9911#discussion_r48445836
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientInterface.scala
---
@@ -189,4 +191,6 @@ private[hive] trait ClientInterface
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9911#issuecomment-167294826
@liancheng I used your code and built it. Then running "sbin/start-thriftserver.sh"
failed. The error info is as follows:
15/12/26 16:28:27 INFO ClientWrapp
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/10157
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/10157#issuecomment-163186959
ok. thanks.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/10157
[SPARK-12156] Make SPARK_EXECUTOR_INSTANCES become effective
I set SPARK_EXECUTOR_INSTANCES=3, but only two executors start. That is,
SPARK_EXECUTOR_INSTANCES does not take effect.
You can merge
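For context, `SPARK_EXECUTOR_INSTANCES` is the environment-variable counterpart of the `spark.executor.instances` configuration key on YARN. A minimal sketch of requesting three executors through the conf instead (a sketch of the equivalent setting, not the PR's fix; the app name is made up):

```scala
import org.apache.spark.SparkConf

// Requesting three executors via the conf key rather than the
// SPARK_EXECUTOR_INSTANCES environment variable.
val conf = new SparkConf()
  .setAppName("executor-instances-demo")
  .set("spark.executor.instances", "3")
```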
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8848#issuecomment-156342123
ok thanks.
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/8848
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9268#issuecomment-151044681
@vanzin But I run:
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8737#issuecomment-151049720
@jerryshao Ok, Thanks.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9268#issuecomment-151341056
ok. Can you tell me the URL of the PR? Thanks.
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/9268
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/9268
[SPARK-11298] When the driver sends the message "GetExecutorLossReason" to the AM,
the SparkContext may be stopped
I got the latest code from GitHub, and just ran "bin/spark-shell --mast
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/9129
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8737#issuecomment-150733760
@andrewor14 I am sorry to reply so late. I just tested the latest code; the
problem still exists. So I will continue tracking this problem. Thanks.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9129#issuecomment-148273400
The Jenkins failure is not caused by my code. Please retest. Thanks.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/9129
[YARN] When the driver sends the message "GetExecutorLossReason" to the AM, there
should be a return value
I got the latest code from GitHub, and just ran "bin/spark-shell --mast
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9129#issuecomment-148287449
Jenkins, retest this please.
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/8947
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8945#discussion_r41947268
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -369,6 +369,38 @@ class
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8945#discussion_r41752408
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -369,6 +369,38 @@ class
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8945#discussion_r41820348
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -369,6 +369,38 @@ class
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8945#discussion_r41692752
--- Diff:
core/src/test/scala/org/apache/spark/deploy/StandaloneDynamicAllocationSuite.scala
---
@@ -369,6 +369,38 @@ class
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/9026#issuecomment-147030150
LGTM.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8945#issuecomment-146446834
jenkins test please.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8737#issuecomment-146516735
@srowen This problem is not the same as
https://github.com/apache/spark/pull/8945. In this MR, while tasks are running,
the AM fails and is restarted
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-146389770
see https://github.com/apache/spark/pull/8945
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/8668
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8945
[SPARK-10515] When killing executor, the pending replacement executors
should not be lost
If the heartbeat receiver kills executors (and new ones are not registered
to replace them
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8947
[SPARK-9776] Another instance of Derby may have already booted the database
In a secure cluster, using yarn-client mode, I just ran
"bin/spark-shell --master yarn-client". And
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-144338392
@andrewor14 I changed the code according to your suggestion. See:
https://github.com/apache/spark/pull/8945
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-142507056
@vanzin I added a unit test for this problem. Thanks.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-142201407
@vanzin Yes, ```addExecutors``` will often send messages to the AM, but the
total number of executors in ```spark-dynamic-executor-allocation``` will stay
the same
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8868
Idle4
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/KaiXinXiaoLei/spark idle4
Alternatively you can review and apply these changes
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/8868
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8668#discussion_r40096873
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -236,6 +246,12 @@ class
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-142322746
@vanzin I think the code I changed can resolve the problem in
[SPARK-10515]. Thanks
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8848
[SPARK-10726] When a task is starting, the executor should be in the busy state
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/KaiXinXiaoLei/spark
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8815
[SPARK-10515] The total number of executors in the driver should be the same
as in the AM
see https://github.com/apache/spark/pull/8668
You can merge this pull request into a Git repository
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-141432796
@vanzin I think this way of resolving the problem is not better. In
spark-dynamic-executor-allocation, the total number of executors should be
consistent with the AM. So I
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/8815
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-141007880
@vanzin I try this way, thanks.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-139985164
Using dynamic-executor-allocation, the number of executors needed by the driver
should be calculated according to the number of tasks. For example, during
running
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8737#issuecomment-139998027
Run a long job whose stages have many tasks. While tasks are running, the AM
fails. Then a new AM restarts. In ExecutorAllocationManager, because there
are many
Github user KaiXinXiaoLei commented on a diff in the pull request:
https://github.com/apache/spark/pull/8737#discussion_r39370475
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -91,6 +91,7 @@ private[spark] abstract class
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8737
[SPARK-10582] using dynamic-executor-allocation, if a new AM restarts,
executors should be registered.
You can merge this pull request into a Git repository by running:
$ git pull
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-139738374
@vanzin For example, executorsPendingToRemove=Set(1), and executor 2 hits its
idle timeout before a new executor is asked to replace executor 1. Then the driver
kill
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-139741805
@andrewor14 When the number of executors requested is lower, the driver will
send a message to the AM to change it in ExecutorAllocationManager.
![image](https
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-139166283
If the heartbeat receiver kills executors (and new ones are not
registered to replace them), the idle timeout for the old executors will be
lost
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/8668
[SPARK-10515] When killing an executor, there is no need to send
RequestExecutors to the AM
When killing an executor, the driver will send RequestExecutors to the AM. But in
ExecutorAllocationManager
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/8668#issuecomment-139096059
@andrewor14 Now I have a problem. For example, there are three executors.
After some minutes, these executors will be removed because of no recent
heartbeats
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7716#issuecomment-129283885
@andrewor14 I used the code before SPARK-8119. Now I used the latest code,
tested again, and did not find this problem. Now I am closing the PR. Thanks.
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/7716
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/7984
[CORE] Remove space in function for scala style
Remove space, change appAttemptId : Option[String] to appAttemptId:
Option[String]
You can merge this pull request into a Git repository
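The style fix above is small enough to show inline; a sketch of the kind of signature affected (the enclosing method name and body are hypothetical, only the parameter is taken from the description):

```scala
// Before: a space before the colon, which trips the scalastyle whitespace check.
// def attempt(appAttemptId : Option[String]): Unit = ()

// After: no space before the colon.
def attempt(appAttemptId: Option[String]): Unit = ()
```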
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/7984
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7716#issuecomment-127492003
I think it's not the same. There may be the same executorId in
knownExecutors and executorsPendingToRemove.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7559#issuecomment-127095058
ok, I will close this and find a better way
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7716#issuecomment-127094874
I ran it on the latest version.
Github user KaiXinXiaoLei closed the pull request at:
https://github.com/apache/spark/pull/7559
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7716#issuecomment-125477598
Jenkins, retest this please.
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7716#issuecomment-125477581
I think this patch is not related to the failed unit tests; please
retest.
GitHub user KaiXinXiaoLei opened a pull request:
https://github.com/apache/spark/pull/7716
[SPARK-9375] Make sure the total number of executor(s) requested by the
driver is not negative
In the code:
if (!replace) {
doRequestTotalExecutors(numExistingExecutors
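The quoted fragment is cut off; below is a minimal sketch of the non-negative clamp the PR title calls for. The variable names are taken from the fragment, but the helper itself is an assumption for illustration, not the actual Spark source.

```scala
// Clamp the requested total so the driver never asks the cluster manager
// for a negative number of executors.
def safeRequestedTotal(numExistingExecutors: Int, pendingToRemove: Int): Int =
  math.max(0, numExistingExecutors - pendingToRemove)

// e.g. 2 existing executors with 5 pending removal yields 0, not -3.
```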
Github user KaiXinXiaoLei commented on the pull request:
https://github.com/apache/spark/pull/7559#issuecomment-125112620
@srowen In my picture, I just want to say that the type of the executor id is
numeric or character, e.g. the executor id 5 and the driver in the picture.
Now, do you