Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17625
@jsoltren thanks for bringing up this very old PR.
Looking at the UI you pasted here, I'm wondering what the purpose of
`Completed Stages` is here, and what the difference is compared to
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17586#discussion_r111312845
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/classification/LinearSVC.scala ---
@@ -287,6 +290,27 @@ class LinearSVCModel private[classification] (
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17586#discussion_r111313220
--- Diff: python/pyspark/ml/classification.py ---
@@ -172,6 +172,59 @@ def intercept(self):
"""
return self._call_java("intercept")
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17628
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17626
`udf(x, y) = 1` is deterministic no matter whether x or y is deterministic,
because x and y are not used; in other words, they don't affect the
result of the udf.
The result of
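The point above can be sketched outside Spark (a plain Java sketch with illustrative names, not Spark's actual `ScalaUDF` machinery): a function whose body never reads its arguments returns the same value no matter how nondeterministic its inputs are.

```java
import java.util.Random;
import java.util.function.BiFunction;

public class ConstantUdf {
    public static void main(String[] args) {
        // udf(x, y) = 1: x and y are never read, so the result
        // cannot depend on them, deterministic or not.
        BiFunction<Integer, Integer, Integer> udf = (x, y) -> 1;

        Random rand = new Random(); // nondeterministic inputs
        for (int i = 0; i < 5; i++) {
            int result = udf.apply(rand.nextInt(), rand.nextInt());
            System.out.println(result); // always 1
        }
    }
}
```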
GitHub user ouyangxiaochen opened a pull request:
https://github.com/apache/spark/pull/17628
[SPARK-20316][SQL] val and var should strictly follow the Scala syntax
What changes were proposed in this pull request?
val and var should strictly follow the Scala syntax
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17627
**[Test build #75757 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75757/testReport)**
for PR 17627 at commit
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17608
Thanks for the follow-up @guoxiaolongzte and @HyukjinKwon, I'll take another
look at work tomorrow. Also, for clarification, the reason I'm being extra
detailed about this is that I want to make
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17627
cc @cloud-fan @sameeragarwal
---
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17627
[SPARK-19924] [SQL] [BACKPORT-2.1] Handle InvocationTargetException for all
Hive Shim
### What changes were proposed in this pull request?
This is to backport the PR
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17626
The determinism of our [Scala udaf has exactly the same
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17626
A simple example, `udf(x, y) = 1` is deterministic no matter whether x or y
is deterministic or not.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r111311174
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,16 @@ object SQLConf {
.booleanConf
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17626
udf(x, y) = x + y looks like a deterministic UDF function. Is this
udf(rand(), rand()) deterministic?
---
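The distinction viirya raises can be illustrated in plain Java (a hedged sketch with illustrative names, not Spark's `rand()`): the function itself is deterministic — same inputs always give the same output — but once its inputs are random, the whole expression can yield a different value on each invocation.

```java
import java.util.Random;
import java.util.function.BiFunction;

public class SumUdf {
    public static void main(String[] args) {
        // udf(x, y) = x + y: deterministic as a function ...
        BiFunction<Double, Double, Double> udf = (x, y) -> x + y;

        // ... same inputs, same output:
        System.out.println(udf.apply(1.0, 2.0)); // 3.0
        System.out.println(udf.apply(1.0, 2.0)); // 3.0

        // But udf(rand(), rand()) as a whole expression is not
        // deterministic: each call sees fresh random inputs.
        Random rand = new Random();
        System.out.println(udf.apply(rand.nextDouble(), rand.nextDouble()));
        System.out.println(udf.apply(rand.nextDouble(), rand.nextDouble()));
    }
}
```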
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r111310969
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -121,7 +121,12 @@
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r111310979
--- Diff:
sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkExecuteStatementOperation.scala
---
@@ -242,7 +247,12 @@
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r111310863
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,16 @@ object SQLConf {
.booleanConf
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17608
> maybe you can show a more concrete example of the URL as generated by the
UI, and exactly what it is interpreted as, and the error page. This isn't very
clear now.
+1.
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17581
**[Test build #75756 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75756/testReport)**
for PR 17581 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17581
ok to test
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17626
Even if the input expressions are not deterministic, the output could
still be deterministic.
If we already make an assumption that ScalaUDF is deterministic, we should
make it behave
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17495
**[Test build #75755 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75755/testReport)**
for PR 17495 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17459
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75754/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17459
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17459
**[Test build #75754 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75754/testReport)**
for PR 17459 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17626
Hmm, I think the deterministic assumption on UDF functions is a bit
different from the `deterministic` flag of `ScalaUDF`.
Even if your UDF functions are deterministic, if the input
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17459
Only one comment remaining on the test.
Btw, @johnc1231, do you think you could run a simple benchmark, so we
know whether this change improves performance too?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17608
After replacing `%23` with `#`, it seems to show the contents correctly, as
below:
![2017-04-13 12 41
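The substitution being discussed hinges on `%23` being the percent-encoding of `#`. A minimal illustration using the JDK's `URLDecoder` (the URL here is hypothetical, not the actual one from the report):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class HashDecode {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // %23 is the percent-encoding of '#'; a fragment that was encoded
        // an extra time shows up as %23 instead of # in the address bar.
        String encoded = "history/app-1/jobs/job?id=0%23completed"; // hypothetical
        String decoded = URLDecoder.decode(encoded, "UTF-8");
        System.out.println(decoded); // history/app-1/jobs/job?id=0#completed
    }
}
```

Note that `URLDecoder` also turns `+` into a space (form-encoding semantics), which is fine here since the string contains none.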
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17608
I can reproduce this bug by following the steps provided in the JIRA -
https://issues.apache.org/jira/secure/attachment/12862828/jobs.png and
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r111305748
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrixSuite.scala
---
@@ -87,19 +87,92 @@ class IndexedRowMatrixSuite
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r111305664
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrixSuite.scala
---
@@ -87,19 +87,92 @@ class IndexedRowMatrixSuite
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/17556
I have run all the MLlib unit test cases in Python. However, I am not
familiar with R, and I don't want to waste too much time on setting up an R
environment.
Could CI retest the PR? We can
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111303973
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111303746
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17610
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75751/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17610
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17610
**[Test build #75751 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75751/testReport)**
for PR 17610 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17626
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17626
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75753/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17626
**[Test build #75753 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75753/testReport)**
for PR 17626 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17459
**[Test build #75754 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75754/testReport)**
for PR 17459 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17626
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75752/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17626
Merged build finished. Test PASSed.
---
Github user johnc1231 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r111302756
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrixSuite.scala
---
@@ -87,21 +87,74 @@ class IndexedRowMatrixSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17626
**[Test build #75752 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75752/testReport)**
for PR 17626 at commit
Github user johnc1231 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17459#discussion_r111302391
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/linalg/distributed/IndexedRowMatrix.scala
---
@@ -108,8 +108,64 @@ class IndexedRowMatrix
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111300760
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user weiqingy commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111300718
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext:
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111299625
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111299488
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111298559
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111298162
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111296917
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17608
The browser is Chrome.
The Spark version is 2.1.0.
It must be the History Server web UI.
My URL:
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17527
If you mean column names becoming `ı` instead of `I` in the Turkish locale, it
might be correct per the discussion above, since `ı` is the correct lower case
of `I` in the Turkish locale.
I would like to know
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/17626#discussion_r111292278
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ScalaUDF.scala
---
@@ -45,6 +45,9 @@ case class ScalaUDF(
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111292239
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager(
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17506#discussion_r111292109
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisInputDStream.scala
---
@@ -267,7 +267,7 @@ object
Github user yssharma commented on a diff in the pull request:
https://github.com/apache/spark/pull/17506#discussion_r111291842
--- Diff:
external/kinesis-asl/src/test/scala/org/apache/spark/streaming/kinesis/KinesisStreamSuite.scala
---
@@ -233,11 +241,15 @@ abstract class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17626
**[Test build #75753 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75753/testReport)**
for PR 17626 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75749/
Test FAILed.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17610
LGTM.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17610
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17527
@HyukjinKwon You can set the locale to `tr`. You will see the test failure.
The test cases failed because the column names are incorrectly set.
---
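The locale sensitivity behind this test failure is easy to reproduce on the JVM: in the Turkish locale, uppercase `I` lowercases to the dotless `ı` (U+0131) rather than `i`.

```java
import java.util.Locale;

public class TurkishLocale {
    public static void main(String[] args) {
        Locale tr = new Locale("tr");
        // In Turkish, uppercase I maps to dotless ı, not i:
        System.out.println("I".toLowerCase(tr));          // ı (U+0131)
        System.out.println("I".toLowerCase(Locale.ROOT)); // i
        // So a column name like "ID" lowercased under tr becomes "ıd":
        System.out.println("ID".toLowerCase(tr));         // ıd
    }
}
```

This is why locale-sensitive `toLowerCase`/`toUpperCase` calls on identifiers can break tests when the default locale is `tr`.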
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17624
thanks for the review, merging to master!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75749 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75749/testReport)**
for PR 17540 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17610
Merging to master and 2.1. Thanks!
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17626
cc @cloud-fan @dongjoon-hyun
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17610
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17626
**[Test build #75752 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75752/testReport)**
for PR 17626 at commit
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17626
[SPARK-20315] [SQL] Set ScalaUDF's deterministic to true
### What changes were proposed in this pull request?
ScalaUDF is always assumed to be deterministic, based on the previous
discussion
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17624
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17610
**[Test build #75751 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75751/testReport)**
for PR 17610 at commit
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17610
I removed the lock and changed `stopping` to an `AtomicBoolean` to ensure
idempotence.
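The pattern described here can be sketched as follows (class and field names are illustrative, not the actual Spark code): an atomic compare-and-set replaces the lock, so the body of `stop()` runs at most once even under concurrent calls.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class Stoppable {
    private final AtomicBoolean stopping = new AtomicBoolean(false);
    public int stopCount = 0; // public only to observe idempotence in this sketch

    public void stop() {
        // compareAndSet flips false -> true exactly once; every later
        // (or concurrent) caller sees true and returns immediately.
        if (!stopping.compareAndSet(false, true)) {
            return;
        }
        stopCount++; // real cleanup would go here
    }

    public static void main(String[] args) {
        Stoppable s = new Stoppable();
        s.stop();
        s.stop();
        s.stop();
        System.out.println(s.stopCount); // 1
    }
}
```

Unlike a plain `boolean` guarded by a lock, `compareAndSet` makes the check-and-flip a single atomic step without blocking other threads.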
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r111287664
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -320,14 +321,35 @@ private[history] class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17590
This is my thought on it.
- **variable names**
in [SPARK-6813](https://issues.apache.org/jira/browse/SPARK-6813) ...
> We could have a R style guide based on the one from
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17586#discussion_r111279320
--- Diff: python/pyspark/ml/classification.py ---
@@ -172,6 +172,47 @@ def intercept(self):
"""
return self._call_java("intercept")
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17625
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75750/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17625
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17625
**[Test build #75750 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75750/testReport)**
for PR 17625 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17625
**[Test build #75750 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75750/testReport)**
for PR 17625 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111274638
--- Diff:
core/src/main/scala/org/apache/spark/network/netty/NettyBlockTransferService.scala
---
@@ -22,8 +22,12 @@ import java.nio.ByteBuffer
import
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17625
Jenkins, ok to test
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111275191
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -259,6 +290,46 @@ private[spark] class EventLoggingListener(
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111275150
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -259,6 +290,46 @@ private[spark] class EventLoggingListener(
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111274944
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -87,6 +88,10 @@ private[spark] class EventLoggingListener(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75749 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75749/testReport)**
for PR 17540 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75748 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75748/testReport)**
for PR 17540 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75748/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17540
Build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17625
Can one of the admins verify this patch?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17540
**[Test build #75748 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75748/testReport)**
for PR 17540 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111233891
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler(
}
}
+
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111264930
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler(
}
/**
+ *
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111271432
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler(
}
}
+
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111269273
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler(
}
/**
+ *
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111273237
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -168,6 +169,8 @@ private[spark] class TaskSetManager(
t.epoch
GitHub user jsoltren opened a pull request:
https://github.com/apache/spark/pull/17625
[SPARK-9103][WIP] Add Memory Tracking UI and track Netty memory usage
This patch resurrects https://github.com/apache/spark/pull/7753 by
liyezhang556520.
## What changes were proposed in
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17608
I attempted to recreate this with the latest code on Safari, Firefox and
Chrome, and everything worked fine for me. What browser/environment are you
using where you see this issue?
---