Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/22258
retest this please.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/22258
All right. It's done.
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/22258
[SPARK-25266] Fix memory leak vulnerability in Barrier Execution Mode
## What changes were proposed in this pull request?
BarrierCoordinator$ uses Timer and TimerTask. `TimerTask#cancel`
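The leak pattern referenced here can be reproduced with plain `java.util.Timer`: cancelling a `TimerTask` only marks it cancelled, and the timer's internal queue keeps a strong reference to it until `Timer#purge()` (or `Timer#cancel()`) runs. A minimal sketch of that behavior, not the actual Spark fix:

```java
import java.util.Timer;
import java.util.TimerTask;

public class Main {
    public static void main(String[] args) {
        Timer timer = new Timer("demo", true);  // daemon timer thread
        TimerTask task = new TimerTask() {
            @Override public void run() { /* never fires */ }
        };
        // Schedule far in the future so the task stays queued.
        timer.schedule(task, 1L << 40);
        // cancel() only flags the task; the queue still references it,
        // which is the leak if tasks accumulate and are never purged.
        task.cancel();
        int removed = timer.purge();  // drops cancelled tasks from the queue
        System.out.println(removed);  // 1
        timer.cancel();
    }
}
```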
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/19195#discussion_r138338917
--- Diff: docs/building-spark.md ---
@@ -111,7 +111,7 @@ should run continuous compilation (i.e. wait for
changes). However, this has not
extensively
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r138220942
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -99,6 +95,20 @@ object TPCDSQueryBenchmark
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/19195
[DOCS] Fix unreachable links in the document
## What changes were proposed in this pull request?
Recently, I found two unreachable links in the document and fixed them.
Because
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r137985658
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -99,6 +95,20 @@ object TPCDSQueryBenchmark
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18971#discussion_r134659662
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/ReplayListenerSuite.scala ---
@@ -151,7 +153,10 @@ class ReplayListenerSuite extends SparkFunSuite
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/18592
Yeah, `spark-submit --class` is correct.
The instruction at the head of `TPCDSQueryBenchmark.scala` was also wrong.
I've fixed it.
---
If your project is set up for it, you can
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r126629820
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -99,6 +99,13 @@ object TPCDSQueryBenchmark
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r126629680
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -65,8 +65,8 @@ object TPCDSQueryBenchmark
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r126629620
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -65,8 +65,8 @@ object TPCDSQueryBenchmark
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/18592#discussion_r126629648
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/benchmark/TPCDSQueryBenchmark.scala
---
@@ -65,8 +65,8 @@ object TPCDSQueryBenchmark
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/18592
[SPARK-21368][SQL] TPCDSQueryBenchmark can't refer query files.
## What changes were proposed in this pull request?
TPCDSQueryBenchmark packaged into a jar doesn't work with spark-submit
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/17248
O.K. I'll close this PR. Thanks!
---
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/17248
---
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/14719
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
I found this solution can't resolve this issue in some corner cases. I'll
close this PR for now and revise it later.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
Sorry for my late reply. I'll close this PR for now and might open another
PR in the near future. Thanks!
---
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/12257
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
@HyukjinKwon Thanks for pinging me! I still think this issue should be
fixed, but I hadn't noticed @nsyca's last comment. I'll consider the problem
he mentioned soon.
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/17149#discussion_r109866897
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -386,7 +386,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/17252
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/17252
Thanks for the comment. I understand the concern about consistency.
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/17252
[SPARK-19913][SS] Log warning rather than throw AnalysisException when
output is partitioned although format is memory, console or foreach
## What changes were proposed in this pull request
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/17248
[SPARK-19909][SS] Batches will fail in case that temporary checkpoint dir
is on local file system while metadata dir is on HDFS
## What changes were proposed in this pull request?
When we
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16625
LGTM. Merging into `master`. Thanks!
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16625#discussion_r99538768
--- Diff: docs/configuration.md ---
@@ -1797,6 +1797,20 @@ Apart from these, the following properties are also
available, and may be useful
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16625#discussion_r99540244
--- Diff: docs/security.md ---
@@ -49,10 +49,6 @@ component-specific configuration namespaces used to
override the default setting
Component
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16625#discussion_r99540452
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -394,8 +410,7 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16625
I'll take a look at this during this week.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
This change cannot be applied cleanly to `branch-2.0` and `branch-2.1`, so
please open separate PRs for those branches. Thanks.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
The latest change LGTM. Merging into `master`. Thanks @vanzin !
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97750499
--- Diff: core/src/test/scala/org/apache/spark/ui/UISuite.scala ---
@@ -227,8 +228,55 @@ class UISuite extends SparkFunSuite {
assert(newHeader
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97710897
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -274,25 +277,28 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97709181
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -274,25 +277,28 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97700450
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -274,25 +277,28 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
O.K., it's reasonable.
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97700650
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -337,17 +350,20 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97478738
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -337,17 +350,20 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97479049
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -274,25 +277,28 @@ private[spark] object JettyUtils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16681#discussion_r97460261
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/GenerateExec.scala ---
@@ -181,7 +181,13 @@ case class GenerateExec(
val row
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/16681
[SPARK-19334][SQL] Fix the code injection vulnerability related to Generator
functions.
## What changes were proposed in this pull request?
Similar to SPARK-15165, codegen is in danger
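The class of bug mentioned here comes from splicing an unescaped, user-controlled string into generated Java source, where a quote character can terminate the literal and inject arbitrary code. A hypothetical illustration of the escaping idea — `escapeForJavaLiteral` is an invented helper, not Spark's actual code:

```java
public class Main {
    // Hypothetical helper: escape characters that could terminate a Java
    // string literal when a value is spliced into generated source.
    static String escapeForJavaLiteral(String s) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '\\': sb.append("\\\\"); break;
                case '"':  sb.append("\\\""); break;
                case '\n': sb.append("\\n");  break;
                case '\r': sb.append("\\r");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A malicious value that tries to break out of the generated literal.
        String attr = "x\"; /* injected */ String y = \"";
        // With escaping, the whole value stays inside one string literal
        // instead of becoming compilable injected statements.
        String gen = "String name = \"" + escapeForJavaLiteral(attr) + "\";";
        System.out.println(gen);
    }
}
```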
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16653
I tried to find other instances but I found none.
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16582#discussion_r97018717
--- Diff: core/src/main/scala/org/apache/spark/ui/JettyUtils.scala ---
@@ -306,23 +311,31 @@ private[spark] object JettyUtils extends Logging
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/16653
[SPARK-19302][DOC][MINOR] Fix the wrong item format in security.md
## What changes were proposed in this pull request?
In docs/security.md, there is a description as follows
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
I understand. If there are no additional comments from anyone by tomorrow,
I'll merge this.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
@vanzin I'm looking into this change; it works well in standalone mode but
doesn't in YARN mode.
I think it is because the ResourceManager's web proxy might not handle HTTPS
properly
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16582
I'll take a look at this within the weekend.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
retest this please.
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/16338#discussion_r93677926
--- Diff: core/src/main/resources/org/apache/spark/ui/static/webui.css ---
@@ -246,4 +246,8 @@ a.expandbutton {
text-align: center;
margin: 0
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16338
retest this please.
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/16338
[SPARK-18837][WEBUI] Very long stage descriptions do not wrap in the UI
## What changes were proposed in this pull request?
This issue was reported by @wangyum.
In the AllJobsPage
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16190
Also, the Spark shell prints the following.
```
Spark context Web UI available at http://192.168.1.1:4040
```
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16190
Of course, HistoryServer works with SSL enabled.
![ssl2](https://cloud.githubusercontent.com/assets/4736016/20960966/4a291b3a-bca6-11e6-916f-5806fc88cbef.png)
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16190
cc: @mengxr
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16190
I've confirmed I can access MasterPage and WorkerPage with SSL enabled.
![ssl0](https://cloud.githubusercontent.com/assets/4736016/20960677/eadae416-bca4-11e6-82c4-df14ae3b0e86.png)
![ssl1
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/16190
@viirya Thanks! I've fixed it.
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/16190
[SPARK-18761][WEBUI] Web UI should be http:4040 instead of https:4040
## What changes were proposed in this pull request?
When SSL is enabled, the Spark shell shows:
```
Spark
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
retest this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
retest this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
O.K. Merging into `master`, `branch-2.0` and `branch-2.1`.
Thanks @moomindani !
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
@moomindani Actually, I found another "64 MB" in docs/tuning.md.
Could you fix it too?
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
@HyukjinKwon Ah, exactly. We have three descriptions of "64MB", for
Scala/Java/Python.
@moomindani Could you fix the remaining two "64MB"s?
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
LGTM
cc: @tgravescs @srowen
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15879
ok to test.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15862
Merging into `master`/`branch-2.0`/`branch-2.1`. Thanks.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15862
LGTM
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
retest this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
Not my day...
retest this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
retest this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
@ajbozarth Your concern seems reasonable, but I think there might be a
better way than you suggested. One possibility is to toggle the appearance of
the User column with a checkbox on demand. I think it's
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
@tdas @ajbozarth Sorry for the late response. I'm attaching new screen
captures as follows.
* AllJobsPage
https://cloud.githubusercontent.com/assets/4736016/20036511/91e3b7da-a44d-11e6
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/12257#discussion_r85635079
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/JobsTab.scala ---
@@ -31,8 +31,6 @@ private[ui] class JobsTab(parent: SparkUI) extends
SparkUITab
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15611
LGTM. Merging into `master` and `branch-2`. Thanks @hayashidac .
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15611
ok to test.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15611
While reviewing, I noticed that all occurrences of `spark.ui.ssl` in
`SSLOptionsSuite.scala` should be `spark.ssl.ui`. It's very minor, so you can
fix it too within this PR if you would like to.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15611
Jenkins, test this please.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15611
Hey @hayashidac, you didn't need to close this PR. You should have just
pushed your change...
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/12257
@ajbozarth @tdas Thanks for your interest. I'll rebase this to master and
reflect the comments.
---
GitHub user sarutak opened a pull request:
https://github.com/apache/spark/pull/15439
[SPARK-17880][DOC] The url linking to `AccumulatorV2` in the document is
incorrect.
## What changes were proposed in this pull request?
In `programming-guide.md`, the url which links
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
I noticed `HiveDataFrameJoinSuite` expects self-joins to be supported, as
follows.
```
checkAnswer(
df.join(df, df("key") === df("Key")),
Row(1,
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
In the current commit (b778b5d) I tried changing the code to prohibit direct
self-joins.
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14719#discussion_r80852382
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -672,6 +684,21 @@ class Analyzer(
exprs.exists
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
@nsyca I took a look at the problem you mentioned, but I think it's not related
to this issue. It's an issue in the optimization logic that converts `not in`
into an anti join.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
@nsyca Thanks for letting me know. I'll take a look soon.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15217
@yanboliang Would you close this PR yourself? It may not be closed
automatically on merging.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15217
LGTM, Merging into `branch-2.0`. Thanks!
---
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14719#discussion_r78191196
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1580,6 +1583,28 @@ class DataFrameSuite extends QueryTest
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14719#discussion_r78190424
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -1580,6 +1583,28 @@ class DataFrameSuite extends QueryTest
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14719#discussion_r78126865
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -683,8 +710,14 @@ class Analyzer(
try
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/15008
LGTM. Pending Jenkins.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14960
I noticed this PR cannot be merged cleanly into `branch-2.0`.
@HyukjinKwon If you would like to merge this into `branch-2.0`, please feel
free to open another PR.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14960
LGTM. Merging this into `master` and `branch-2.0`. Thanks @HyukjinKwon !
And thanks @shivaram and @felixcheung for the review !
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14960
I found we can replace `FileSystem.get` in `SparkContext#hadoopFile` and
`SparkContext#newAPIHadoopFile` with `FileSystem.getLocal`, as
`SparkContext#hadoopRDD` does, so once they are replaced, we need
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77575448
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1900,7 +1900,20 @@ private[spark] object Utils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77574419
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1900,7 +1900,20 @@ private[spark] object Utils extends Logging
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/14960#discussion_r77519854
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1900,7 +1900,20 @@ private[spark] object Utils extends Logging
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14719
I've attached a document describing a proposed solution.
https://issues.apache.org/jira/browse/SPARK-17154
CC: @marmbrus
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/14900
retest this please.
---
1 - 100 of 1302 matches