Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17502
**[Test build #75442 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75442/testReport)**
for PR 17502 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109278537
--- Diff: R/pkg/R/catalog.R ---
@@ -0,0 +1,478 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17502
[SPARK-19148][SQL][follow-up] do not expose the external table concept in
Catalog
### What changes were proposed in this pull request?
After we rename `Catalog.createExternalTable` to
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17502
cc @cloud-fan @hvanhovell
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17491
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75439/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17491
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17491
**[Test build #75439 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75439/testReport)**
for PR 17491 at commit
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@srowen @jerryshao
If a Spark application developer uses event log compression, they will not see
from the official Spark documentation that spark.io.compression.codec specifies
the compression
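For reference, event log compression is controlled by the settings below. This is a minimal spark-defaults.conf sketch; the values shown are illustrative, and lz4 is the documented default codec:

```properties
# spark-defaults.conf -- minimal sketch, values are illustrative
spark.eventLog.enabled     true
# When true, event logs are compressed...
spark.eventLog.compress    true
# ...using the codec named here (lz4 is the default), which is the
# connection the comment above says the docs do not make explicit.
spark.io.compression.codec lz4
```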
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16989
**[Test build #75441 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75441/testReport)**
for PR 16989 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17415
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17415
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75438/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17415
**[Test build #75438 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75438/testReport)**
for PR 17415 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109276366
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -645,16 +645,17 @@ test_that("test tableNames and tables", {
df <- read.json(jsonPath)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17394
**[Test build #75440 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75440/testReport)**
for PR 17394 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17491
**[Test build #75439 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75439/testReport)**
for PR 17491 at commit
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17480
@jerryshao Yes.
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@jerryshao
This is just an optimization suggestion.
---
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17498
@srowen I added a space
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17501
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75437/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17501
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17501
**[Test build #75437 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75437/testReport)**
for PR 17501 at commit
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17498
IMHO I think this is still an unnecessary fix. I doubt users would really get
confused without it. You can always correct me, since I stand on the side of
developers :).
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
@tgravescs @vanzin , would you please help reviewing this PR. Thanks a lot.
---
Github user ron8hu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17415#discussion_r109273331
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -550,6 +565,140 @@ case
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/17481
@ajbozarth
I currently need to constantly switch pageSize to change the paged data.
Sometimes I want to see all the data, but when all the data is shown, the
paging area is lost. When I quit, I have to
Github user ron8hu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17415#discussion_r109273334
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -550,6 +565,140 @@ case
Github user ron8hu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17415#discussion_r109273326
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -550,6 +565,140 @@ case
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17415
**[Test build #75438 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75438/testReport)**
for PR 17415 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109272800
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
*
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109271875
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf)
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17501
**[Test build #75437 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75437/testReport)**
for PR 17501 at commit
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17501
@sethah Here's a first easy one : )
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109271419
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -498,3 +498,31 @@ object
GitHub user jkbradley opened a pull request:
https://github.com/apache/spark/pull/17501
[SPARK-20183][ML] Added outlierRatio arg to
MLTestingUtils.testOutliersWithSmallWeights
## What changes were proposed in this pull request?
This is a small piece from
Github user ron8hu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17415#discussion_r109268076
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -550,6 +565,140 @@ case
Github user ron8hu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17415#discussion_r109267675
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/statsEstimation/FilterEstimation.scala
---
@@ -550,6 +565,140 @@ case
Github user hhbyyh commented on the issue:
https://github.com/apache/spark/pull/17336
The major thing I'm concerned about is that `transform` will have to recompute
the association rules each time it's invoked. If that's not a problem,
changing associationRules to a method would be much
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/17336
Thanks for this PR! Do you think it's worth adding the caching logic? I'm
now wondering if we should change associationRules into a method which
recomputes the DataFrame every time it is
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17394#discussion_r109265858
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -91,15 +98,27 @@ case class CatalogTablePartition(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17394#discussion_r109265870
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -500,7 +500,6 @@ case class TruncateTableCommand(
case
Github user kunalkhamar commented on the issue:
https://github.com/apache/spark/pull/17486
@gatorsmile Verified the behaviour using this, it makes `plan` null upon
deserialization.
```
import java.io._
import org.apache.spark.sql.AnalysisException
lazy val
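The behaviour being verified here, a field coming back as null after deserialization, can be sketched outside Spark with plain Java serialization of a transient field (the class names below are hypothetical, not Spark's actual `AnalysisException`):

```java
import java.io.*;

// Hypothetical stand-in for an exception carrying a non-serializable plan.
// A transient field is skipped during serialization and restored to its
// default value (null for references) on deserialization.
class ExceptionWithPlan extends Exception {
    transient Object plan;
    ExceptionWithPlan(String msg, Object plan) {
        super(msg);
        this.plan = plan;
    }
}

public class TransientDemo {
    // Serialize an object to bytes and read it back.
    static Object roundTrip(Object o) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(o);
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return in.readObject();
    }

    public static void main(String[] args) throws Exception {
        ExceptionWithPlan e =
            new ExceptionWithPlan("boom", "plan placeholder");
        ExceptionWithPlan copy = (ExceptionWithPlan) roundTrip(e);
        System.out.println(copy.plan);         // null after deserialization
        System.out.println(copy.getMessage()); // message survives
    }
}
```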
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17500
Can one of the admins verify this patch?
---
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109257813
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -645,16 +645,17 @@ test_that("test tableNames and tables", {
df <- read.json(jsonPath)
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109258117
--- Diff: R/pkg/inst/tests/testthat/test_sparkSQL.R ---
@@ -2977,6 +2981,51 @@ test_that("Collect on DataFrame when NAs exists at
the top of a timestamp
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/17483#discussion_r109258347
--- Diff: R/pkg/R/catalog.R ---
@@ -0,0 +1,478 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor
GitHub user d2r opened a pull request:
https://github.com/apache/spark/pull/17500
[SPARK-20181] [CORE] tries to bind the port to avoid jetty noise
## What changes were proposed in this pull request?
Try to bind the desired port and release it before entering Jetty code, so
Github user dgingrich commented on the issue:
https://github.com/apache/spark/pull/16845
Ping! Let me know if you need more work from me.
---
Github user dgingrich commented on the issue:
https://github.com/apache/spark/pull/17227
Ping! Let me know if you need more work from me.
---
Github user sahilTakiar commented on the issue:
https://github.com/apache/spark/pull/17499
Thanks for taking a look, everyone. The original motivation for this PR comes
from [HIVE-13517](https://issues.apache.org/jira/browse/HIVE-13517). It was said to
be useful for debugging HoS
Github user jkbradley commented on the issue:
https://github.com/apache/spark/pull/16722
OK thanks! I'll send an update soon.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17499
The bug is short on details about what exactly this helps with. Do you have
a specific situation where you found that knowing the thread helped debug
something?
I'm a little wary of adding
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17499
CC maybe @vanzin or @squito who might comment on whether this is disruptive
for apps that parse the logs?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17499
Can one of the admins verify this patch?
---
Github user sahilTakiar commented on the issue:
https://github.com/apache/spark/pull/17499
I'm not sure how the community feels about adding this to the default log4j
files, so I'm posting this as a reference for now. Some more details are in the
JIRA, but this can improve debuggability,
GitHub user sahilTakiar opened a pull request:
https://github.com/apache/spark/pull/17499
[SPARK-20161][CORE] Default log4j properties file should print thread-id in
ConversionPattern
## What changes were proposed in this pull request?
Change the default log4j properties
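A sketch of what such a change might look like in a log4j 1.x properties file; the surrounding appender setup is an assumption based on Spark's template, with `%t` added to the ConversionPattern to print the thread name:

```properties
# Sketch only -- Spark's actual default pattern may differ slightly.
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
# %t prints the name of the thread that generated the logging event
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %t %p %c{1}: %m%n
```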
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17481
I'm not quite sure what you mean by cached, but the way paging is implemented,
every time you change the row count or page number it's a new page
load/refresh, so the latest data will be shown (if
Github user map222 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109225499
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not iterable")
#
Github user map222 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109225302
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not iterable")
#
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109222591
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
*
Github user map222 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17469#discussion_r109217816
--- Diff: python/pyspark/sql/column.py ---
@@ -250,11 +250,39 @@ def __iter__(self):
raise TypeError("Column is not iterable")
#
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17488
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109211310
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17394#discussion_r109205777
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -91,15 +98,27 @@ case class CatalogTablePartition(
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17394#discussion_r109205166
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -500,7 +500,6 @@ case class TruncateTableCommand(
case
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17484
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17484
LGTM, merging to master!
---
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109203721
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17412
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17412
LGTM. Merging to master / 2.1.
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17487#discussion_r109201000
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveTableValuedFunctions.scala
---
@@ -105,7 +105,7 @@ object
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16989
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75436/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16989
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16989
**[Test build #75436 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75436/testReport)**
for PR 16989 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17487#discussion_r109198968
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/ResolveTableValuedFunctions.scala
---
@@ -105,7 +105,7 @@ object
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109198895
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -498,3 +498,31 @@ object RewriteCorrelatedScalarSubquery
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17486
Thanks! Merging to master and 2.1.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17486
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17486
@kunalkhamar Could you summarize the issue and post it in
https://github.com/databricks/scala-style-guide? Thanks!
---
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109192716
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
*
Github user dilipbiswal commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109191711
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -498,3 +498,31 @@ object
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109190009
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -498,3 +498,31 @@ object
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16989
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/75435/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16989
Build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16989
**[Test build #75435 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75435/testReport)**
for PR 16989 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109186662
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
*
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
> AFAIK we don't record block update events in history server, so we could
not calculate the used memory from event log.
good point, sorry, I had totally forgotten about it. Seems like this
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109184652
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf)
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109183735
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -90,11 +90,12 @@ trait PredicateHelper {
*
Github user nsyca commented on a diff in the pull request:
https://github.com/apache/spark/pull/17491#discussion_r109181709
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/subquery.scala
---
@@ -498,3 +498,31 @@ object RewriteCorrelatedScalarSubquery
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/17485#discussion_r109174578
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -768,6 +767,19 @@ private[spark] class TaskSetManager(
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17498#discussion_r109174272
--- Diff: docs/configuration.md ---
@@ -773,14 +774,15 @@ Apart from these, the following properties are also
available, and may be useful
true
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/17498#discussion_r109172977
--- Diff: docs/configuration.md ---
@@ -773,14 +774,15 @@ Apart from these, the following properties are also
available, and may be useful
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/17498#discussion_r109167234
--- Diff: docs/configuration.md ---
@@ -773,14 +774,15 @@ Apart from these, the following properties are also
available, and may be useful
true
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17498
Can one of the admins verify this patch?
---
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/17498
[SPARK-20177] Document about compression way has some little detail changes.
## What changes were proposed in this pull request?
Document compression way little
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16989
**[Test build #75436 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/75436/testReport)**
for PR 16989 at commit
Github user Stibbons commented on the issue:
https://github.com/apache/spark/pull/13599
+1
---
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/16998
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16998
Once we've added the flag, this issue is not urgent for now. I'll close it for now.
---
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/16785
---