Github user wuciawe commented on the issue:
https://github.com/apache/spark/pull/17190
@GaalDornick hi, I think
```
def quote(colName: String): String = {
s$colName
}
```
should be
```
def quote(colName: String): String = {
Github user 10110346 commented on the issue:
https://github.com/apache/spark/pull/19572
cc @sameeragarwal @ericl
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19572
**[Test build #83035 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83035/testReport)**
for PR 19572 at commit
[`95ae9c3`](https://github.com/apache/spark/commit/95
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/19572
[SPARK-22349] In on-heap mode, when allocating memory from pool, we should
fill memory with `MEMORY_DEBUG_FILL_CLEAN_VALUE`
## What changes were proposed in this pull request?
In on-heap mode, w
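As a rough illustration of the idea in the title (not the PR's actual patch; the helper name is made up, and the use of Spark's `Platform` and `MemoryAllocator` constants is an assumption), filling a pooled on-heap buffer with the debug "clean" value could look like:
```
import org.apache.spark.unsafe.Platform
import org.apache.spark.unsafe.memory.MemoryAllocator

// Hypothetical helper: when debug filling is enabled, overwrite a pooled
// on-heap long[] buffer with the "clean" marker byte before reuse.
def fillClean(buffer: Array[Long], sizeInBytes: Long): Unit = {
  if (MemoryAllocator.MEMORY_DEBUG_FILL_ENABLED) {
    Platform.setMemory(buffer, Platform.LONG_ARRAY_OFFSET, sizeInBytes,
      MemoryAllocator.MEMORY_DEBUG_FILL_CLEAN_VALUE)
  }
}
```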
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19571
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19571
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83030/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19571
**[Test build #83030 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83030/testReport)**
for PR 19571 at commit
[`be7ba9b`](https://github.com/apache/spark/commit/b
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83032/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83032 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83032/testReport)**
for PR 19569 at commit
[`568e791`](https://github.com/apache/spark/commit/5
Github user nivox commented on the issue:
https://github.com/apache/spark/pull/19217
@vanzin @ash211 I just modified the title of the PR as per your suggestion
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83031/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83031 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83031/testReport)**
for PR 19569 at commit
[`b9e238c`](https://github.com/apache/spark/commit/b
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19458
There's a UT failure
(https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83014/testReport/junit/org.apache.spark.storage/BlockIdSuite/test_bad_deserialization/).
@superbobry please
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19458
**[Test build #83034 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83034/testReport)**
for PR 19458 at commit
[`ff9a6ae`](https://github.com/apache/spark/commit/ff
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146763856
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/PropagateEmptyRelationSuite.scala
---
@@ -30,6 +30,7 @@ class PropagateEmp
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19556#discussion_r146763677
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -91,6 +91,50 @@ private[spark] object ClosureCleaner extends Logging {
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19458
retest this please
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146763654
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,111 @@
+/*
+ * Licens
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146763571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/ReplaceExceptWithFilter.scala
---
@@ -0,0 +1,111 @@
+/*
+ * Licens
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17100
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146761735
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -922,6 +922,17 @@ object SQLConf {
.intConf
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/19451#discussion_r146761684
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -1200,6 +1211,8 @@ class SQLConf extends Serializable with Loggi
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/18527
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17100
Thanks! Merged to master.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18527
LGTM
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18527
Thanks! Merged to master.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19570
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19570
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83029/
Test PASSed.
---
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/19560
> My main concern is that we'd better not put the burden of dealing with metastore failures on Spark
I think this makes sense. I was also thinking about this when proposing this
PR. I do agree with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19570
**[Test build #83029 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83029/testReport)**
for PR 19570 at commit
[`eab627a`](https://github.com/apache/spark/commit/e
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19556#discussion_r146760101
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -91,6 +91,50 @@ private[spark] object ClosureCleaner extends Logging {
Github user shivaram commented on a diff in the pull request:
https://github.com/apache/spark/pull/19557#discussion_r146760035
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3249,9 +3249,12 @@ setMethod("as.data.frame",
#' @note attach since 1.6.0
setMethod("attach",
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19390
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19390
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83028/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19390
**[Test build #83028 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83028/testReport)**
for PR 19390 at commit
[`c90d351`](https://github.com/apache/spark/commit/c
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19550
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19550
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83033/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19550
**[Test build #83033 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83033/consoleFull)**
for PR 19550 at commit
[`3a5b6fa`](https://github.com/apache/spark/commit/
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/19560
My main concern is that we'd better not put the burden of dealing with
metastore failures on Spark, because Spark doesn't have control over metastores. The
system using Spark and the metastore should be responsible
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/19556#discussion_r146759438
--- Diff: core/src/main/scala/org/apache/spark/util/ClosureCleaner.scala ---
@@ -91,6 +91,50 @@ private[spark] object ClosureCleaner extends Logging {
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/19569
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/19569
good catch! merging to master, thanks!
---
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/19560
> Users usually do not know there is an error in the stats.
Aren't there any exceptions or error messages when updating the table/stats
fails? I suppose the system is able to know it through logging or prot
Github user smurching commented on the issue:
https://github.com/apache/spark/pull/19433
@WeichenXu123 Thanks for the comments! I'll respond inline:
> In your doc, you said "Specifically, we only need to store sufficient
stats for each bin of a single feature, as opposed to ea
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19557#discussion_r146757169
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3249,9 +3249,12 @@ setMethod("as.data.frame",
#' @note attach since 1.6.0
setMethod("attach",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19557#discussion_r146757193
--- Diff: R/pkg/R/DataFrame.R ---
@@ -3249,9 +3249,12 @@ setMethod("as.data.frame",
#' @note attach since 1.6.0
setMethod("attach",
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/19557#discussion_r146757250
--- Diff: R/run-tests.sh ---
@@ -38,6 +38,7 @@ FAILED=$((PIPESTATUS[0]||$FAILED))
NUM_CRAN_WARNING="$(grep -c WARNING$ $CRAN_CHECK_LOG_FILE)"
N
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19568#discussion_r146757237
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -585,21 +585,26 @@ case class SortMergeJoinExec(
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19568#discussion_r146756914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -585,21 +585,26 @@ case class SortMergeJoinExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19550
**[Test build #83033 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83033/consoleFull)**
for PR 19550 at commit
[`3a5b6fa`](https://github.com/apache/spark/commit/3
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/19550
Jenkins, retest this please
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19568#discussion_r146755690
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -615,6 +620,7 @@ case class SortMergeJoinExec(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83027/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19569
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83027 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83027/testReport)**
for PR 19569 at commit
[`f2c8266`](https://github.com/apache/spark/commit/f
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19480
Sounds good to me. Sorry for being late since I was busy last week.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19571#discussion_r146752367
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala ---
@@ -252,6 +253,13 @@ private[orc] class OrcOutputWriter(
overr
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19571#discussion_r146752170
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcFileFormat.scala ---
@@ -252,6 +253,13 @@ private[orc] class OrcOutputWriter(
overr
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19569
also cc @cloud-fan for review.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19571#discussion_r146751242
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/orc/OrcFileFormat.scala
---
@@ -39,4 +45,33 @@ private[sql] object OrcFileForma
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83032 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83032/testReport)**
for PR 19569 at commit
[`568e791`](https://github.com/apache/spark/commit/56
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19571
**[Test build #83030 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83030/testReport)**
for PR 19571 at commit
[`be7ba9b`](https://github.com/apache/spark/commit/be
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83031 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83031/testReport)**
for PR 19569 at commit
[`b9e238c`](https://github.com/apache/spark/commit/b9
GitHub user dongjoon-hyun opened a pull request:
https://github.com/apache/spark/pull/19571
[SPARK-15474][SQL] Write and read back non-empty schema with empty dataframe
## What changes were proposed in this pull request?
Previously, the ORC file format could not write a correct schema
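A minimal reproduction sketch of the scenario in the title (the path and the `spark` session are assumptions, not taken from the PR):
```
import spark.implicits._

// Write a zero-row DataFrame that still has a real schema, then read it back.
val empty = spark.range(10).where("id < 0").select($"id".as("a"))
empty.write.mode("overwrite").orc("/tmp/empty_orc")

// With the fix, the schema (a: bigint) survives the round trip instead of the
// read failing because no schema can be inferred from the empty ORC files.
spark.read.orc("/tmp/empty_orc").printSchema()
```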
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/19569#discussion_r146749535
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -201,35 +193,50 @@ case class InMemoryTableScan
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/19569#discussion_r146748032
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/InMemoryTableScanExec.scala
---
@@ -201,35 +193,50 @@ case class InMemoryTableS
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r146747982
--- Diff: python/pyspark/serializers.py ---
@@ -224,7 +225,13 @@ def _create_batch(series):
# If a nullable integer series has been promoted to float
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19570
**[Test build #83029 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83029/testReport)**
for PR 19570 at commit
[`eab627a`](https://github.com/apache/spark/commit/ea
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/19570
[SPARK-22335][SQL] Clarify union behavior on Dataset of typed objects in
the document
## What changes were proposed in this pull request?
Seems that end users can be confused by the union's
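As a concrete illustration of the behavior this doc change clarifies (the class and values below are made up): `union` resolves columns by position, not by field name.
```
import spark.implicits._

case class Rec(a: Int, b: Int)

val ds1 = Seq(Rec(1, 2)).toDS()
// A Dataset[Rec] whose underlying columns happen to be ordered b, a:
val ds2 = Seq((20, 10)).toDF("b", "a").as[Rec]

// union combines columns positionally, so ds2's "b" values land under "a".
ds1.union(ds2).show()

// Reordering with select (or using unionByName) avoids the surprise.
ds1.union(ds2.select($"a", $"b").as[Rec]).show()
```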
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/19560
@wzhfy
Thanks for the comment; I see your point.
In my cluster, the namenode is under heavy pressure, so errors in stats happen
with high probability. Users usually do not know there is an error in the stats. T
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19568
Could you please change the title from `SPARK-22345` to `[SPARK-22345]`?
---
Github user kiszk commented on a diff in the pull request:
https://github.com/apache/spark/pull/19568#discussion_r146743704
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/joins/SortMergeJoinExec.scala
---
@@ -615,6 +620,7 @@ case class SortMergeJoinExec(
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19563
ping @cloud-fan
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19569
@kiszk Thanks. I've roughly checked existing tests. Seems that there are
related ones for pruning the table cache. Let me see if I can add one.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19390
**[Test build #83028 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83028/testReport)**
for PR 19390 at commit
[`c90d351`](https://github.com/apache/spark/commit/c9
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/19569
Good catch, thank you. Would it be possible to add a test case for pruning
with table cache?
---
Github user mpjlu commented on a diff in the pull request:
https://github.com/apache/spark/pull/19516#discussion_r146741139
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/ChiSqSelector.scala ---
@@ -291,9 +291,13 @@ final class ChiSqSelectorModel private[ml] (
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19569
**[Test build #83027 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83027/testReport)**
for PR 19569 at commit
[`f2c8266`](https://github.com/apache/spark/commit/f2
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19390
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19390
**[Test build #83026 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83026/testReport)**
for PR 19390 at commit
[`9ca6902`](https://github.com/apache/spark/commit/9
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19390
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83026/
Test FAILed.
---
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/19569
[SPARK-22348][SQL] The table cache providing ColumnarBatch should also do
partition batch pruning
## What changes were proposed in this pull request?
We enable table cache `InMemoryTableSca
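As a rough sketch of where this matters (the table and filter are made up, not taken from the PR): with a cached relation, per-batch column statistics can let the scan skip whole cached batches for selective filters.
```
import spark.implicits._

val df = spark.range(0, 1000000).toDF("id")
df.cache().count()                      // materialize the in-memory columnar cache

// A highly selective predicate: batch-level min/max stats on "id" allow most
// cached batches to be pruned instead of being decompressed and scanned.
df.filter($"id" === 999999L).count()
```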
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19390
**[Test build #83026 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83026/testReport)**
for PR 19390 at commit
[`9ca6902`](https://github.com/apache/spark/commit/9c
Github user skonto commented on the issue:
https://github.com/apache/spark/pull/19390
retest this please
---
Github user jmchung commented on the issue:
https://github.com/apache/spark/pull/19567
Thanks @wangyum and @viirya, I'll add the corresponding tests in
`PostgresIntegrationSuite`.
To @viirya: I'm not sure whether the other data types will work; I'll consider
including them in the tests, thanks!
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19433
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19433
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83025/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19433
**[Test build #83025 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83025/testReport)**
for PR 19433 at commit
[`fd6cdbb`](https://github.com/apache/spark/commit/f
Github user wzhfy commented on the issue:
https://github.com/apache/spark/pull/19560
I wonder when this config should be used. If the user knows there's some error
in the stats, why not just analyze the table (specifying "noscan" if only the size is
needed)? That would fix the problem instead of verif
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19567
Besides uuid, other PostgreSQL types such as "cidr" and "inet" are
treated as StringType too; will they work?
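For reference, a hedged sketch of how such columns surface through the JDBC source (connection details below are placeholders): PostgreSQL columns typed uuid, inet, or cidr come back as plain strings.
```
// Placeholder URL, table, and user; any PostgreSQL table with uuid/inet/cidr
// columns would do.
val df = spark.read.format("jdbc")
  .option("url", "jdbc:postgresql://localhost:5432/testdb")
  .option("dbtable", "events")
  .option("user", "test")
  .load()

df.printSchema()   // uuid / inet / cidr columns appear as string in the schema
```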
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19567
It seems we don't even have a test against a uuid column; when you create a
test for uuid[], can you also create one for uuid? Thanks.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/19567
@jmchung You can add unit test into `PostgresIntegrationSuite`.
---
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19519#discussion_r146737263
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkApplication.scala ---
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundati
Github user rdblue commented on the issue:
https://github.com/apache/spark/pull/19568
@dongjoon-hyun, yes, I'm currently working on it. I just wanted to get the
rest up.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19383
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/83024/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/19383
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/19383
**[Test build #83024 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/83024/testReport)**
for PR 19383 at commit
[`53357a1`](https://github.com/apache/spark/commit/5
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/19433#discussion_r146735946
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tree/impl/LocalDecisionTree.scala ---
@@ -0,0 +1,250 @@
+/*
+ * Licensed to the Apache Softw