Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62216 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62216/consoleFull)**
for PR 14165 at commit
[`1ea0247`](https://github.com/apache/spark/commit/
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14106#discussion_r70585442
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -165,36 +165,48 @@ object PushProjectThroughSample ex
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14106#discussion_r70584787
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -165,36 +165,48 @@ object PushProjectThroughSample ex
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/14106#discussion_r70584778
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -165,36 +165,48 @@ object PushProjectThroughSample ex
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70583775
--- Diff: R/pkg/R/column.R ---
@@ -235,20 +248,16 @@ setMethod("cast",
function(x, dataType) {
if (is.character(dataType))
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70583496
--- Diff: R/pkg/R/column.R ---
@@ -44,6 +44,9 @@ setMethod("initialize", "Column", function(.Object, jc) {
.Object
})
+#' @rdname co
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70583340
--- Diff: R/pkg/R/SQLContext.R ---
@@ -267,6 +267,10 @@ as.DataFrame.default <- function(data, schema = NULL,
samplingRatio = 1.0) {
createDataFra
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14178
**[Test build #62228 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62228/consoleFull)**
for PR 14178 at commit
[`30c7c81`](https://github.com/apache/spark/commit/3
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/14173#discussion_r70583224
--- Diff: R/pkg/R/DataFrame.R ---
@@ -2950,6 +3038,10 @@ setMethod("drop",
})
# Expose base::drop
+#' @name drop
+#' @rd
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14176
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62227/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14176
**[Test build #62227 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62227/consoleFull)**
for PR 14176 at commit
[`a3360e0`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14176
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/14178
[SPARKR][DOCS][MINOR] R programming guide to include csv data source example
## What changes were proposed in this pull request?
Minor documentation update for code example, code style,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14177
**[Test build #62226 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62226/consoleFull)**
for PR 14177 at commit
[`1a86e85`](https://github.com/apache/spark/commit/1
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14176
**[Test build #62227 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62227/consoleFull)**
for PR 14176 at commit
[`a3360e0`](https://github.com/apache/spark/commit/a
Github user ooq commented on the issue:
https://github.com/apache/spark/pull/14174
cc @sameeragarwal @davies @rxin
---
Github user ooq commented on the issue:
https://github.com/apache/spark/pull/14176
cc @sameeragarwal @davies @rxin
---
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/14177
[SPARK-16027][SPARKR] Fix R tests SparkSession init/stop
## What changes were proposed in this pull request?
Fix R SparkSession init/stop, and warnings of reusing existing Spark Context
GitHub user ooq opened a pull request:
https://github.com/apache/spark/pull/14176
[SPARK-16525][SQL] Enable Row Based HashMap in HashAggregateExec
## What changes were proposed in this pull request?
This PR is the second step for the following feature:
For hash aggr
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14119
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14175
**[Test build #62225 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62225/consoleFull)**
for PR 14175 at commit
[`6fe96e5`](https://github.com/apache/spark/commit/6
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/14119
LGTM, I've merged this to master and branch-2.0. Thanks for working on this!
I only observed one weird rendering caused by the blank lines before `{%
include_example %}`, maybe my local Je
GitHub user sun-rui opened a pull request:
https://github.com/apache/spark/pull/14175
[SPARK-16522][MESOS] Spark application throws exception on exit.
## What changes were proposed in this pull request?
Spark applications running on Mesos throw exception upon exit. For details,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #6 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14174 at commit
[`c87f26b`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14165
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14165
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62223/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62223 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62223/consoleFull)**
for PR 14165 at commit
[`b4372f7`](https://github.com/apache/spark/commit/
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14174
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/6/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14174
**[Test build #6 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/6/consoleFull)**
for PR 14174 at commit
[`c87f26b`](https://github.com/apache/spark/commit/c
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62224 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62224/consoleFull)**
for PR 14036 at commit
[`16eff20`](https://github.com/apache/spark/commit/1
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62223 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62223/consoleFull)**
for PR 14165 at commit
[`b4372f7`](https://github.com/apache/spark/commit/b
GitHub user ooq opened a pull request:
https://github.com/apache/spark/pull/14174
[SPARK-16524][SQL] Add RowBatch and RowBasedHashMapGenerator
## What changes were proposed in this pull request?
This PR is the first step for the following feature:
For hash aggregati
Github user techaddict commented on the issue:
https://github.com/apache/spark/pull/14036
@cloud-fan Done
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62221 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62221/consoleFull)**
for PR 14165 at commit
[`1ea0247`](https://github.com/apache/spark/commit/1
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14165
**[Test build #62220 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62220/consoleFull)**
for PR 14165 at commit
[`b0a724e`](https://github.com/apache/spark/commit/b
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14036
LGTM except 2 naming comments, thanks for working on it!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70580188
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -207,20 +207,12 @@ case class Multiply(left: Expre
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14036#discussion_r70580079
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala
---
@@ -285,6 +278,28 @@ case class Divide(left: Expressi
Github user lianhuiwang commented on the issue:
https://github.com/apache/spark/pull/14111
@cloud-fan At first I implemented it as you said. But the
following situation, which has a broadcast join, produces the error 'ScalarSubquery
has not finished'; example (from SPARK-14791):
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62213/
Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14148
It's easy to infer the schema once when we create the table and store it
into the external catalog. However, it's a breaking change, which means users can't
change the underlying data file schema after
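The trade-off described above can be sketched abstractly. In this hypothetical Python illustration (not Spark code; `infer_schema`, the `catalog` dict, and the file layout are all invented for the sketch), freezing the schema at table-creation time means later changes to the data files are no longer reflected:

```python
# Hypothetical illustration of "infer once at CREATE TABLE" vs. "infer on read".
# infer_schema and the catalog dict are invented for this sketch; not Spark APIs.

def infer_schema(files):
    """Union the column names seen across all data files."""
    columns = set()
    for f in files:
        columns |= set(f.keys())
    return sorted(columns)

# Data files modeled as dicts of column -> values.
files = [{"id": [1, 2], "name": ["a", "b"]}]

# Infer once at table creation and persist in the external catalog.
catalog = {"t": {"schema": infer_schema(files)}}

# Later, the user adds a data file with an extra column.
files.append({"id": [3], "name": ["c"], "age": [30]})

frozen = catalog["t"]["schema"]  # still the creation-time schema
fresh = infer_schema(files)      # re-inference would pick up the new column

print(frozen)  # ['id', 'name']
print(fresh)   # ['age', 'id', 'name']
```

Persisting the inferred schema makes reads cheap and deterministic across restarts, but, as the comment notes, it is a behavior change: evolving the underlying files no longer changes the table's schema unless the catalog entry is refreshed.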
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62213 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62213/consoleFull)**
for PR 14036 at commit
[`8d9a04d`](https://github.com/apache/spark/commit/
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70578153
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table:
Ta
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62212/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14036
Merged build finished. Test PASSed.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/13701
@yhuai OK. Thanks for letting me know that.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14036
**[Test build #62212 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62212/consoleFull)**
for PR 14036 at commit
[`ab6858c`](https://github.com/apache/spark/commit/
Github user subrotosanyal commented on the issue:
https://github.com/apache/spark/pull/13658
hi @vanzin
Even I am surprised to see that notify was not triggered somehow.
> Is your code perhaps setting "spark.master" to "local" or something that
is not "yarn-cluster" befor
Github user lins05 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14165#discussion_r70575753
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -79,6 +79,9 @@ class SparkSession private(
sparkContext.assertNo
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14172
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62214/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14172
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14172
**[Test build #62214 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62214/consoleFull)**
for PR 14172 at commit
[`ade0ad2`](https://github.com/apache/spark/commit/
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14152#discussion_r70575075
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/Checkpoint.scala ---
@@ -18,8 +18,8 @@
package org.apache.spark.streaming
import
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
Tomorrow, I will try to dig into this deeper and check whether schema evolution
could be an issue if the schema is fixed when creating tables.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
uh... I see what you mean. Agree.
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14148
I was not talking about caching here. Caching is transient. I want the
behavior to be the same regardless of how many times I'm restarting Spark ...
And this has nothing to do with refresh. For
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70573373
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table:
Ta
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14148
@rxin Currently, we do not re-run schema inference when the metadata
cache already contains the plan. Based on my understanding, that is the major reason
why we introduced the metadata cache at the v