Github user zsxwing closed the pull request at:
https://github.com/apache/spark/pull/16979
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16979
Thanks. Merging to 2.1.
---
GitHub user maropu opened a pull request:
https://github.com/apache/spark/pull/17023
[SPARK-19695][SQL] Throw an exception if a `columnNameOfCorruptRecord`
field violates requirements
## What changes were proposed in this pull request?
This PR comes from #16928 and fixes a JSON
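A hedged sketch of the kind of check this change implies, assuming the usual
requirement that the corrupt-record column be a nullable string field (the
helper name and the plain IllegalArgumentException below are illustrative, not
the PR's actual code):

```scala
import org.apache.spark.sql.types.{StringType, StructType}

// Illustrative only: reject a user schema whose corrupt-record column is not a
// nullable string field. Spark itself would likely raise an AnalysisException;
// IllegalArgumentException keeps this sketch self-contained.
object CorruptRecordCheck {
  def verify(schema: StructType, columnNameOfCorruptRecord: String): Unit = {
    schema.getFieldIndex(columnNameOfCorruptRecord).foreach { idx =>
      val field = schema(idx)
      if (field.dataType != StringType || !field.nullable) {
        throw new IllegalArgumentException(
          s"The field for corrupt records must be a nullable string, but " +
            s"'$columnNameOfCorruptRecord' is ${field.dataType.simpleString}")
      }
    }
  }
}
```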
GitHub user ahshahid opened a pull request:
https://github.com/apache/spark/pull/17022
Aqp 271
Looks like in Spark 2.0 the optimization that represents repeated aggregates
with a single aggregate was broken by the passing of resultId: ExprId in the
constructor of AggregateExpr
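The preview is truncated, but the failure mode it describes is easy to
reproduce in isolation: once a freshly generated ID is part of a case class
constructor, two otherwise identical aggregates no longer compare equal, so any
deduplication that relies on structural equality stops collapsing them. A
minimal self-contained analogy (toy classes, not Spark's actual
AggregateExpression):

```scala
import java.util.concurrent.atomic.AtomicLong

// Toy stand-ins for Catalyst's ExprId / AggregateExpression, used only to
// illustrate the equality problem described above.
object IdGen {
  private val counter = new AtomicLong()
  def next(): Long = counter.incrementAndGet()
}

case class ToyAggregate(func: String, column: String, resultId: Long = IdGen.next())

object RepeatAggregateDemo extends App {
  val a = ToyAggregate("sum", "x")
  val b = ToyAggregate("sum", "x")
  // The distinct resultIds make the two aggregates structurally unequal, so an
  // optimization that reuses "the same" aggregate can no longer spot the repeat.
  println(a == b)                   // false
  println(Seq(a, b).distinct.size)  // 2, not 1
}
```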
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102380326
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/DeduplicationSuite.scala
---
@@ -0,0 +1,252 @@
+/*
+ * Licensed to the Apache Softwar
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17013
**[Test build #73258 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73258/testReport)**
for PR 17013 at commit
[`ed686fa`](https://github.com/apache/spark/commit/ed
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17013
**[Test build #73257 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73257/testReport)**
for PR 17013 at commit
[`5808d71`](https://github.com/apache/spark/commit/58
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102379272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -321,3 +327,66 @@ case class MapGroupsWithStateExec(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17013
**[Test build #73256 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73256/testReport)**
for PR 17013 at commit
[`91cee26`](https://github.com/apache/spark/commit/91
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16499#discussion_r102378915
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -813,7 +813,14 @@ private[spark] class BlockManager(
fals
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102378877
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -1996,7 +1996,7 @@ class Dataset[T] private[sql](
def dropDuplicates(colNames
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102378594
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
---
@@ -129,6 +156,33 @@ class UnsupportedOperat
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102378522
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/analysis/UnsupportedOperationsSuite.scala
---
@@ -129,6 +156,33 @@ class UnsupportedOperat
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17021
**[Test build #73255 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73255/testReport)**
for PR 17021 at commit
[`367a681`](https://github.com/apache/spark/commit/36
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102378336
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -1143,6 +1144,24 @@ object ReplaceDistinctWithAggregate e
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16970#discussion_r102378293
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicLogicalOperators.scala
---
@@ -869,3 +869,12 @@ case object OneRowRelat
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16979
LGTM.
---
GitHub user zhengruifeng opened a pull request:
https://github.com/apache/spark/pull/17021
[SPARK-19694][ML] Add missing 'setTopicDistributionCol' for LDAModel
## What changes were proposed in this pull request?
Add missing 'setTopicDistributionCol' for LDAModel
## How was th
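The test section is truncated above. As a rough usage sketch of what the added
setter enables, assuming the standard Spark example dataset path and
illustrative column names, the topic-distribution output column could then be
renamed on the fitted model itself:

```scala
import org.apache.spark.ml.clustering.LDA
import org.apache.spark.sql.SparkSession

object LdaSetterSketch extends App {
  val spark = SparkSession.builder().master("local[*]").appName("lda-setter").getOrCreate()
  // Standard Spark example data; any DataFrame with a "features" vector column works.
  val data = spark.read.format("libsvm").load("data/mllib/sample_lda_libsvm_data.txt")

  val model = new LDA().setK(3).setMaxIter(5).fit(data)
  // The setter this PR adds: previously the column could only be set on the estimator.
  model.setTopicDistributionCol("topics")
  model.transform(data).select("topics").show(3, truncate = false)

  spark.stop()
}
```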
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17004
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16928
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16928
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73250/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16928
**[Test build #73250 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73250/testReport)**
for PR 16928 at commit
[`448e6fe`](https://github.com/apache/spark/commit/4
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17004
Thanks! Merging to master.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16499#discussion_r102377114
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -813,7 +813,14 @@ private[spark] class BlockManager(
fals
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16928
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73249/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16928
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17020
**[Test build #73254 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73254/testReport)**
for PR 17020 at commit
[`7948466`](https://github.com/apache/spark/commit/79
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16928
**[Test build #73249 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73249/testReport)**
for PR 16928 at commit
[`8e83522`](https://github.com/apache/spark/commit/8
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16499#discussion_r102376897
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -813,7 +813,14 @@ private[spark] class BlockManager(
fals
GitHub user wangyum opened a pull request:
https://github.com/apache/spark/pull/17020
[SPARK-19693][SQL] Make `SET mapreduce.job.reduces` automatically convert to
`spark.sql.shuffle.partitions`
## What changes were proposed in this pull request?
Make the `SET mapreduce.job.red
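The description is cut off, but the proposal amounts to a key translation when
the legacy MapReduce property arrives through a SET command. A hedged,
self-contained sketch of that mapping (the helper name is hypothetical, not the
PR's actual code path):

```scala
import org.apache.spark.sql.SparkSession

object SetCommandShim {
  private val LegacyKey = "mapreduce.job.reduces"
  private val SparkKey  = "spark.sql.shuffle.partitions"

  // Route the legacy Hive/MapReduce reducer count to Spark SQL's
  // shuffle-partition setting; pass every other key through unchanged.
  def applySet(spark: SparkSession, key: String, value: String): Unit = {
    val effectiveKey = if (key == LegacyKey) SparkKey else key
    spark.conf.set(effectiveKey, value)
  }
}

// Usage: after SetCommandShim.applySet(spark, "mapreduce.job.reduces", "16"),
// spark.conf.get("spark.sql.shuffle.partitions") returns "16".
```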
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16499#discussion_r102376170
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -1018,7 +1025,9 @@ private[spark] class BlockManager(
try {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17014
**[Test build #73253 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73253/testReport)**
for PR 17014 at commit
[`a3f3bb6`](https://github.com/apache/spark/commit/a3
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16970
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16970
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73247/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16970
**[Test build #73247 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73247/testReport)**
for PR 16970 at commit
[`78dfdfe`](https://github.com/apache/spark/commit/7
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73244/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16744
**[Test build #73244 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73244/testReport)**
for PR 16744 at commit
[`d15affb`](https://github.com/apache/spark/commit/d
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17014
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73252/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17014
**[Test build #73252 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73252/testReport)**
for PR 17014 at commit
[`b81eeb7`](https://github.com/apache/spark/commit/b
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17014
Merged build finished. Test FAILed.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102374176
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16744
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73245/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17014
**[Test build #73252 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73252/testReport)**
for PR 17014 at commit
[`b81eeb7`](https://github.com/apache/spark/commit/b8
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16946
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73242/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16946
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16744
**[Test build #73245 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73245/testReport)**
for PR 16744 at commit
[`b4bf3a8`](https://github.com/apache/spark/commit/b
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16946
**[Test build #73242 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73242/testReport)**
for PR 16946 at commit
[`5aef8eb`](https://github.com/apache/spark/commit/5
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16290#discussion_r102373053
--- Diff: R/pkg/R/sparkR.R ---
@@ -376,6 +377,12 @@ sparkR.session <- function(
overrideEnvs(sparkConfigMap, paramMap)
}
+ # NOT
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102372757
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class UnivocityP
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16988
how about we set the `numPartitions` when we build
`RepartitionByExpression`? the parser can also access the SQLConf.
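A hedged sketch of that suggestion (hypothetical helper, and the
RepartitionByExpression constructor differs across Spark versions): when the
SQL text gives no partition count, the plan can be built with the value
resolved from SQLConf up front.

```scala
import org.apache.spark.sql.catalyst.expressions.Expression
import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, RepartitionByExpression}
import org.apache.spark.sql.internal.SQLConf

object ParserRepartitionSketch {
  // Illustrative only: fall back to the session's shuffle-partition setting
  // when DISTRIBUTE BY / CLUSTER BY does not specify a partition count.
  def repartitionByExpr(
      exprs: Seq[Expression],
      child: LogicalPlan,
      userNumPartitions: Option[Int],
      conf: SQLConf): RepartitionByExpression = {
    val numPartitions = userNumPartitions.getOrElse(conf.numShufflePartitions)
    RepartitionByExpression(exprs, child, numPartitions)
  }
}
```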
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102372119
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16928
**[Test build #73251 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73251/testReport)**
for PR 16928 at commit
[`619094a`](https://github.com/apache/spark/commit/61
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16979#discussion_r102371334
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/HDFSMetadataLog.scala
---
@@ -63,8 +63,34 @@ class HDFSMetadataLog[T <: AnyRef :
C
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16944#discussion_r102371290
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -690,10 +696,10 @@ private[spark] class HiveExternalCatalog(c
Github user jinxing64 commented on the issue:
https://github.com/apache/spark/pull/16989
@squito
Thanks a lot for your comments :)
Yes, there must be a design doc for discussion. I will prepare one and post a
PDF to JIRA.
---
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/16949
cc @srowen and @vanzin also.
---
Github user uncleGen closed the pull request at:
https://github.com/apache/spark/pull/17011
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102370203
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference {
Github user tgravescs commented on the issue:
https://github.com/apache/spark/pull/16946
On vacation; back next Monday and will review.
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17011
Don't run things as root. Please close this PR.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17019
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17019
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73243/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17019
**[Test build #73243 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73243/testReport)**
for PR 17019 at commit
[`21006ff`](https://github.com/apache/spark/commit/2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16928
**[Test build #73250 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73250/testReport)**
for PR 16928 at commit
[`448e6fe`](https://github.com/apache/spark/commit/44
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16944#discussion_r102369921
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -181,7 +186,8 @@ case class CatalogTable(
vie
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16944#discussion_r102369793
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -181,7 +186,8 @@ case class CatalogTable(
vie
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17015
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/73246/
Test FAILed.
---
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102369500
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -96,31 +96,44 @@ class CSVFileFormat extends Tex
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17015
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17015
**[Test build #73246 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73246/testReport)**
for PR 17015 at commit
[`b61910e`](https://github.com/apache/spark/commit/b
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16928
**[Test build #73249 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73249/testReport)**
for PR 16928 at commit
[`8e83522`](https://github.com/apache/spark/commit/8e
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/17011
@srowen @vanzin I think the root cause is that I tested it as the root user, so
it is always readable no matter what the access permissions are. IMHO, it is OK
to add one extra access-permission check, as the code sc
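A hedged sketch of the kind of explicit permission check being discussed, which
inspects the POSIX permission bits directly instead of relying on whether the
current process (possibly root) can open the file; the helper name is
illustrative:

```scala
import java.nio.file.attribute.PosixFilePermission
import java.nio.file.{Files, Paths}

object PermissionCheck {
  // True when the "others" read bit is set, regardless of which user runs the
  // check; a root process can read the file either way, which is the trap
  // described in the comment above.
  def isWorldReadable(path: String): Boolean = {
    val perms = Files.getPosixFilePermissions(Paths.get(path))
    perms.contains(PosixFilePermission.OTHERS_READ)
  }
}
```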
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16944#discussion_r102369195
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -296,6 +296,21 @@ object SQLConf {
.longConf
.c
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/17014
@srowen @hhbyyh You are right. I will update this without breaking `train`.
Thanks for pointing it out!
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17013#discussion_r102368765
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/JavaTypeInference.scala
---
@@ -123,7 +123,11 @@ object JavaTypeInference {
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102367846
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -147,8 +165,6 @@ private[csv] class UnivocityP
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102367607
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -189,9 +205,10 @@ private[csv] class Univ
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102367601
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -147,8 +165,6 @@ private[csv] class Univoci
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102366629
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class Univo
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102366771
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class Univo
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102366908
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/CSVFileFormat.scala
---
@@ -96,31 +96,44 @@ class CSVFileFormat extend
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102366449
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -147,8 +165,6 @@ private[csv] class Univo
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17001
I'd like to treat this as a workaround; the location of the default database is
still invalid in cluster-
B.
We can make this logic clearer and more consistent: the default database
should
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16744
**[Test build #73248 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/73248/testReport)**
for PR 16744 at commit
[`da18da0`](https://github.com/apache/spark/commit/da
Github user Gauravshah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16842#discussion_r102366702
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -204,10 +208,11 @@ class KinesisSequ
Github user Gauravshah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16842#discussion_r102366680
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisBackedBlockRDD.scala
---
@@ -36,7 +36,8 @@ import org.apache.spa
Github user budde commented on a diff in the pull request:
https://github.com/apache/spark/pull/16744#discussion_r102366429
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SerializableCredentialsProvider.scala
---
@@ -0,0 +1,85 @@
+/*
+ * L
Github user wesm commented on the issue:
https://github.com/apache/spark/pull/15821
The 0.2 Maven artifacts have been posted. I'll try to update the
conda-forge packages this week -- if anyone can help with conda-forge
maintenance that would be a big help.
Thanks!
---
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/16744#discussion_r102366089
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SerializableCredentialsProvider.scala
---
@@ -0,0 +1,85 @@
+/*
+ *
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/16744#discussion_r102366055
--- Diff:
external/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/SerializableCredentialsProvider.scala
---
@@ -0,0 +1,85 @@
+/*
+ *
Github user wangyum commented on a diff in the pull request:
https://github.com/apache/spark/pull/16819#discussion_r102365927
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1193,6 +1189,37 @@ private[spark] class Client(
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16785#discussion_r102364604
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/LogicalPlan.scala
---
@@ -314,7 +314,17 @@ abstract class UnaryNode extend
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16938
> CREATE TABLE ... (PARTITIONED BY ...) LOCATION path
I think Hive's behavior makes more sense. Users may want to insert data into
this table and put the data in a specified location, even if it
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/16990
@srowen @felixcheung
The SQL query is related to the file name, see:
https://github.com/apache/spark/blob/v2.1.0/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveComparisonTes
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/16998
@sameeragarwal That's correct.
> By the way, as an aside we should probably allow constraint
inference/propagation to be turned off via a conf flag to provide a quick work
around against the
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102364205
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class Univoci
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16928#discussion_r102364069
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/csv/UnivocityParser.scala
---
@@ -45,24 +46,41 @@ private[csv] class Univoci
Github user sameeragarwal commented on the issue:
https://github.com/apache/spark/pull/16998
By the way, as an aside, we should probably allow constraint
inference/propagation to be turned off via a conf flag to provide a quick
workaround for these kinds of problems.
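A hedged sketch of such an escape hatch: a boolean session flag that an
optimizer rule could consult before inferring constraints. The key name is
hypothetical; no such config is confirmed by this thread.

```scala
import org.apache.spark.sql.SparkSession

object ConstraintPropagationFlag {
  // Hypothetical key, shown only to illustrate the suggestion above.
  val Key = "spark.sql.constraintPropagation.enabled"

  def enabled(spark: SparkSession): Boolean =
    spark.conf.get(Key, "true").toBoolean
}

// Usage: spark.conf.set(ConstraintPropagationFlag.Key, "false") would give a
// quick workaround when constraint inference itself causes problems.
```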
---