Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14014
Let's also update the description.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71277147
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -442,13 +445,23 @@ private[parquet
Repository: spark
Updated Branches:
refs/heads/branch-2.0 24ea87519 -> ef2a6f131
[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update
## What changes were proposed in this pull request?
This PR moves one and the last hard-coded Scala example snippet from the SQL
programming guide
Repository: spark
Updated Branches:
refs/heads/master e5fbb182c -> 1426a0805
[SPARK-16303][DOCS][EXAMPLES] Minor Scala/Java example update
## What changes were proposed in this pull request?
This PR moves one and the last hard-coded Scala example snippet from the SQL
programming guide into
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r71276489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRecordMaterializer.scala
---
@@ -30,10 +30,11 @@ import
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
Thanks. Merging to master and branch 2.0.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71273081
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272934
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -303,6 +303,7 @@ object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272434
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -313,18 +313,48 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14155#discussion_r71272290
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -146,6 +151,15 @@ case class CatalogTable
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14036
@techaddict Can you test the performance with and without your change?
ark, but would fail now.
## How was this patch tested?
added a test case in SQLQuerySuite.
Closes #14169
Author: Daoyuan Wang <daoyuan.w...@intel.com>
Author: Yin Huai <yh...@databricks.com>
Closes #14249 from yhuai/scriptTransformation.
(cherry pic
ark, but would fail now.
## How was this patch tested?
added a test case in SQLQuerySuite.
Closes #14169
Author: Daoyuan Wang <daoyuan.w...@intel.com>
Author: Yin Huai <yh...@databricks.com>
Closes #14249 from yhuai/scriptTransformation.
Project: http://git-wip-us.apache.org/repos
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14249
I am merging this PR to master and branch 2.0.
Thanks @adrian-wang
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14249#discussion_r71227856
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1332,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
Merged to master.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14249
[SPARK-16515][SQL] set default record reader and writer for script
transformation
## What changes were proposed in this pull request?
In ScriptInputOutputSchema, we read default RecordReader
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71192358
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1306,7 +1306,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Repository: spark
Updated Branches:
refs/heads/master 8ea3f4eae -> 2877f1a52
[SPARK-16351][SQL] Avoid per-record type dispatch in JSON when writing
## What changes were proposed in this pull request?
Currently, `JacksonGenerator.apply` is doing type-based dispatch for each row
to write
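The optimization named in this commit title, avoiding per-record type dispatch, can be illustrated outside Spark. The following is a minimal, hypothetical Java sketch (not Spark's actual `JacksonGenerator` code; names and types are invented for illustration): the type match happens once per schema field when the writers are built, so the hot per-row loop does no type dispatch at all.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Illustrative sketch: compile one writer function per field from the schema,
// then reuse those writers for every record instead of matching on the value
// type inside the per-record loop.
class WriterDispatch {
    interface ValueWriter extends Function<Object, String> {}

    // Type dispatch happens here, once per field at "schema compile" time.
    static ValueWriter makeWriter(String dataType) {
        switch (dataType) {
            case "int":    return v -> Integer.toString((Integer) v);
            case "string": return v -> "\"" + v + "\"";
            case "bool":   return v -> Boolean.toString((Boolean) v);
            default: throw new IllegalArgumentException("unsupported: " + dataType);
        }
    }

    // The per-record path only invokes the precomputed writers.
    static String writeRow(List<ValueWriter> writers, Object[] row) {
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < row.length; i++) {
            if (i > 0) sb.append(",");
            sb.append(writers.get(i).apply(row[i]));
        }
        return sb.append("]").toString();
    }
}
```

The design point is that the `switch` runs O(fields) times total rather than O(fields × records) times, which is the saving the commit message describes.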
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14245
LGTM. Can we reuse an existing JIRA number?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71102534
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1340,10 +1340,17 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71097210
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096802
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096761
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096584
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096571
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096401
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096388
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71096347
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonParser.scala
---
@@ -35,184 +34,306 @@ import
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14102#discussion_r71095725
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JSONOptions.scala
---
@@ -51,7 +53,8 @@ private[sql] class JSONOptions
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14028
LGTM pending jenkins.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14169#discussion_r71058385
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -1329,7 +1329,7 @@ class SparkSqlAstBuilder(conf: SQLConf
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14139
## How was this patch tested?
Manually tested.
**Note: This is a backport of https://github.com/apache/spark/pull/13987**
Author: Yin Huai <yh...@databricks.com>
Closes #14139 from yhuai/SPARK-16313-branch-1.6.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Thank you! I am merging this PR to branch 1.6.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70843685
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,20 @@ private[hive] class HiveMetastoreCatalog(val
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
@rxin I think this version is the minimal change. Since the partition
discovery logic is inside HadoopFsRelation in 1.6 and the refresh is triggered
by using a lazy val, passing a flag down
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14139#discussion_r70727924
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -273,6 +273,22 @@ private[hive] class HiveMetastoreCatalog(val
Repository: spark
Updated Branches:
refs/heads/branch-2.0 9e3a59858 -> 550d0e7dc
[SPARK-16482][SQL] Describe Table Command for Tables Requiring Runtime Inferred
Schema
## What changes were proposed in this pull request?
If we create a table pointing to a parquet/json datasets without
Repository: spark
Updated Branches:
refs/heads/master fb2e8eeb0 -> c5ec87982
[SPARK-16482][SQL] Describe Table Command for Tables Requiring Runtime Inferred
Schema
## What changes were proposed in this pull request?
If we create a table pointing to a parquet/json datasets without
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14148
LGTM. Merging to master and branch 2.0
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70571914
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -413,38 +413,36 @@ case class DescribeTableCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570551
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/createDataSourceTables.scala
---
@@ -105,7 +105,7 @@ case class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14148#discussion_r70570489
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -431,7 +431,7 @@ case class DescribeTableCommand(table
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13701
@viirya Thank you for updating this. Our schedules are pretty packed for
the release. We can take a look at it once 2.0 is released.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Let me take another look to see if there is a better change.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
cc @marmbrus
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Repository: spark
Updated Branches:
refs/heads/master 9cc74f95e -> b1e5281c5
[SPARK-12639][SQL] Mark Filters Fully Handled By Sources with *
## What changes were proposed in this pull request?
In order to make it clear which filters are fully handled by the
underlying datasource we will mark
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
lgtm. Merging to master.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
ok to test
Repository: spark
Updated Branches:
refs/heads/master 7f38b9d5f -> b4fbe140b
[SPARK-16349][SQL] Fall back to isolated class loader when classes not found.
Some Hadoop classes needed by the Hive metastore client jars are not present
in Spark's packaging (for example,
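The fallback described in this commit title is a classic classloader technique. Below is a minimal, illustrative Java sketch of the pattern only (names are hypothetical; Spark's actual `IsolatedClientLoader` is considerably more involved): try the primary loader first, and on `ClassNotFoundException` retry with a second loader.

```java
// Illustrative fallback classloader: attempts the primary loader, and falls
// back to a secondary loader when a class is not found (e.g. Hadoop classes
// missing from one classpath but present on another).
class FallbackClassLoader extends ClassLoader {
    private final ClassLoader primary;
    private final ClassLoader fallback;

    FallbackClassLoader(ClassLoader primary, ClassLoader fallback) {
        super(null); // no implicit parent delegation; we control the order
        this.primary = primary;
        this.fallback = fallback;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            return primary.loadClass(name);
        } catch (ClassNotFoundException e) {
            // Not visible to the primary loader -> retry with the fallback.
            return fallback.loadClass(name);
        }
    }
}
```

The ordering decision (which loader wins) is the whole design question in threads like this one: it determines whether an application's copy of a class shadows the isolated one or vice versa.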
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
lgtm. Merging to master
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14020#discussion_r70337582
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -220,9 +220,15 @@ private[hive] class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
also cc @marmbrus
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14020#discussion_r70335850
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/IsolatedClientLoader.scala
---
@@ -220,9 +220,15 @@ private[hive] class
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14020
Will putting that jar in Spark's classpath work? Seems so?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13973
@srowen Seems this commit breaks 1.6 builds
(https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%20Test%20(Dashboard)/job/spark-branch-1.6-test-sbt-hadoop-1.0/248/)?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
Let me see if we can have a flag to determine if we want to swallow the FNF
(like what https://github.com/apache/spark/pull/13987/files does).
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14139
I think there will be one warning when we create a table. Or maybe there is
no warning during table creation because the refresh is called lazily.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14139
[SPARK-16313][SQL][BRANCH-1.6] Spark should not silently drop exceptions in
file listing
## What changes were proposed in this pull request?
Spark silently drops exceptions during file listing
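The behavior under discussion, surfacing file-listing errors instead of swallowing them, optionally gated by a flag, can be sketched in a few lines. This is a hypothetical illustration only (the method name, flag, and stand-in filesystem check are invented; it is not Spark's actual listing code):

```java
import java.io.FileNotFoundException;
import java.util.Collections;
import java.util.List;

// Illustrative sketch: listing failures are rethrown by default rather than
// silently dropped; callers must opt in explicitly to ignore missing files.
class FileLister {
    static List<String> listLeafFiles(String path, boolean ignoreMissingFiles)
            throws FileNotFoundException {
        boolean exists = false; // stand-in for a real filesystem check
        if (!exists) {
            if (ignoreMissingFiles) {
                // Opt-in behavior: continue with an empty result.
                return Collections.emptyList();
            }
            // Default behavior: surface the error to the caller.
            throw new FileNotFoundException(path);
        }
        return Collections.emptyList();
    }
}
```

The trade-off debated in the thread is exactly this flag: failing fast catches real mistakes, while the opt-in path tolerates files deleted between listing and reading.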
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13991
OK. Thanks. Then, it will be good to add more tests for cases that are not
covered by those hive tests.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13991
As a follow-up task, can you take a look at the following query files and
add useful tests to your test suite? Thanks.
```
.//sql/hive/src/test/resources/ql/src/test/queries/clientpositive
```
ted by
release-build.sh.
Author: Yin Huai <yh...@databricks.com>
Closes #14108 from yhuai/SPARK-16453.
(cherry picked from commit 60ba436b7010436c77dfe5219a9662accc25bffa)
Signed-off-by: Yin Huai <yh...@databricks.com>
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
ase-build.sh.
Author: Yin Huai <yh...@databricks.com>
Closes #14108 from yhuai/SPARK-16453.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/60ba436b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/60ba436b
Diff: h
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
Thanks. Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
@srowen Does it look good?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144675
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144693
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14108#discussion_r70144390
--- Diff: dev/create-release/release-build.sh ---
@@ -258,7 +258,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
-Phive
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14108
cc @JoshRosen @rxin
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14108
[SPARK-16453] [BUILD] release-build.sh is missing hive-thriftserver for
scala 2.10
## What changes were proposed in this pull request?
This PR adds hive-thriftserver profile to scala 2.10 build
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030627
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030569
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030381
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70030343
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029947
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029907
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14014#discussion_r70029843
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -482,13 +482,105 @@ private[parquet
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala
---
@@ -17,74 +17,180 @@
package
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936226
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala
---
@@ -17,74 +17,180 @@
package
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/14028#discussion_r69936163
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/json/JacksonGenerator.scala
---
@@ -17,74 +17,180 @@
package
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/14064
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13890#discussion_r69825251
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ExistingRDD.scala ---
@@ -74,13 +74,71 @@ object RDDConversions
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/14064
@mallman this backports your fix to branch 2.0.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14064
[SPARK-15968][SQL] Nonempty partitioned metastore tables are not cached
This PR backports your fix (https://github.com/apache/spark/pull/13818) to
branch 2.0.
This PR addresses
[SPARK
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69679529
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystConf.scala ---
@@ -51,6 +52,7 @@ case class SimpleCatalystConf
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13494#discussion_r69679372
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -258,6 +258,11 @@ object SQLConf {
.booleanConf
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/11317
@RussellSpitzer Sorry, I missed the last update. Would you please
update the PR? I will review it and get it merged when it passes all tests.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13542
I have opened https://github.com/apache/spark/pull/14058/files (it has one
update).
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/14058
[SPARK-15730][SQL] Respect the --hiveconf in the spark-sql command line
## What changes were proposed in this pull request?
This PR makes spark-sql (backed by SparkSQLCLIDriver) respects confs
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13818
I have a few questions.
1. Is it a regression from 1.6? It looks like it is not.
2. Is it a correctness issue or a performance issue? It seems to be a
performance issue.
3
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13818
test this please