Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13444#discussion_r66682974
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -66,30 +66,13 @@ class ListingFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13542#discussion_r66678717
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/CliSuite.scala
---
@@ -91,6 +91,8 @@ class CliSuite extends
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/12313
@rdblue Thank you for updating the patch. I was out of town late last week
and was busy with Spark Summit early this week. Sorry for my late reply. Having
name-based resolution is very useful! Since
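The name-based resolution discussed in that thread can be illustrated with a small sketch. This is only a toy model of the idea (matching incoming columns to the target schema by name rather than by position), not Spark's actual API; the function and data names are hypothetical.

```python
def resolve_by_name(target_columns, incoming_row):
    """Match incoming values to the target schema by column name,
    not by position (illustrative sketch, not Spark's API)."""
    return [incoming_row[name] for name in target_columns]

# Columns arrive in a different order than the target table declares them;
# name-based resolution still lines them up correctly.
target = ["id", "name", "score"]
incoming = {"name": "a", "score": 1.0, "id": 7}
print(resolve_by_name(target, incoming))  # [7, 'a', 1.0]
```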
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66563299
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -58,15 +60,39 @@ class SQLQuerySuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66563267
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -855,7 +855,8 @@ class SessionCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13413#discussion_r66563161
--- Diff: python/pyspark/sql/tests.py ---
@@ -1481,17 +1481,7 @@ def test_list_functions(self):
spark.sql("CREATE DATABASE so
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13371
@viirya I took a look at parquet's code. It seems parquet only evaluates
row-group-level filters when generating splits
(https://github.com/apache/parquet-mr/blob/apache-parquet-1.7.0/parquet-hadoop/src
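The row-group-level filtering mentioned here amounts to min/max statistics pruning: a row group is skipped when its column statistics cannot satisfy the predicate. The sketch below is an illustrative model only, not parquet-mr's actual API; the row-group data is made up.

```python
# Each "row group" carries min/max statistics for one column, the way
# Parquet footers do (illustrative model, not parquet-mr's actual API).
row_groups = [
    {"min": 0,  "max": 9,  "path": "part-0"},
    {"min": 10, "max": 19, "path": "part-1"},
    {"min": 20, "max": 29, "path": "part-2"},
]

def prune_row_groups(groups, lo, hi):
    """Keep only row groups whose [min, max] range can contain a value
    in [lo, hi]; the rest can be skipped when generating splits."""
    return [g for g in groups if g["max"] >= lo and g["min"] <= hi]

print([g["path"] for g in prune_row_groups(row_groups, 12, 14)])  # ['part-1']
```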
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13573
I am going to trigger a snapshot build.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
mvn clean install -DskipTests=true` when
`JAVA_7_HOME` was set. Also manually inspected the effective POM diff to verify
that the final POM changes were scoped correctly:
https://gist.github.com/JoshRosen/f889d1c236fad14fa25ac4be01654653
/cc vanzin and yhuai for review.
Author: Josh Rosen <
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13573
Merging to master and branch 2.0.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13573
lgtm
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13549
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13573
test this please
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13189
Seems it is fine to not have metrics when we use hiveResultString.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13534#discussion_r66171754
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/DataTypeParserSuite.scala
---
@@ -133,4 +133,8 @@ class
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/13450
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13444#discussion_r65823862
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -409,13 +409,24 @@ private[sql] object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13444#discussion_r65823818
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ListingFileCatalog.scala
---
@@ -75,7 +75,7 @@ class ListingFileCatalog
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13270
LGTM. @liancheng Can you merge this?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13455
lgtm pending jenkins.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13290#discussion_r65450769
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1448,6 +1450,38 @@ class Analyzer
Repository: spark
Updated Branches:
refs/heads/branch-2.0 46d5f7f38 -> 44052a707
[SPARK-15596][SPARK-15635][SQL] ALTER TABLE RENAME fixes
## What changes were proposed in this pull request?
**SPARK-15596**: Even after we renamed a cached table, the plan would remain in
the cache with the
Repository: spark
Updated Branches:
refs/heads/master 5b08ee639 -> 9e2643b21
[SPARK-15596][SPARK-15635][SQL] ALTER TABLE RENAME fixes
## What changes were proposed in this pull request?
**SPARK-15596**: Even after we renamed a cached table, the plan would remain in
the cache with the old
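The SPARK-15596 interaction above can be modeled with a toy name-keyed plan cache, where a rename must re-key the cached entry. This is only a sketch of the idea, not Spark's actual CacheManager; all names here are hypothetical.

```python
class PlanCache:
    """Toy name-keyed plan cache; a sketch of the SPARK-15596 idea,
    not Spark's actual CacheManager."""
    def __init__(self):
        self._plans = {}

    def cache(self, name, plan):
        self._plans[name] = plan

    def lookup(self, name):
        return self._plans.get(name)

    def rename(self, old, new):
        # The fix: re-key the cached plan on rename. Without this, the
        # entry stays under the old name, so lookups by the new name
        # miss and the stale entry lingers in the cache.
        if old in self._plans:
            self._plans[new] = self._plans.pop(old)

c = PlanCache()
c.cache("t", "scan(t)")
c.rename("t", "t2")
print(c.lookup("t2"))  # scan(t)
print(c.lookup("t"))   # None
```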
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13416
merging to master and branch 2.0
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13416
lgtm
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/13445
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13450
@rdblue @liancheng Can you review this PR? This is for branch 2.0.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13450
[SPARK-9876] [BRANCH-2.0] Revert "[SPARK-9876][SQL] Update Parquet to
1.8.1."
## What changes were proposed in this pull request?
Since we are pretty late in the 2.0 rel
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13445
I am closing this for now.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13445
OK. Let me create another PR for branch 2.0. We will merge that one first.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13445
@rdblue How about we merge this to master and branch 2.0? Feel free to open
your PR again. We can figure out the perf thing with @liancheng together.
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/10793
close this for now?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13216
close this for now?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13029
close this for now?
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13445
btw, we observed an error when filter pushdown is enabled. Unfortunately,
we missed the exception...
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13445
@rdblue Since the 2.0 branch has been cut, I am a little bit concerned about
potential merge conflicts when we cherry-pick bug fixes into 2.0 branch before
the release if we do not revert it from
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13445
[SPARK-9876] Revert "[SPARK-9876][SQL] Update Parquet to 1.8.1."
## What changes were proposed in this pull request?
Since we are pretty late in the 2.0 release cycle, it is
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13280
Hello @rdblue, we are pretty late in this release cycle. I am afraid that
we cannot actually upgrade Parquet to 1.8.1 because of the following two
reasons:
1. Since this change was merged
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65306005
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,39 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65305666
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,39 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65305383
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionState.scala ---
@@ -139,22 +139,6 @@ private[hive] class HiveSessionState(sparkSession
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65305344
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveMetastoreCatalog.scala ---
@@ -447,52 +447,20 @@ private[hive] class
HiveMetastoreCatalog
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13371
It is a good idea to add it if parquet supports it (I have an impression
that parquet does not support it. But maybe I am wrong). I think having
benchmark results is a good practice, so we can
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13371#discussion_r65302925
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -344,6 +344,11 @@ private[sql] class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13371#discussion_r65302899
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -344,6 +344,11 @@ private[sql] class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12728#discussion_r65302512
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -114,13 +110,15 @@ case class InsertIntoHiveTable
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13371
Can you provide a test case that shows the problem? Also, can you provide
benchmark results of the performance benefit?
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13371#discussion_r65301654
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -344,6 +344,11 @@ private[sql] class
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13371#discussion_r65301661
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -578,62 +583,6 @@ private[sql] object
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13306
This PR makes the behavior of drop and withColumn consistent. Let's decide
what we do for backticks in a separate JIRA.
Repository: spark
Updated Branches:
refs/heads/branch-2.0 7f240eaee -> b8de4ad7d
[SPARK-12988][SQL] Can't drop top level columns that contain dots
## What changes were proposed in this pull request?
Fixes "Can't drop top level columns that contain dots".
This work is based on dilipbiswal's
Repository: spark
Updated Branches:
refs/heads/master 0f2471346 -> 06514d689
[SPARK-12988][SQL] Can't drop top level columns that contain dots
## What changes were proposed in this pull request?
Fixes "Can't drop top level columns that contain dots".
This work is based on dilipbiswal's
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13306
LGTM merging to master and branch 2.0.
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13386#discussion_r65265253
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,39 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13270#discussion_r65263898
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -147,7 +152,41 @@ private[spark] class HiveExternalCatalog(client
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13270#discussion_r65263694
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala
---
@@ -368,14 +371,27 @@ private[hive] class HiveClientImpl
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13290#discussion_r65263136
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1448,6 +1450,38 @@ class Analyzer
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13395
Chatted with @andrewor14. Since
https://github.com/apache/spark/pull/13386/files will fix the location handling
when convertCTAS is true, probably it is not really needed to ban EXTERNAL
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/13395
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13270#discussion_r65253802
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -68,12 +72,13 @@ private[spark] class HiveExternalCatalog(client
R81)
and `test-only *ReplSuite -- -z "SPARK-2576 importing implicits"` still passes
the test (without the change in `CodeGenerator`, this test does not pass with
the change in `ExecutorClassLoader`).
Author: Yin Huai <yh...@databricks.com>
Closes #13366 from yhuai/SPARK-15622.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13366
Thanks. Merging to master and branch 2.0.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13366
Yea. `SPARK-2576 importing implicits` in REPL suite is a good test. Without
the fix, ExecutorClassLoader throws ClassNotFoundException with those weird
class names.
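The SPARK-15622 fix concerns classloader parent delegation: a loader consults its parent before its own class table, so wrapping the wrong parent breaks lookups of REPL-generated class names. The sketch below is a language-agnostic toy model of that delegation, not the actual JVM/Janino/ParentClassLoader classes; all names are hypothetical.

```python
class Loader:
    """Toy delegating class loader: try the parent first, then the local
    table (a sketch of JVM-style parent delegation, not Janino's API)."""
    def __init__(self, classes, parent=None):
        self.classes = classes
        self.parent = parent

    def load(self, name):
        # Parent-first delegation, as the JVM does by default.
        if self.parent is not None:
            try:
                return self.parent.load(name)
            except KeyError:
                pass
        if name in self.classes:
            return self.classes[name]
        raise KeyError(name)  # analogous to ClassNotFoundException

boot = Loader({"java.lang.String": "<boot>"})
# REPL-generated classes have "weird" synthetic names; they resolve only
# if the child loader is consulted after the parent misses.
repl = Loader({"$line1.$read$$iw": "<repl>"}, parent=boot)
print(repl.load("$line1.$read$$iw"))  # <repl>
```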
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13269
Are we going to break this PR to multiple smaller PRs?
Repository: spark
Updated Branches:
refs/heads/branch-2.0 6347ff512 -> 29b94fdb3
[SPARK-15658][SQL] UDT serializer should declare its data type as udt instead
of udt.sqlType
## What changes were proposed in this pull request?
When we build serializer for UDT object, we should declare its
Repository: spark
Updated Branches:
refs/heads/master d67c82e4b -> 2bfed1a0c
[SPARK-15658][SQL] UDT serializer should declare its data type as udt instead
of udt.sqlType
## What changes were proposed in this pull request?
When we build serializer for UDT object, we should declare its data
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13402
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13386
test this please
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13395
@gatorsmile Thanks. For partitioned by, it is in
https://github.com/apache/spark/pull/13386. For clustered by, it seems we do
have a test case in HiveDDLCommandSuite.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13366
test this please
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13290
test this please
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12313#issuecomment-222566966
@rdblue How about we use a separate PR for the work of adding
`insertByNameInto`? It will be easier to review and the discussion on the API
name/semantic will not block
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13395#discussion_r65013574
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkSqlParser.scala ---
@@ -936,7 +936,47 @@ class SparkSqlAstBuilder(conf: SQLConf) extends
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13395#issuecomment-222376388
test this please
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13395
[SPARK-14507] [SQL] EXTERNAL keyword in a CTAS statement is not allowed
## What changes were proposed in this pull request?
This PR makes the parser throw an exception if a Hive-style CTAS
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13386#issuecomment-222375080
OK. External related changes will be handled by another PR.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13386#issuecomment-222343075
test this please
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13386#issuecomment-222334796
@ericl @andrewor14 @liancheng Can you review this PR?
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13386
[SPARK-14507] [SPARK-15646] [SQL] When spark.sql.hive.convertCTAS is true,
we should not convert the table stored as TEXTFILE/SEQUENCEFILE and we need
respect the user-defined location
## What
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a2f68ded2 -> f3570bcea
[SPARK-15636][SQL] Make aggregate expressions more concise in explain
## What changes were proposed in this pull request?
This patch reduces the verbosity of aggregate expressions in explain (but does
not
Repository: spark
Updated Branches:
refs/heads/master 74c1b79f3 -> 472f16181
[SPARK-15636][SQL] Make aggregate expressions more concise in explain
## What changes were proposed in this pull request?
This patch reduces the verbosity of aggregate expressions in explain (but does
not actually
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13367#issuecomment-222329931
lgtm. Merging to master and branch 2.0.
Repository: spark
Updated Branches:
refs/heads/master 776d183c8 -> 4a2fb8b87
[SPARK-15594][SQL] ALTER TABLE SERDEPROPERTIES does not respect partition spec
## What changes were proposed in this pull request?
These commands ignore the partition spec and change the storage properties of
the
Repository: spark
Updated Branches:
refs/heads/branch-2.0 dc6e94157 -> 80a40e8e2
[SPARK-15594][SQL] ALTER TABLE SERDEPROPERTIES does not respect partition spec
## What changes were proposed in this pull request?
These commands ignore the partition spec and change the storage properties of
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13343#issuecomment-79262
lgtm. Merging to master and branch 2.0.
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/13366
[SPARK-15622] [SQL] Wrap the parent classloader of Janino's classloader in
the ParentClassLoader.
## What changes were proposed in this pull request?
At
https://github.com/aunkrig/janino/blob
Repository: spark
Updated Branches:
refs/heads/master 21b2605dc -> 019afd9c7
[SPARK-15431][SQL][BRANCH-2.0-TEST] rework the clisuite test cases
## What changes were proposed in this pull request?
This PR reworks the CliSuite test cases for `LIST FILES/JARS` commands.
CC yhuai Tha
Repository: spark
Updated Branches:
refs/heads/branch-2.0 dcf498e8a -> 9c137b2e3
[SPARK-15431][SQL][BRANCH-2.0-TEST] rework the clisuite test cases
## What changes were proposed in this pull request?
This PR reworks the CliSuite test cases for `LIST FILES/JARS` commands.
CC yhuai Tha
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13361#issuecomment-52079
LGTM. Merging to master and branch 2.0.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13361#issuecomment-40500
Thanks for the fix. What was the problem?
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12313#issuecomment-25637
@rdblue Thank you for your reply. For #2, yea, I feel it is better to be
strict right now. I checked with yesterday's master and it seems we already
require the data
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13361#issuecomment-09770
add to whitelist
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13269#discussion_r64940498
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/encoders/ExpressionEncoder.scala
---
@@ -191,6 +189,26 @@ case class ExpressionEncoder[T
user.dir")/spark-warehouse`. Since
`System.getProperty("user.dir")` is a local dir, we should explicitly set the
scheme to local filesystem.
cc yhuai
How was this patch tested?
Added two test cases
Author: gatorsmile <gatorsm...@gmail.com>
Closes #13348 from gatorsmile/ad
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13348#issuecomment-222197726
Thanks. Merging to master and branch 2.0.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13276#issuecomment-222193363
let me know when you have the PR. I will add you to the whitelist.
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/13276#issuecomment-222185524
@xwu0226 Please open another PR to re-enable these tests and ask jenkins PR
builder to test maven.