Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12716#issuecomment-216984438
ok to test
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12081#issuecomment-216979776
@gatorsmile Thank you for updating this. Can you address my comments? Then,
let's get it in!
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62101534
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -365,4 +381,113 @@ class HiveDDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62101477
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -365,4 +381,113 @@ class HiveDDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62100995
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveDDLSuite.scala
---
@@ -20,21 +20,37 @@ package org.apache.spark.sql.hive.execution
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62100913
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -57,7 +60,7 @@ case class CreateDatabase(
CatalogDatabase
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62100306
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -95,49 +95,85 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12081#discussion_r62100086
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -57,7 +60,7 @@ case class CreateDatabase(
CatalogDatabase
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r62079732
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/ShowCreateTableSuite.scala ---
@@ -0,0 +1,141 @@
+/*
+ * Licensed to the Apache Software
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r62079118
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -452,3 +455,241 @@ case class ShowTablePropertiesCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r62078936
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -452,3 +455,241 @@ case class ShowTablePropertiesCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r62076855
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -45,7 +45,9 @@ statement
| ALTER DATABASE identifier
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12890#discussion_r62076215
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -71,35 +71,32 @@ object Main extends Logging
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12890#discussion_r62076169
--- Diff: repl/scala-2.11/src/main/scala/org/apache/spark/repl/Main.scala
---
@@ -71,35 +71,32 @@ object Main extends Logging
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12899#issuecomment-216932538
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12890#issuecomment-216930532
Is the problem that `val sparkContext = SparkContext.getOrCreate(sparkConf)` will give us a `sparkContext` that was already created by the repl and its conf does
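The gist of the question can be sketched outside Spark: a `getOrCreate`-style factory returns the already-created singleton and silently ignores the configuration passed on later calls. A minimal, illustrative Python sketch (not Spark's actual code; class and key names are hypothetical):

```python
class Context:
    """Toy stand-in for SparkContext, to illustrate getOrCreate semantics."""
    _active = None  # process-wide singleton, like the active SparkContext

    def __init__(self, conf):
        self.conf = dict(conf)

    @classmethod
    def get_or_create(cls, conf):
        # If a context already exists (e.g. created by the REPL),
        # it is returned as-is and the conf passed here is ignored.
        if cls._active is None:
            cls._active = cls(conf)
        return cls._active

repl_ctx = Context.get_or_create({"app.name": "repl"})
later_ctx = Context.get_or_create({"app.name": "mine", "x": "1"})
assert later_ctx is repl_ctx
assert "x" not in later_ctx.conf  # the second conf never took effect
```

Under this pattern, any settings the second caller tries to supply are lost, which is exactly the failure mode the comment is asking about.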
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61976246
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,143 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61976162
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -337,7 +337,34 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61972552
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,143 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61972362
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,143 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61972415
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/sources/HadoopFsRelationTest.scala
---
@@ -486,7 +488,143 @@ abstract class HadoopFsRelationTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61971847
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -337,7 +337,34 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61971061
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -423,23 +423,34 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61969829
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -423,23 +423,34 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61969371
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -423,23 +423,34 @@ class HDFSFileCatalog
Repository: spark
Updated Branches:
refs/heads/branch-2.0 a7e8cfa64 -> 52308103e
[SPARK-13749][SQL][FOLLOW-UP] Faster pivot implementation for many distinct
values with two phase aggregation
## What changes were proposed in this pull request?
This is a follow up PR for #11583. It makes 3
Repository: spark
Updated Branches:
refs/heads/master bb9ab56b9 -> d8f528ceb
[SPARK-13749][SQL][FOLLOW-UP] Faster pivot implementation for many distinct
values with two phase aggregation
## What changes were proposed in this pull request?
This is a follow up PR for #11583. It makes 3 lazy
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12861#issuecomment-216443743
Thanks. Merging to master.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12861#issuecomment-216432500
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61836088
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -443,6 +453,22 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61833537
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -337,7 +341,13 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61832023
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -184,8 +184,10 @@ private[sql] object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61831035
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -443,6 +453,22 @@ class HDFSFileCatalog
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12856#discussion_r61830736
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/fileSourceInterfaces.scala
---
@@ -291,8 +291,12 @@ class HDFSFileCatalog
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12828#issuecomment-216412618
@gatorsmile When we call PartitioningUtils.parsePartitions, we should
provide a `Seq[Path]` representing leaf dirs, right? This problem is
caused by the fact that we
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12828#discussion_r61830374
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningUtils.scala
---
@@ -184,8 +184,10 @@ private[sql] object
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10428#issuecomment-216335269
@dilipbiswal I think
https://github.com/dongjoon-hyun/spark/commit/a7ce473bd0520c71154ed028f295dab64a7485fe
has fixed the issue. Can you close this PR? Thanks
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10486#issuecomment-216335124
@wilson8 I think
https://github.com/dongjoon-hyun/spark/commit/a7ce473bd0520c71154ed028f295dab64a7485fe
has resolved this issue. Can you close this PR? Thanks
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10460#issuecomment-216335195
@huaxingao I think
https://github.com/dongjoon-hyun/spark/commit/a7ce473bd0520c71154ed028f295dab64a7485fe
has fixed this issue. Can you close this PR? Thanks
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/10437#issuecomment-216334904
@xguo27 I think
https://github.com/dongjoon-hyun/spark/commit/a7ce473bd0520c71154ed028f295dab64a7485fe
has fixed this issue. Can you close this PR?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r61779292
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -389,3 +392,238 @@ case class ShowTablePropertiesCommand(table
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r61779220
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -389,3 +392,238 @@ case class ShowTablePropertiesCommand(table
Repository: spark
Updated Branches:
refs/heads/branch-2.0 eb7336a75 -> 08ae32e61
[SPARK-13749][SQL] Faster pivot implementation for many distinct values with
two phase aggregation
## What changes were proposed in this pull request?
The existing implementation of pivot translates into a
Repository: spark
Updated Branches:
refs/heads/master 0a3026990 -> 992744186
[SPARK-13749][SQL] Faster pivot implementation for many distinct values with
two phase aggregation
## What changes were proposed in this pull request?
The existing implementation of pivot translates into a single
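The two-phase idea behind SPARK-13749 can be sketched in plain Python (a conceptual illustration, not Spark's actual `PivotFirst` code): the first phase folds each row's pivot value into a fixed-width array per group, and the second phase merges partial arrays element-wise, so the aggregation no longer needs one separate aggregate expression per distinct pivot value.

```python
# Toy two-phase pivot: sum of `value`, pivoted on `course`, grouped by `year`.
pivot_values = ["dotNET", "Java"]            # known distinct pivot-column values
idx = {v: i for i, v in enumerate(pivot_values)}

def phase1(rows):
    """Per-partition: fold each row into a fixed-width array per group key."""
    partial = {}
    for year, course, value in rows:
        arr = partial.setdefault(year, [0] * len(pivot_values))
        arr[idx[course]] += value
    return partial

def phase2(partials):
    """Final phase: merge the partial arrays from all partitions element-wise."""
    merged = {}
    for partial in partials:
        for year, arr in partial.items():
            acc = merged.setdefault(year, [0] * len(pivot_values))
            for i, v in enumerate(arr):
                acc[i] += v
    return merged

part_a = phase1([(2012, "dotNET", 10000), (2012, "Java", 20000)])
part_b = phase1([(2012, "dotNET", 5000), (2013, "Java", 30000)])
result = phase2([part_a, part_b])
# result: {2012: [15000, 20000], 2013: [0, 30000]}
```

Each merged array holds one slot per pivot value, which is what makes the approach cheap even when the pivot column has many distinct values.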
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/11583#issuecomment-216314932
Merging to master and 2.0 branch.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12781#discussion_r61772980
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -45,7 +45,9 @@ statement
| ALTER DATABASE identifier
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12033#issuecomment-216303662
want to quickly update this pr?
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/11583#issuecomment-216287741
@aray This PR looks good. I will merge this after it passes tests. Can you
send out a follow-up PR to address my comments?
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11583#discussion_r61764184
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -363,43 +363,68 @@ class Analyzer(
object
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11583#discussion_r61763969
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/PivotFirst.scala
---
@@ -0,0 +1,152 @@
+/*
+ * Licensed
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/11583#issuecomment-216282747
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12827#issuecomment-216093900
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61679032
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -83,6 +93,29 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61678623
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -83,6 +93,29 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61678579
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -83,6 +93,29 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61674817
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -148,7 +165,10 @@ class DDLSuite extends QueryTest
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61674814
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala
---
@@ -64,6 +65,22 @@ class SparkSession private(
| Session-related state
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12812#discussion_r61674808
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -81,14 +90,27 @@ class SessionCatalog
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12812#issuecomment-215976986
test this please
---
Repository: spark
Updated Branches:
refs/heads/master b3ea57931 -> 8dc3987d0
[SPARK-15028][SQL] Remove HiveSessionState.setDefaultOverrideConfs
## What changes were proposed in this pull request?
This patch removes some code that are no longer relevant -- mainly
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12806#issuecomment-215946614
Merging to master.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12812
[SQL] Use spark.sql.warehouse.dir as the warehouse location
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12806#issuecomment-215942903
lgtm
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12806#discussion_r61666128
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionState.scala ---
@@ -44,8 +44,6 @@ private[hive] class HiveSessionState(sparkSession
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12699#issuecomment-215933890
How about we do it a little bit later? Maybe that will introduce conflicts
with prs from others.
---
Repository: spark
Updated Branches:
refs/heads/master 09da43d51 -> d7755cfd0
[SPARK-14917][SQL] Enable some ORC compressions tests for writing
## What changes were proposed in this pull request?
https://issues.apache.org/jira/browse/SPARK-14917
As it is described in the JIRA, it seems Hive
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12699#issuecomment-215933410
Cool. Thanks! lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12796#issuecomment-215933370
I have reverted the parser change. Since `CatalystSqlParser` always has
reserved keywords, let's decide whether we want to expand the list of
non-reserved keywords in the future
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12798#issuecomment-215931164
OK I am merging this to master.
---
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12796#discussion_r61663162
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -674,6 +674,8 @@ nonReserved
| AT | NULLS | OVERWRITE
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12798
[SPARK-15012][SQL] Simplify configuration API further
## What changes were proposed in this pull request?
1. Remove all the `spark.setConf` etc. Just expose `spark.conf`
2. Make
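The API shape proposed above can be sketched generically in Python (illustrative names, not Spark's actual classes): instead of several `setConf`/`getConf` methods on the session itself, a single `conf` handle carries all get/set operations.

```python
class RuntimeConfig:
    """One configuration surface, replacing per-session setConf/getConf methods."""
    def __init__(self):
        self._entries = {}

    def set(self, key, value):
        self._entries[key] = value

    def get(self, key, default=None):
        return self._entries.get(key, default)

class Session:
    def __init__(self):
        # The session exposes exactly one conf handle instead of many setters.
        self.conf = RuntimeConfig()

spark = Session()
spark.conf.set("spark.sql.shuffle.partitions", "8")
assert spark.conf.get("spark.sql.shuffle.partitions") == "8"
```

The design narrows the session's surface area: configuration reads and writes all flow through one object, which keeps the session API small.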
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/12699#discussion_r61662694
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcQuerySuite.scala ---
@@ -169,39 +169,42 @@ class OrcQuerySuite extends QueryTest
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12787#issuecomment-215921540
```
==
ERROR [0.000s]: test_conf (pyspark.sql.tests.SQLTests
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12796
[SPARK-14591] [SQL] Remove DataTypeParser and add more keywords to the
nonReserved list.
## What changes were proposed in this pull request?
CatalystSqlParser can parse data types. So, we do
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12787#issuecomment-215911145
LGTM
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12787#issuecomment-215910512
test this please
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12724
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12791
[SPARK-15019] Propagate all Spark Confs to HiveConf created in
HiveClientImpl
## What changes were proposed in this pull request?
This PR makes two changes:
1. We will propagate Spark Confs
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12786
[SPARK-15013] [SQL] Remove hiveConf from HiveSessionState
## What changes were proposed in this pull request?
The hiveConf in HiveSessionState is not actually used anymore. Let's remove
lem.
Author: Yin Huai <yh...@databricks.com>
Closes #12783 from yhuai/SPARK-15011-ignore.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ac115f66
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ac115f66
Diff: h
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12783#issuecomment-215852060
I am merging this.
---
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12783
[SPARK-15011] [SQL] [TEST] Ignore
org.apache.spark.sql.hive.StatisticsSuite.analyze MetastoreRelation
This test always fails with sbt's hadoop 2.3 and 2.4 tests. Let's disable it
for now
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12416#issuecomment-215746254
In sql/hive's pom, we have
```
org.apache.spark
spark-sql_${scala.binary.version}
test-jar
${project.version
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12416#issuecomment-215745750
Seems we are moving ExtendedYarnTest to src/test, but we are not making
others depend on the test jar? Also, can we separate the build change from
adding the Since tag
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12769#issuecomment-215618149
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12773#issuecomment-215616619
test this please
---
Repository: spark
Updated Branches:
refs/heads/master 2398e3d69 -> 9c7c42bc6
Revert "[SPARK-14613][ML] Add @Since into the matrix and vector classes in
spark-mllib-local"
This reverts commit dae538a4d7c36191c1feb02ba87ffc624ab960dc.
Project:
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12416#issuecomment-215615904
I have reverted this commit.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12416#issuecomment-215615493
Sorry. I am going to revert it. I believe it breaks the build. Seems those
build changes are not related to adding Since tag.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12416#issuecomment-215615096
Looks like this one breaks the pr builder?
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57289/testReport/org.apache.spark.network
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12769#issuecomment-215593452
lgtm
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12724#issuecomment-215314116
test this please
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12724#issuecomment-215303991
test this please
---
Github user yhuai closed the pull request at:
https://github.com/apache/spark/pull/12743
---
`modules_to_test` instead of `changed_modules`.
Author: Yin Huai <yh...@databricks.com>
Closes #12743 from yhuai/1.6build.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f4af6a8b
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12743#issuecomment-215263726
Thanks. Merging to branch 1.6.
---
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215226799
I fixed the title while merging to master.
---
Repository: spark
Updated Branches:
refs/heads/master f405de87c -> 24bea0004
[SPARK-14954] [SQL] Add PARTITION BY and BUCKET BY clause for data source CTAS
syntax
Currently, we can only create persisted partitioned and/or bucketed data source
tables using the Dataset API but not using SQL
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/12743
[SPARK-13023][PROJECT INFRA][BRANCH-1.6] Fix handling of root module in
modules_to_test()
This is a 1.6 branch backport of SPARK-13023 based on @JoshRosen's
https://github.com/apache/spark/commit
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12734#issuecomment-215198428
Changes look good to me.
---