Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-218589737
**Update:**
- Reverted the code changes.
1. Users are allowed to specify a partial partition spec. For
example, given a table with four partitions [a='
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-218587926
**[Test build #58405 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/58405/consoleFull)**
for PR 12801 at commit
[`15f287f`](https://gi
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-218258116
Will try to improve the existing error message when a multi-partition drop
command fails. This can improve usability before we support atomicity.
Thanks!
---
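The one-by-one drop with an aggregated error message that this comment describes could look roughly like the sketch below. This is a hypothetical illustration, not the PR's code: `dropSinglePartition` stands in for the real catalog call, and the in-memory `Map` partitions are invented for the example.

```scala
// Hypothetical sketch (not the PR's code) of dropping partitions one by one
// while collecting failures into a single, clearer error message.
type TablePartitionSpec = Map[String, String]

// Stand-in for the real catalog call: only (a=1, b=1) exists here.
def dropSinglePartition(spec: TablePartitionSpec): Unit =
  if (spec != Map("a" -> "1", "b" -> "1"))
    throw new RuntimeException(s"partition not found: $spec")

// Not atomic: specs dropped before a failure stay dropped.
def dropPartitions(specs: Seq[TablePartitionSpec]): Seq[String] =
  specs.flatMap { spec =>
    try { dropSinglePartition(spec); None }
    catch { case e: Exception => Some(e.getMessage) }
  }

val errors = dropPartitions(Seq(
  Map("a" -> "1", "b" -> "1"),   // succeeds
  Map("a" -> "1", "b" -> "9")))  // fails; failure is collected, not fatal
```

Reporting all collected failures at the end tells the user exactly which partitions were not dropped, which is the usability improvement discussed here.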
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-218256971
I see. I will change the implementation of this PR. This PR will only fix
SPARK-14684.
Agree, we can address the atomicity issue later.
---
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-218248097
For now, let's just drop them one by one. Fixing that is a separate issue
that we can always do later (even after 2.0)
---
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-217353785
@yhuai Agree! Actually, I initially tried to do it in batch mode. That is,
the following API we would need to use:
```
public List dropPartitions(String dbName, Str
```
Github user yhuai commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-217337836
@gatorsmile I am not sure we should ban dropping multiple partitions in a
single call, which is a useful command.
I just took a look at our implementation of add
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-217070994
Regarding SPARK-15026, my major concern is that we might get JIRAs if the
previous `Drop Partition` command only drops a subset of partitions. If users
issue the same com
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-217070045
@andrewor14 For the defect mentioned in SPARK-14684, you can reproduce it
by using the following statement:
```
sql(s"ALTER TABLE $externalTab DROP PARTITION
```
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r62109483
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -400,6 +400,10 @@ case class AlterTableDropPartition(
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216993873
Actually let's ping @yhuai. I personally don't think SPARK-15026 is a real
problem. The user should still be able to drop multiple partitions at once. As
for SPARK-1
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r62108004
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -400,6 +400,10 @@ case class AlterTableDropPartition(
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216989870
What I'm saying is, the issue here is more like "removing support for
partial specs when dropping partitions", NOT "disallow dropping multiple
partitions in a single
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216989231
What do you mean, I was able to drop multiple partitions in one command:
```
scala> sql("SHOW PARTITIONS my_tab1").show()
+---+
| result|
```
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216953678
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216953381
**[Test build #57770 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57770/consoleFull)**
for PR 12801 at commit
[`8a4980c`](https://g
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216953682
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216924542
**[Test build #57770 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57770/consoleFull)**
for PR 12801 at commit
[`8a4980c`](https://gi
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r62072100
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -510,6 +538,21 @@ class SessionCatalog(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61996334
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -465,6 +479,11 @@ class SessionCatalog(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61996301
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -510,6 +538,21 @@ class SessionCatalog(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61995887
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -448,11 +454,19 @@ class SessionCatalog(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61995863
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -248,6 +248,26 @@ class AstBuilder extends SqlBaseBase
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216745696
But this PR does not allow it, in order to ensure the atomicity of the command.
https://github.com/gatorsmile/spark/blob/banDropMultiPart/sql/core/src/main/scala/org/ap
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216744413
```scala
case class AlterTableDropPartition(
tableName: TableIdentifier,
specs: Seq[TablePartitionSpec],
ifExists: Boolean)
```
The
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216743828
what if the partition columns of the table are `a, b`, and we do `ALTER TABLE tbl
DROP PARTITION (a=1, b=1) PARTITION (a=1, b=2)`?
---
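As the following replies note, each `PARTITION` clause in that command is a complete spec over the partition columns `(a, b)`, so the command resolves to exactly two partitions. A minimal sketch of that resolution, with invented data (not Spark source):

```scala
// Invented example data: with partition columns (a, b), each PARTITION
// clause below is a full spec, and a full spec matches a partition only
// when the two maps are equal.
type TablePartitionSpec = Map[String, String]

val tablePartitions = Seq(
  Map("a" -> "1", "b" -> "1"),
  Map("a" -> "1", "b" -> "2"),
  Map("a" -> "2", "b" -> "1"))

// ALTER TABLE tbl DROP PARTITION (a=1, b=1) PARTITION (a=1, b=2)
val specs = Seq(Map("a" -> "1", "b" -> "1"), Map("a" -> "1", "b" -> "2"))

// Exactly the two listed partitions are removed; the third survives.
val remaining = tablePartitions.filterNot(specs.contains)
```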
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216742831
@cloud-fan The parser can still do this if the partition columns of the table
are `a, b, c`. However, a partial or invalid partition spec can be detected in
this PR.
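The spec detection mentioned here could be sketched as follows. This is a hedged illustration with hypothetical names, not the PR's actual implementation: it simply checks a spec's columns against the table's partition columns.

```scala
// Hypothetical validation sketch: a spec is complete and valid only if
// its columns are exactly the table's partition columns.
val partitionColumns = Set("a", "b", "c")

def isFullSpec(spec: Map[String, String]): Boolean =
  spec.keySet == partitionColumns

val full    = Map("a" -> "1", "b" -> "1", "c" -> "1") // accepted
val partial = Map("a" -> "1")                         // partial spec: detected
val invalid = Map("a" -> "1", "d" -> "2")             // unknown column: detected
```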
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216741883
oh, looks like our parser forbids it. cc @andrewor14, it seems your second
case is impossible?
---
Github user cloud-fan commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216741658
is it allowed to do `ALTER TABLE tbl DROP PARTITION (a=1, b=1) PARTITION
(a=1,c=2)`?
---
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216717803
@andrewor14 What is the partition spec if a table has two partitions
`a=1`, `b=1` and `a=1`, `c=2`? I think it is impossible to do that, right?
We enforce
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216686405
@gatorsmile I'm not sure if I completely understand the intent of this
patch. If I understand correctly there are two ways to drop multiple
partitions. If I have a t
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61968544
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -465,6 +479,11 @@ class SessionCatalog(
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61968276
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -510,6 +538,21 @@ class SessionCatalog(
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61968002
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -448,11 +454,19 @@ class SessionCatalog(
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61967761
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/parser/AstBuilder.scala
---
@@ -248,6 +248,26 @@ class AstBuilder extends SqlBaseBase
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216586048
@cloud-fan Your PR https://github.com/apache/spark/pull/12871 is related to
this. This is for disallowing users from dropping multiple partitions in one
command. Could you
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216099929
cc @rxin It is ready for review. : )
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216052798
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216052799
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216052641
**[Test build #57481 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57481/consoleFull)**
for PR 12801 at commit
[`fa15228`](https://g
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216045570
**[Test build #57481 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57481/consoleFull)**
for PR 12801 at commit
[`fa15228`](https://gi
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/12801#discussion_r61679029
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -81,8 +84,8 @@ class DDLSuite extends QueryTest with Share
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216012541
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216012542
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216012520
**[Test build #57460 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57460/consoleFull)**
for PR 12801 at commit
[`78a868b`](https://g
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216011744
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216011743
Build finished. Test PASSed.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216011717
**[Test build #57458 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57458/consoleFull)**
for PR 12801 at commit
[`3b7b5de`](https://g
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/12801#issuecomment-216008667
**[Test build #57460 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/57460/consoleFull)**
for PR 12801 at commit
[`78a868b`](https://gi