Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/4878
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/4878#issuecomment-77407789
I am sorry, the comments are valid. I am closing this PR. Thank you for
reviewing it.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/4878
[SPARK-5920][CORE] BufferedInputStream is added at required places
BufferedInputStream and BufferedOutputStream are added at the required places.
You can merge this pull request into a Git repository
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/4812#issuecomment-76755122
@chenghao-intel Sorry for the late reply. I think semantically it looks fine.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-76770343
@chenghao-intel Thank you for reviewing it. I will go through your comments
and fix them. And regarding the ```not in``` case, we can use a ```left outer join```.
I will try
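The ```not in``` → ```left outer join``` rewrite mentioned above can be sketched as follows, using Python's sqlite3 as a stand-in for Spark SQL (the tables and data are hypothetical, chosen only to illustrate the transformation):

```python
import sqlite3

# Hypothetical tables for illustration only; not taken from the PR itself.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE blocked (customer_id INTEGER);
    INSERT INTO orders VALUES (1, 10), (2, 20), (3, 30);
    INSERT INTO blocked VALUES (20);
""")

# Direct NOT IN form.
not_in = conn.execute(
    "SELECT id FROM orders WHERE customer_id NOT IN "
    "(SELECT customer_id FROM blocked)"
).fetchall()

# Rewritten form: left outer join, keeping the rows where the join
# found no match (the right side is NULL).
rewrite = conn.execute(
    "SELECT o.id FROM orders o "
    "LEFT OUTER JOIN blocked b ON o.customer_id = b.customer_id "
    "WHERE b.customer_id IS NULL"
).fetchall()
```

One caveat with this rewrite: if the subquery can produce NULLs, SQL's NOT IN returns no rows, while the join form still returns the non-matching rows, so the two are only equivalent under a not-null assumption.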
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/4812#issuecomment-76454856
@chenghao-intel Thank you for your implementation; the following are my
observations.
The implementation seems simple, but it comes with a lot of limitations. A query
like
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-73280560
@marmbrus Please check whether it is ok.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-70528755
@marmbrus Please review it.
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22448850
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -414,6 +418,123 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22448848
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SubqueryExpression.scala
---
@@ -0,0 +1,39 @@
+/*
+ * Licensed
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-68673618
Thank you for reviewing it. Fixed the review comments and added a TODO
for the future expansion of complex queries.
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22448842
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -414,6 +418,123 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22448876
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -414,6 +418,123 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148838
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148840
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148841
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148845
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148859
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r2214
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148896
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148903
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SubqueryExpression.scala
---
@@ -0,0 +1,40 @@
+/*
+ * Licensed
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148913
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SubqueryExpression.scala
---
@@ -0,0 +1,40 @@
+/*
+ * Licensed
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22148917
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SubqueryExpression.scala
---
@@ -0,0 +1,40 @@
+/*
+ * Licensed
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r22149004
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -314,6 +318,113 @@ class Analyzer(catalog: Catalog
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-67778115
Thank you for reviewing it.
I have worked on the review comments. Please review it.
I guess the ```SubqueryExpression``` may not be resolved along with the main
query
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3348#issuecomment-65766741
I have rebased with master. Please review.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-65393775
Rebased with master and fixed the review comments.
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/3249#discussion_r21225534
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/SubqueryExpression.scala
---
@@ -0,0 +1,32 @@
+/*
+ * Licensed
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3387#issuecomment-65186523
It was already merged with master.
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/3387
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3516
[SPARK-4658][SQL] Code documentation issue in DDL of datasource API
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ravipesala/spark ddl_doc
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3510
[SPARK-4648][SQL] Support COALESCE function in Spark SQL and Hive QL
Currently HiveQL uses a Hive UDF for Coalesce. Using Hive
UDFs is generally memory intensive. Since the Coalesce function
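For reference, COALESCE's semantics are the same across most SQL dialects; a minimal sketch with Python's sqlite3 (a stand-in for Spark SQL here):

```python
import sqlite3

# COALESCE returns its first non-NULL argument, or NULL if all are NULL.
conn = sqlite3.connect(":memory:")
first, second, all_null = conn.execute(
    "SELECT COALESCE(NULL, NULL, 'x', 'y'), "
    "COALESCE(NULL, 7), "
    "COALESCE(NULL, NULL)"
).fetchone()
```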
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3511
[SPARK-4650][SQL] Support multiple columns in the countDistinct function,
like count(distinct c1,c2..) in Spark SQL
Supporting multiple columns in the countDistinct function like
count
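The semantics being added — counting distinct combinations of several columns — can be sketched with Python's sqlite3 (whose COUNT(DISTINCT ...) only takes a single column, so the multi-column form is emulated with a subquery; the table and data are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (c1 INTEGER, c2 INTEGER);
    INSERT INTO t VALUES (1, 1), (1, 1), (1, 2), (2, 1);
""")

# count(distinct c1, c2) counts distinct (c1, c2) pairs:
# here (1,1), (1,2), (2,1) -> 3, even though (1,1) appears twice.
n = conn.execute(
    "SELECT COUNT(*) FROM (SELECT DISTINCT c1, c2 FROM t)"
).fetchone()[0]
```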
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3510#issuecomment-64925533
retest please
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3387
[SPARK-4513][SQL] Support relational operator '=' in Spark SQL
The relational operator '=' is not working in Spark SQL, while the same works in
Spark HiveQL.
You can merge this pull request into a Git
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3348
[SPARK-2554][SQL] Supporting SumDistinct partial aggregation
Adding support to the partial aggregation of SumDistinct
You can merge this pull request into a Git repository by running:
$ git
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3249
[SPARK-4226][SQL] SparkSQL - Add support for subqueries in predicates ('in'
clause)
This PR supports subqueries in the predicates 'in' clause. The queries will be
transformed to a LeftSemi join
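The LeftSemi-join behaviour behind this transformation can be sketched with Python's sqlite3 (a stand-in for Spark SQL; the tables are hypothetical). The key property is that each left row appears at most once, even when the subquery side contains duplicates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (key INTEGER, value TEXT);
    CREATE TABLE keys (key INTEGER);
    INSERT INTO src VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO keys VALUES (2), (3), (3);  -- note the duplicate 3
""")

# Predicate form: IN with a subquery.
in_form = conn.execute(
    "SELECT key FROM src WHERE key IN (SELECT key FROM keys)"
).fetchall()

# Semi-join behaviour, expressed with EXISTS: row 3 is emitted once,
# not once per matching right-side row.
semi = conn.execute(
    "SELECT key FROM src s "
    "WHERE EXISTS (SELECT 1 FROM keys k WHERE k.key = s.key)"
).fetchall()
```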
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3075
[SPARK-4207][SQL] Queries with syntax like 'not like' are not working in
Spark SQL
Queries which use 'not like' do not work in Spark SQL.
sql(SELECT * FROM records where value
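The expected NOT LIKE behaviour can be sketched with Python's sqlite3 as a stand-in for Spark SQL (the table and pattern are hypothetical, since the query above is truncated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (value TEXT);
    INSERT INTO records VALUES ('value1'), ('other'), ('value2');
""")

# NOT LIKE keeps only the rows that do not match the pattern.
rows = conn.execute(
    "SELECT value FROM records WHERE value NOT LIKE 'value%'"
).fetchall()
```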
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/3017#issuecomment-61291682
Thank you for your comment. I handled it and also rebased with master. Please
review it
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/3017
[SPARK-4154][SQL] Query does not work if it has 'not between' in Spark SQL
and HQL
A query that contains 'not between' does not work, e.g.:
SELECT * FROM src where key not between 10 and 20
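A parser can treat this as sugar for NOT (key BETWEEN 10 AND 20), i.e. key &lt; 10 OR key &gt; 20 (BETWEEN is inclusive). A minimal sketch with Python's sqlite3 as a stand-in for Spark SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (key INTEGER);
    INSERT INTO src VALUES (5), (10), (15), (20), (25);
""")

# The form from the report.
not_between = conn.execute(
    "SELECT key FROM src WHERE key NOT BETWEEN 10 AND 20"
).fetchall()

# Equivalent expansion: BETWEEN is inclusive, so 10 and 20 are excluded here.
expanded = conn.execute(
    "SELECT key FROM src WHERE key < 10 OR key > 20"
).fetchall()
```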
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2987
[SPARK-4120][SQL] Join of multiple tables with syntax like SELECT .. FROM
T1,T2,T3.. does not work in SparkSQL
Right now it works only for two tables, as in the query below.
sql(SELECT * FROM
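The comma-separated FROM list is an implicit cross join that the WHERE clause turns into an inner join across all the tables. A sketch with Python's sqlite3 as a stand-in for Spark SQL (tables and data hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t1 (id INTEGER);
    CREATE TABLE t2 (id INTEGER);
    CREATE TABLE t3 (id INTEGER);
    INSERT INTO t1 VALUES (1), (2);
    INSERT INTO t2 VALUES (2), (3);
    INSERT INTO t3 VALUES (2);
""")

# SELECT .. FROM T1, T2, T3 with join conditions in WHERE:
# only id = 2 appears in all three tables.
rows = conn.execute(
    "SELECT t1.id FROM t1, t2, t3 "
    "WHERE t1.id = t2.id AND t2.id = t3.id"
).fetchall()
```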
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2961
[SPARK-3814][SQL] Support for Bitwise AND(&), OR(|), XOR(^), NOT(~) in
Spark HQL and SQL
Currently there is no support for Bitwise &, | in Spark HiveQL and Spark
SQL. So this PR supports
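The semantics of the four operators can be sketched with Python's sqlite3 (a stand-in for Spark SQL; note sqlite3 itself has no ^ operator, so XOR is derived here from the identity a ^ b == (a | b) &amp; ~(a &amp; b)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
a, b = 12, 10  # 1100 and 1010 in binary

# &, | and ~ are evaluated inside SQL.
band, bor, bnot = conn.execute(
    "SELECT ? & ?, ? | ?, ~?", (a, b, a, b, a)
).fetchone()

# XOR via the identity above, since sqlite3 lacks ^.
xor = conn.execute(
    "SELECT (? | ?) & ~(? & ?)", (a, b, a, b)
).fetchone()[0]
```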
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2961#issuecomment-60644658
test this please
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2926#issuecomment-60644904
Closed this PR and created a new PR https://github.com/apache/spark/pull/2961
after rebasing with master
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2926
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2926
[SPARK-3814][SQL] Support for Bitwise AND(&), OR(|), XOR(^), NOT(~) in
Spark HQL and SQL
Currently there is no support for Bitwise &, | in Spark HiveQL and Spark
SQL. So this PR supports
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2927
[SPARK-3483][SQL] Special chars in column names
Supporting special chars in column names by using backticks. Closed
https://github.com/apache/spark/pull/2804 and created this PR as it has merge
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2789
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2789#issuecomment-60383108
Closed this PR as it has merge conflicts and created a new PR,
https://github.com/apache/spark/pull/2926, and handled the comments there
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2804
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2804#issuecomment-60383267
Closed this PR as it has merge conflicts and created a new PR,
https://github.com/apache/spark/pull/2927
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2789#issuecomment-59329218
Added support for Bitwise AND(&), OR(|), XOR(^), NOT(~) in this PR only.
Please review it.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2804
[SPARK-3483][SQL] Special chars in column names
Supporting special chars in column names by using backticks.
You can merge this pull request into a Git repository by running:
$ git pull
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2789#issuecomment-59087699
@marmbrus Please review this PR. I handled the review comments of PR
https://github.com/apache/spark/pull/2736. Due to merge conflicts I have created a
new PR.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2736#issuecomment-58872032
Thank you @scwf, I have created a new PR since this one has merge conflicts. It
will not be neat if I rebase and push to the old PR, because it will show all the
changed files which
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2772#issuecomment-58980787
Again merge conflicts :)
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2772
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2789
[SPARK-3814][SQL] Bitwise does not work in Hive
Currently there is no support for Bitwise &, | in Spark HiveQL and Spark
SQL. So this PR supports the same.
I am closing https
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2789#issuecomment-58984605
ok to test
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2772
[SPARK-3814][SQL] Bitwise does not work in Hive
Currently there is no support for Bitwise &, | in Spark HiveQL and Spark
SQL. So this PR supports the same.
I am closing https
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2736#issuecomment-58765712
Since this PR has conflicts, I created a new PR,
https://github.com/apache/spark/pull/2772, and handled the review comments in it.
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2736
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2737
[SPARK-3834][SQL] Backticks not correctly handled in subquery aliases
Queries like '''SELECT a.key FROM (SELECT key FROM src) `a`''' do not
work, as backticks in subquery aliases
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2710
[SPARK-3814][SQL] Bitwise does not work in Hive
Currently there is no support for Bitwise operators in Spark HiveQL and Spark SQL.
So this PR supports the same.
Author : ravipesala
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2710#issuecomment-58458547
@marmbrus it seems git cannot fetch the code; that's why it failed.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2678#issuecomment-58464496
@marmbrus Can you also verify this PR. Thank you.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2678
[SPARK-3813][SQL] Support case when conditional functions in Spark SQL.
The case when conditional function is already supported in Spark SQL, but
there is no support in SqlParser. So added parser
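The CASE WHEN construct being added to the parser has standard SQL semantics; a minimal sketch with Python's sqlite3 as a stand-in for Spark SQL (table and branches hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (key INTEGER);
    INSERT INTO src VALUES (1), (2), (3);
""")

# CASE WHEN evaluates branches in order and falls back to ELSE.
rows = conn.execute("""
    SELECT CASE WHEN key = 1 THEN 'one'
                WHEN key = 2 THEN 'two'
                ELSE 'many' END
    FROM src
""").fetchall()
```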
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2678#discussion_r18471169
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -333,6 +338,24 @@ class SqlParser extends StandardTokenParsers
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2678#discussion_r18496795
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -128,6 +128,11 @@ class SqlParser extends StandardTokenParsers
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2678#discussion_r18496791
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -333,6 +338,15 @@ class SqlParser extends StandardTokenParsers
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2678#discussion_r18496800
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -333,6 +338,15 @@ class SqlParser extends StandardTokenParsers
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2678#issuecomment-58126077
Thank you for reviewing it. I have updated the code as per your comments.
Please review it
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2678#issuecomment-58126112
Thank you for reviewing it. I have updated the code as per your comments.
Please review it
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2620#issuecomment-57819925
You are right, it is not so good to pass the resolver in the constructor. Instead I
just passed a boolean flag.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2590#issuecomment-57595734
Jenkins, retest this please.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2620
[SPARK-2693][SQL] Support for UDAF Hive Aggregates like PERCENTILE
Implemented UDAF Hive aggregates by adding a wrapper to Spark Hive.
You can merge this pull request into a Git repository
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2594#issuecomment-57528756
@marmbrus I am not sure why it failed. The error shows git could not
fetch the code and timed out. Do I have to do something here?
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2511#discussion_r18321536
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -166,7 +186,7 @@ class SqlParser extends StandardTokenParsers
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2590#issuecomment-57581369
Fixed the code as per the comments; please review
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r18211038
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSqlParser.scala ---
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r18211061
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSqlParser.scala ---
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r18211084
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSqlParser.scala ---
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r1822
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSqlParser.scala ---
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r18211107
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala ---
@@ -75,6 +75,9 @@ class LocalHiveContext(sc: SparkContext) extends
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2590#discussion_r18211122
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSqlParser.scala ---
@@ -0,0 +1,138 @@
+/*
+ * Licensed to the Apache Software
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2590#issuecomment-57299430
Thank you. I updated the code as per your comments, Please review it.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2590#issuecomment-57335446
Thank you for your comments. I updated the code as per your comments. Please
review.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2594
[SPARK-3708][SQL] Backticks aren't handled correctly in aliases
The query below gives an error
sql(SELECT k FROM (SELECT \`key\` AS \`k\` FROM src) a)
It gives an error because the aliases
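The intended behaviour of the failing query can be sketched with Python's sqlite3, which also accepts backtick-quoted identifiers, as a stand-in for Spark SQL (the table and value are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE src (key INTEGER);
    INSERT INTO src VALUES (42);
""")

# Backtick-quoted column and alias, as in the reported query:
# the inner alias `k` should be visible to the outer SELECT.
row = conn.execute(
    "SELECT k FROM (SELECT `key` AS `k` FROM src) a"
).fetchone()
```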
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2590
[SPARK-3654][SQL] Implement all extended HiveQL statements/commands with a
separate parser combinator
Created a separate parser for HQL. It pre-parses commands like
cache, uncache, add jar, etc.
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2511
[SPARK-3371][SQL] Renaming a function expression with group by gives error
The following code gives an error.
```
sqlContext.registerFunction("len", (s: String) => s.length)
sqlContext.sql
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/2456
[SPARK-3536][SQL] SELECT on empty parquet table throws exception
It return null metadata from parquet if querying on empty parquet file
while calculating splits.So added null check and returns
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2456#issuecomment-56157072
Please review
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17711807
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2397#issuecomment-56001847
Updated as per comments. Please review.
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2381#issuecomment-56020207
OK. Closing this PR
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2381
Github user ravipesala closed the pull request at:
https://github.com/apache/spark/pull/2390
Github user ravipesala commented on the pull request:
https://github.com/apache/spark/pull/2390#issuecomment-56020320
OK. Closing this PR
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17659871
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan
Github user ravipesala commented on a diff in the pull request:
https://github.com/apache/spark/pull/2397#discussion_r17707447
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/commands.scala ---
@@ -166,3 +166,20 @@ case class DescribeCommand(child: SparkPlan