Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
@koeninger, is this PR okay to merge, or are some review comments still
pending?
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
retest this please
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
retest this please
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
> I mean it's easy to miss if a new "case" is added and "update" mode is not
supported. Even now, what about LeftSemi, LeftAnti, FullOuter, etc.?
Currently Stream s
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
cc @gatorsmile, if everything is okay, can this PR be merged?
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
cc @zsxwing @jose-torres @brkyvz
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22824
cc @koeninger, can you please review this?
GitHub user sandeep-katta opened a pull request:
https://github.com/apache/spark/pull/22824
[SPARK-25834] Update Mode should not be supported for Outer Joins
## What changes were proposed in this pull request?
As per the Spark documentation, only Append mode is supported
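The check this PR proposes can be sketched in plain Python. This is an illustrative model only, assuming string names for join types and output modes; `check_output_mode` and `OUTER_JOINS` are hypothetical names, not Spark's actual internals:

```python
# Illustrative sketch of SPARK-25834's intent: streaming outer joins should
# only be allowed with the "append" output mode. Names are hypothetical,
# not Spark's real API.
OUTER_JOINS = {"left_outer", "right_outer", "full_outer"}

def check_output_mode(join_type: str, output_mode: str) -> None:
    """Raise if the output mode is unsupported for a streaming join."""
    if join_type in OUTER_JOINS and output_mode != "append":
        raise ValueError(
            f"{output_mode} output mode not supported for {join_type} "
            "between two streaming DataFrames"
        )
```

An explicit check like this also addresses the review concern above: new join types fail loudly instead of being silently missed by a pattern-match "case".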
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22571#discussion_r227630523
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -2434,8 +2434,15 @@ class SparkContext(config: SparkConf) extends
Logging
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r226673513
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -207,6 +207,14 @@ class SessionCatalog
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r226548140
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -207,6 +207,14 @@ class SessionCatalog
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
retest this please
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
> Btw, what if `create database if not exists ...`? Seems like an exception
will be thrown if the table exists even if we specify `if not exists`?
Good catch :+1:, I have upda
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
@cloud-fan @gatorsmile, all the test cases pass and the review comments are
addressed; could you please merge this PR?
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
retest this please
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r225396925
--- Diff: python/pyspark/sql/tests.py ---
@@ -2993,6 +2990,7 @@ def test_current_database(self):
AnalysisException
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
> The major comments are in the test cases. Could you help clean up the
existing test cases?
All the comments are fixed and I have corrected the testca
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
Can one of the admins trigger a retest, please?
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
@cloud-fan @gatorsmile, if everything is okay, can you please merge this PR?
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r223394163
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/SessionCatalog.scala
---
@@ -207,6 +207,16 @@ class SessionCatalog
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r222891761
--- Diff: python/pyspark/sql/tests.py ---
@@ -351,7 +351,7 @@ def tearDown(self):
super(SQLTests, self).tearDown
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r222891525
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala
---
@@ -407,6 +407,7 @@ abstract class DDLSuite extends
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22571
cc @cloud-fan, please review
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22571
Yes, I am sending the pool info around, so the history server can also have
the details.
And regarding the dark red, I think Git is unable to parse it correctly
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22571
cc @vanzin @srowen, please review this improvement in the History UI
GitHub user sandeep-katta opened a pull request:
https://github.com/apache/spark/pull/22571
[SPARK-25392][Spark Job History] Inconsistent behaviour for pool details in
the Spark web UI and the history server page
## What changes were proposed in this pull request?
1. Added
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
I am running the same test case with Hive version **1.2.1.spark2** and it
passes. Can I know which Hive version the CI runs with, and how
org.apache.hive.jdbc.HiveStatement and external
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r220873051
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -66,6 +66,19 @@ case class CreateDatabaseCommand
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r220872507
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -66,6 +66,19 @@ case class CreateDatabaseCommand
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/22466#discussion_r220564446
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/ddl.scala ---
@@ -66,6 +66,19 @@ case class CreateDatabaseCommand
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
cc @cloud-fan @srowen, I have updated the code; please review
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
It seems @cloud-fan's comments are valid, as it will not result in any
behavior change. I will update the PR accordingly. WDYT @srowen?
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
> See JIRA, I don't think this should be merged.
I have referred to the Databricks doc
https://docs.databricks.com/spark/latest/spark-sql/language-manual/create-database.h
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/22466
Yes, I agree that two databases should not point to the same path; **currently
this is a loophole in Spark that needs to be fixed**. If this solution is not
okay, then we can append the dbname.db
GitHub user sandeep-katta opened a pull request:
https://github.com/apache/spark/pull/22466
[SPARK-25464][SQL] When a database is dropped, all the data related to it is
deleted
Modification content: if the database is external, its content should not be
deleted.
What
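The fix described above can be sketched as a small model. This is a hypothetical illustration, assuming a `Database` record and a `paths_to_delete` helper; neither is Spark's real catalog API:

```python
# Illustrative sketch of the SPARK-25464 fix: dropping an EXTERNAL database
# should leave its data untouched; only a managed database's directory is
# removed. Database / paths_to_delete are hypothetical names.
from dataclasses import dataclass

@dataclass
class Database:
    name: str
    location_uri: str
    is_external: bool

def paths_to_delete(db: Database) -> list:
    """Paths to remove when the database is dropped."""
    if db.is_external:
        return []                 # external: keep the user's data
    return [db.location_uri]      # managed: remove the database directory
```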
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/21565
Yes, all the review comments are addressed.
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/21565#discussion_r198447523
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -488,9 +488,16 @@ private[spark] class
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/21565
@cloud-fan, the build passed. Can you merge this PR?
Github user sandeep-katta commented on the issue:
https://github.com/apache/spark/pull/21565
@cloud-fan, can you please review this small piece of code and merge this PR?
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/21565#discussion_r195989897
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -488,9 +488,16 @@ private[spark] class
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/21565#discussion_r195895035
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -488,9 +488,16 @@ private[spark] class
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/21565#discussion_r195894845
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -488,9 +488,16 @@ private[spark] class
Github user sandeep-katta commented on a diff in the pull request:
https://github.com/apache/spark/pull/21565#discussion_r195894823
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -488,9 +488,16 @@ private[spark] class
GitHub user sandeep-katta opened a pull request:
https://github.com/apache/spark/pull/21565
Wrong idle timeout value is used in the case of cached blocks.
It is corrected as per the configuration.
## What changes were proposed in this pull request?
IdleTimeout info used
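The selection this PR corrects can be sketched as follows. The config names `spark.dynamicAllocation.executorIdleTimeout` and `spark.dynamicAllocation.cachedExecutorIdleTimeout` are real Spark settings; the helper itself is a hypothetical illustration, not the actual `ExecutorAllocationManager` code:

```python
# Illustrative sketch: an executor holding cached blocks should be timed out
# using spark.dynamicAllocation.cachedExecutorIdleTimeout, not the plain
# spark.dynamicAllocation.executorIdleTimeout.
def idle_timeout_seconds(has_cached_blocks: bool,
                         executor_idle_timeout: int,
                         cached_executor_idle_timeout: int) -> int:
    """Pick the idle timeout that applies to this executor."""
    if has_cached_blocks:
        return cached_executor_idle_timeout
    return executor_idle_timeout
```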