cloud-fan commented on code in PR #40593:
URL: https://github.com/apache/spark/pull/40593#discussion_r1152908374
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -928,11 +928,19 @@ primaryExpression
(FILTER LEFT_PAREN WHERE
cloud-fan commented on code in PR #40593:
URL: https://github.com/apache/spark/pull/40593#discussion_r1152908724
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -928,11 +928,19 @@ primaryExpression
(FILTER LEFT_PAREN WHERE
cloud-fan commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1489876875
@yaooqinn this is a good point. If we are sure this is only for CLI display,
not the Thrift server protocol, I agree we don't need to follow Hive.
--
This is an automated message from the Apache Git Service.
grundprinzip opened a new pull request, #40603:
URL: https://github.com/apache/spark/pull/40603
### What changes were proposed in this pull request?
Instead of just showing the Scala callsite, show the abbreviated version of
the proto message in the Spark UI.
### Why are the changes
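A minimal sketch of the kind of abbreviation described in the PR summary above (assumption: the helper name, the length limit, and the truncation style are hypothetical illustrations, not the PR's actual code):

```python
# Hypothetical helper, not Spark's actual code: shorten a long rendered
# proto message so it fits in a UI label, keeping a trailing ellipsis.
def abbreviate(text: str, max_len: int = 128) -> str:
    if len(text) <= max_len:
        return text
    # Reserve three characters for the "..." suffix.
    return text[: max_len - 3] + "..."
```

The idea is simply that the full rendered message can be arbitrarily large, so a display surface truncates it to a fixed budget.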
yaooqinn commented on PR #40601:
URL: https://github.com/apache/spark/pull/40601#issuecomment-1489866478
This change makes sense to me. Since this is a breaking change, shall we add
a migration guide for it?
MaxGekk commented on code in PR #40593:
URL: https://github.com/apache/spark/pull/40593#discussion_r1152878072
##
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBaseParser.g4:
##
@@ -928,11 +928,19 @@ primaryExpression
(FILTER LEFT_PAREN WHERE
grundprinzip commented on code in PR #40586:
URL: https://github.com/apache/spark/pull/40586#discussion_r1152826039
##
connector/connect/common/src/main/protobuf/spark/connect/commands.proto:
##
@@ -177,3 +179,97 @@ message WriteOperationV2 {
// (Optional) A condition for
HeartSaVioR commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1152828935
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala:
##
@@ -679,6 +679,8 @@ object RemoveNoopUnion extends Rule[LogicalPlan] {
ScrapCodes commented on PR #40553:
URL: https://github.com/apache/spark/pull/40553#issuecomment-1489811022
Hi @VindhyaG, this might be useful - maybe we can benefit from the use case
you have for this. Is it just for logging?
Not sure what others think; it might be good to limit the API
yaooqinn commented on code in PR #40602:
URL: https://github.com/apache/spark/pull/40602#discussion_r1152824490
##
core/src/main/resources/error/error-classes.json:
##
@@ -129,6 +129,12 @@
],
"sqlState" : "429BB"
},
+ "CANNOT_RENAME_ACROSS_SCHEMA" : {
+
yaooqinn opened a new pull request, #40602:
URL: https://github.com/apache/spark/pull/40602
### What changes were proposed in this pull request?
Fix `rename a table` in Derby and PostgreSQL, where the schema name is not
allowed to qualify the new table name
### Why
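The restriction behind this PR can be illustrated with a small sketch (assumption: the function name and the error message wording are hypothetical; this is not Spark's actual JDBC dialect code). In Derby and PostgreSQL, `ALTER TABLE ... RENAME TO` accepts only an unqualified new name, so a dialect can reject a schema-qualified target up front:

```python
# Illustrative sketch, not Spark's actual code: build a rename statement
# for dialects (Derby, PostgreSQL) that forbid a schema-qualified new name.
def build_rename_sql(old_name: str, new_name: str) -> str:
    if "." in new_name:
        # Renaming cannot move a table to another schema in these dialects;
        # the error-class name mirrors the one added in the PR.
        raise ValueError(
            "CANNOT_RENAME_ACROSS_SCHEMA: the new table name must not be "
            "schema-qualified")
    return f"ALTER TABLE {old_name} RENAME TO {new_name}"
```

In PostgreSQL, moving a table between schemas is a separate operation (`ALTER TABLE ... SET SCHEMA`), which is why the qualified form is rejected rather than silently rewritten.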
yaooqinn commented on code in PR #40437:
URL: https://github.com/apache/spark/pull/40437#discussion_r1152809362
##
sql/core/src/main/scala/org/apache/spark/sql/catalyst/analysis/KeepCommandOutputWithHive.scala:
##
@@ -0,0 +1,47 @@
+/*
+ * Licensed to the Apache Software
wangyum commented on code in PR #40601:
URL: https://github.com/apache/spark/pull/40601#discussion_r1152803208
##
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala:
##
@@ -424,6 +428,8 @@ class Analyzer(override val catalogManager:
yaooqinn commented on PR #40437:
URL: https://github.com/apache/spark/pull/40437#issuecomment-1489781086
I am not sure why we must stay consistent with Hive in such a case:
1. this is just output from the command-line interface, not a programming
API;
2. the `hive` CLI itself is
itholic commented on PR #40525:
URL: https://github.com/apache/spark/pull/40525#issuecomment-1489771881
CI passed. cc @HyukjinKwon @ueshin @xinrong-meng @zhengruifeng PTAL when you
find some time.
I summarized the key changes in the PR description for review.
zsxwing commented on code in PR #40561:
URL: https://github.com/apache/spark/pull/40561#discussion_r1152772092
##
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala:
##
@@ -980,3 +1022,65 @@ object StreamingDeduplicateExec {
private val
wangyum opened a new pull request, #40601:
URL: https://github.com/apache/spark/pull/40601
### What changes were proposed in this pull request?
This PR casts the result of string +/- interval to the timestamp type
instead of the string type.
### Why are the changes
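As an analogy for the proposed typing (assumption: this is a simplified plain-Python illustration, not Spark's analyzer code): once the string operand is interpreted as a timestamp, adding an interval naturally produces a timestamp value, not a string:

```python
from datetime import datetime, timedelta

# Analogy only, not Spark code: treat the string as a timestamp and add
# the interval; the natural result type is a timestamp, not a string.
def string_plus_interval(s: str, interval: timedelta) -> datetime:
    ts = datetime.fromisoformat(s)  # parse the string operand as a timestamp
    return ts + interval            # timestamp + interval -> timestamp

result = string_plus_interval("2023-03-30 00:00:00", timedelta(days=1))
# result is a datetime value, not a str
```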