Github user shaolinliu closed the pull request at:
https://github.com/apache/spark/pull/17581
---
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
ok.
---
Github user shaolinliu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17581#discussion_r111080385
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,16 @@ object SQLConf {
.booleanConf
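The diff above truncates at `.booleanConf`. For context, a boolean entry in `SQLConf.scala` is typically declared with Spark's internal config builder along these lines (the key name, doc text, and default below are illustrative placeholders, not the entry this PR actually adds):

```scala
// Sketch of the SQLConf entry pattern used around line 359 of SQLConf.scala.
// The key name, doc string, and default here are hypothetical stand-ins.
val SOME_THRIFTSERVER_FLAG =
  buildConf("spark.sql.thriftServer.someIllustrativeFlag")
    .doc("Illustrative description of what the flag controls.")
    .booleanConf
    .createWithDefault(false)
```

Entries declared this way become available through `SQLConf` getters and can be set by users via `spark.conf.set` or `--conf` at launch.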
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
Ok, I have modified the description.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
Sorry, I was wrong. It just increases the user's query time; it does not occupy the
resources.
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
In a department, we cannot constrain every user, but when we start ts2 with
this parameter, even if a user makes a mistake it does not matter. We have used
the
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
My opinion is:
In production, users often select without a limit, which often takes the
service offline; this is a common situation, so we add this parameter.
When
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17581
@ueshin please take a look at this pr, thanks.
---
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17581
[SPARK-20248][SQL] Spark SQL add limit parameter to enhance the reliability.
## What changes were proposed in this pull request?
Add a parameter "spark.sql.thriftServer.retainedRe
Github user shaolinliu closed the pull request at:
https://github.com/apache/spark/pull/17561
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17561
ok.
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17561
@ueshin SQLConf.THRIFTSERVER_INCREMENTAL_COLLECT will cause the cluster's
resources to be wasted, because the cluster's resources are released only when
the query finishes, so the executor
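For reference, `spark.sql.thriftServer.incrementalCollect` is a real Spark configuration: it makes the Thrift server collect query results one partition at a time rather than materializing the whole result set on the driver. A minimal sketch of enabling it (shown on a plain `SparkSession` for illustration; in practice it is passed as `--conf` when the Thrift server is launched):

```scala
import org.apache.spark.sql.SparkSession

// Enable incremental collection: lower driver memory pressure, at the cost
// of executors being held longer while results are streamed back.
val spark = SparkSession.builder()
  .appName("incremental-collect-sketch")
  .master("local[*]")
  .config("spark.sql.thriftServer.incrementalCollect", "true")
  .getOrCreate()
```

This trade-off is exactly what the comment above describes: the query runs longer, so its executors are released later.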
Github user shaolinliu commented on a diff in the pull request:
https://github.com/apache/spark/pull/17561#discussion_r110346704
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -359,6 +359,13 @@ object SQLConf {
.booleanConf
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17561
I have fixed the error and retested the use case.
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17561
@ueshin please take a look at this pr, thanks.
---
Github user shaolinliu closed the pull request at:
https://github.com/apache/spark/pull/17560
---
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17560
@ueshin I have resubmitted the PR, please close this
---
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17561
[SPARK-20248][SQL] Spark SQL add limit parameter to enhance the reliability.
## What changes were proposed in this pull request?
Add a parameter
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17560
Yes, I am fixing it, thanks.
---
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17560
[SPARK-20248][SQL] Spark SQL add limit parameter to enhance the reliability.
## What changes were proposed in this pull request?
Add a parameter "spark.sql.thriftServer.retainedResults"
Github user shaolinliu commented on the issue:
https://github.com/apache/spark/pull/17258
Thank you for the advice, your result description seems simpler and more
appropriate. And I have pushed the change to the PR.
---
GitHub user shaolinliu opened a pull request:
https://github.com/apache/spark/pull/17258
[SPARK-19807][Web UI]Add reason for cancellation when a stage is killed
using web UI
## What changes were proposed in this pull request?
When a user kills a stage using web UI (in