Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/23104#discussion_r236929433
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -459,6 +459,7 @@ object LimitPushDown extends
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/23104#discussion_r236589331
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -459,6 +459,7 @@ object LimitPushDown extends
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/23104#discussion_r236143436
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -459,6 +459,7 @@ object LimitPushDown extends
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/23104
> The title has a typo.
Sorry, it has been fixed.
---
-
To unsubscribe, e-mail: reviews-unsub
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/23104#discussion_r236137253
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -459,6 +459,7 @@ object LimitPushDown extends
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/23104#discussion_r236115582
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -459,6 +459,7 @@ object LimitPushDown extends
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/23104
@cloud-fan @dongjoon-hyun @gatorsmile
Help review the code.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/23104
Yes, I tested and understood; you are right. @mgaido91
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/23104
Cartesian product refers to the Cartesian product of two sets X and Y in
mathematics, also known as the direct product, expressed as X × Y; the first
object is a member of X and the second
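As a plain-Scala illustration of the definition above (not Spark code; the object and method names are made up for this sketch):

```scala
// Illustrative only: the Cartesian product X × Y is the set of all ordered
// pairs (x, y) where x is a member of X and y is a member of Y.
object CartesianExample {
  def cartesian[A, B](xs: Set[A], ys: Set[B]): Set[(A, B)] =
    for { x <- xs; y <- ys } yield (x, y)
}
```

For X = {1, 2} and Y = {a, b}, X × Y contains 2 × 2 = 4 ordered pairs.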
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/23104
OK, I will add some UTs.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/23104
[SPARK-26138][SQL] LimitPushDown cross join requires maybePushLocalLimit
## What changes were proposed in this pull request?
In LimitPushDown batch, cross join can push down
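The idea can be sketched in plain Scala. This is a hypothetical model, not the actual Catalyst classes (the real rule lives in LimitPushDown in Optimizer.scala and uses maybePushLocalLimit):

```scala
// Hypothetical model of limit pushdown through a cross join. For LIMIT n over
// a cross join, each side needs to produce at most n rows, so a LocalLimit(n)
// can be pushed into both sides.
sealed trait Plan
case class Relation(name: String) extends Plan
case class LocalLimit(n: Int, child: Plan) extends Plan
case class CrossJoin(left: Plan, right: Plan) extends Plan

object LimitPushDownSketch {
  def pushDown(plan: Plan): Plan = plan match {
    case LocalLimit(n, CrossJoin(left, right)) =>
      // Keep the outer limit: the product of the two limited sides may still
      // exceed n rows, so only the local limits move down.
      LocalLimit(n, CrossJoin(LocalLimit(n, left), LocalLimit(n, right)))
    case other => other
  }
}
```

Pushing the limit into both join sides is safe because taking n rows from each side still yields at least n rows of the product whenever the original query would have.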
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21784
We need to listen to @vanzin opinion.
Because the relevant code is what he wrote.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21784
But for some spark-submit applications, I want this 'Application report for'
information.
What should I do?
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21827
Please add a switch, represented by a constant. This configuration should be
added to the running-on-yarn.md document. @hejiefang
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21784
What? I think we need to add a switch.
https://github.com/apache/spark/pull/21827
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
Thank you for your comments, I will close this PR, thanks.
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/21036
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
1. No need to loop twice to filter to determine whether the length is greater
than 0.
2. This feature is to improve performance; the default switch needs to be open.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/21036
Thanks, I will try to add test cases. @felixcheung
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/21036#discussion_r180655799
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -55,7 +56,8 @@ private[spark] class HadoopPartition(rddId: Int, override
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/21036#discussion_r180652894
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -55,7 +56,8 @@ private[spark] class HadoopPartition(rddId: Int, override
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/21036
[SPARK-23958][CORE] HadoopRDD filters empty files to avoid generating empty
tasks that degrade Spark computing performance.
## What changes were proposed in this pull
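A minimal sketch of the idea, under the assumption that input splits expose their byte length (the names FileSplit and nonEmptySplits here are illustrative, not Spark's or Hadoop's actual API):

```scala
// Illustrative sketch: zero-length input files are filtered out before
// partitions are created, so they never become empty tasks.
case class FileSplit(path: String, length: Long)

object EmptyFileFilter {
  // A single filter pass suffices; there is no need to loop twice, per the
  // review comment above.
  def nonEmptySplits(splits: Seq[FileSplit]): Seq[FileSplit] =
    splits.filter(_.length > 0)
}
```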
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20818
@ajbozarth @srowen
Help to review the code.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20818
[SPARK-23675][WEB-UI] Title: add the Spark logo, using the Spark logo image
## What changes were proposed in this pull request?
Add the Spark logo to the title, using the Spark logo image. Reference other big
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20557
Well, for now, I don't have a better solution.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20543
Oh, I just think adding it makes it clearer.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20543
@gatorsmile
Help to review the code.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20570
Okay, I will check the other pages again today.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20557
@srowen @gatorsmile
![4](https://user-images.githubusercontent.com/26266482/36081707-86d3a7cc-0fdd-11e8-9ee8-1c17efd5d690.png)
Can I overload hive's
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20573
[SPARK-23384][WEB-UI] When no incomplete (completed) applications are
found, the last updated time is not formatted and the client local time zone is
not shown in the history server web UI
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20557#discussion_r167419765
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -539,15 +539,15 @@ case class DescribeTableCommand
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20570
[SPARK-23382][WEB-UI] The Spark Streaming UI's form contents need hide and
show features when the table has very many records.
## What changes were proposed
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20557#discussion_r167416457
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -539,15 +539,15 @@ case class DescribeTableCommand
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20557
[SPARK-23364][SQL]'desc table' command in spark-sql add column head display
## What changes were proposed in this pull request?
Use 'desc partition_table' command in spark-sql
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20543
[SPARK-23357][CORE] 'SHOW TABLE EXTENDED LIKE pattern=STRING' should add a
'Partitioned' display similar to Hive; when the partition is empty, it also
needs to show an empty partition field []
## What
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r165253828
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r165251810
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r165247567
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r165239860
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r164996836
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r164975752
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20437#discussion_r164973156
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
---
@@ -157,7 +157,7 @@ class FileInputDStream[K, V, F
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20437
Thank you for your review.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20437
[SPARK-23270][Streaming][WEB-UI] FileInputDStream's Streaming UI record count
should not be set to the default value of 0; it should be the total number of
rows of the new files.
## What changes
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20259
Thank you for the review, I will close this PR. I'm going to use a script to
monitor the health of the Master process
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/20259
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/20287
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20287
@smurakozi @vanzin @srowen
Thanks, I will close the PR.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
Please help merge the code, thank you.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20287
@smurakozi
Help review the code, this bug results from your added functionality.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20287
Well, then can you tell me what specific changes to make? I do not have a good
idea right now. The problem is that the page crashes; it should be a fatal bug
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
test this please
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20259#discussion_r161959792
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -179,6 +181,7 @@ private[deploy] class Master
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20287
[SPARK-23121][WEB-UI] When the Spark Streaming app has been running for a
period of time, the page incorrectly reports an error when accessing '/jobs' or
'/jobs/job?id=13'
## What changes were
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20259#discussion_r161937136
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -125,6 +125,8 @@ private[deploy] class Master(
private var
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20259#discussion_r161936082
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -179,6 +181,7 @@ private[deploy] class Master
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
Yes, it makes the Workers / Apps lists collapsible in the same way as other
blocks.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
I agree with your second suggestion. Before, I did not understand what you
meant; now that I have run the test, I understand.
1. In order for collapsible tables to persist
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20259
I set the start-up time as a metric. The metric replaces the master page
display.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20259
OK, I understand your suggestion.
Can I expose the start-up time as a metric?
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
@ajbozarth
The first suggestion, I have already fixed it.
![3](https://user-images.githubusercontent.com/26266482/34931312-27b9e74a-fa09-11e7-89e5-8b7c0f5ad59b.png
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20259
1. The concern about the start-up time is to see whether the system is stable.
2. Our system has 50,000+ apps running every day; the master will generate a lot
of app registration, management
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20259
Sir, I stick to my point for the following reasons:
1. When the Spark system has been running for some time, the log has been rolled
over; because we use log4j, we simply cannot see
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
1. The first suggestion, I will fix it.
2. The second suggestion, I think, is not necessary, because when the Spark
system is small, say 3 workers, there is no need to hide the table from
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20259
[SPARK-23066][WEB-UI] Master page: add master start-up time.
## What changes were proposed in this pull request?
When a spark system runs stably for a long time, we do not know
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
@ajbozarth @srowen
Fixed the code: added the arrow on the form page to maintain consistency of
the feature.
after fix:
![4](https://user-images.githubusercontent.com
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20194#discussion_r161130352
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -141,7 +141,7 @@ statement
(LIKE? pattern
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
![3](https://user-images.githubusercontent.com/26266482/34856154-87b381b6-f77e-11e7-932e-bb14415dc56a.png
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
No, it just hides the table; in fact, the data is already on the page, but we
cannot see it.
When we refresh the page, it will re-show all the data
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20194#discussion_r160869162
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -141,7 +141,7 @@ statement
(LIKE? pattern
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/20216
Dear Sir, however, in a real Spark big-data environment there are a very large
number of workers, a very large number of applications running every day, and
a very large number that have completed
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20216
[SPARK-23024][WEB-UI] The Spark UI's form contents need hide and show
features when the table has very many records.
## What changes were proposed in this pull request
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/20194#discussion_r160313038
--- Diff:
sql/catalyst/src/main/antlr4/org/apache/spark/sql/catalyst/parser/SqlBase.g4 ---
@@ -141,7 +141,7 @@ statement
(LIKE? pattern
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/20194
[SPARK-22999][SQL]'show databases like command' can remove the like keyword
## What changes were proposed in this pull request?
SHOW DATABASES (LIKE pattern = STRING)? Can be like
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19841
+1; this avoids unpredictable exceptions that cause the temporary directory or
file to be deleted.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
@cloud-fan
Help merge the code.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
retest this please
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
retest this please
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
I have updated the title and description.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
Thank you for your review comments. I have restored the code, dropping the
in-code calculation; now only the documentation changes are kept. Please review
again.
@srowen @jiangxb1987 @cloud
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19625
Please upload the screenshot in PR.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
retest this please
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/19532#discussion_r147677958
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -120,7 +120,7 @@ private[spark] class SparkUI private (
attemptId
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19520
I would like to ask: under what circumstances will the application id
contain a forward slash?
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19507
@srowen
Help review the code.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
@jiangxb1987 @srowen
Help review the code.
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19507
Please refer to https://github.com/apache/spark/pull/19346
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19532
@jiangxb1987 I modified it.
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/19532#discussion_r145618914
--- Diff: core/src/main/scala/org/apache/spark/ui/SparkUI.scala ---
@@ -120,7 +120,7 @@ private[spark] class SparkUI private (
attemptId
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/19532
[CORE] Stage API: modify the description format, add a version API, and
compute the duration in real time
## What changes were proposed in this pull request?
stage api
Github user guoxiaolongzte commented on a diff in the pull request:
https://github.com/apache/spark/pull/19242#discussion_r145346521
--- Diff: docs/configuration.md ---
@@ -740,6 +740,20 @@ Apart from these, the following properties are also
available, and may be useful
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19242
@srowen
Help to review the code, thanks.
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/19360
GitHub user guoxiaolongzte reopened a pull request:
https://github.com/apache/spark/pull/19360
[SPARK-22139][CORE]Remove the variable which is never used in
SparkConf.scala
## What changes were proposed in this pull request?
Remove the variable which is never used
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19507
@ajbozarth
Sorry, before uploading the code I accidentally removed the parenthesis. I
have re-added the parenthesis and fixed
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/19507
add count in fair scheduler pool page
## What changes were proposed in this pull request?
Add count in fair scheduler pool page. The purpose is to know the
statistics clearly
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19399
Nice, I think it should be merged.
Github user guoxiaolongzte closed the pull request at:
https://github.com/apache/spark/pull/19360
Github user guoxiaolongzte commented on the issue:
https://github.com/apache/spark/pull/19360
@HyukjinKwon Regarding the problem tracked by the PR you mentioned, I will not
pursue it; I will close this PR.
GitHub user guoxiaolongzte opened a pull request:
https://github.com/apache/spark/pull/19397
[SPARK-22173] Table CSS style needs to be adjusted in History Page and in
Executors Page.
## What changes were proposed in this pull request?
There is a problem with table CSS