Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
retest this please
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
retest this please
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen Yes, I agree with you! These places should be consistent; otherwise
it is easy to get confused. I will try to modify the log statements and docs.
Should I modify them in this PR or in a new one?
Github user httfighter commented on a diff in the pull request:
https://github.com/apache/spark/pull/22683#discussion_r238215152
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1164,17 +1164,17 @@ private[spark] object Utils extends Logging {
} else
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
retest this please
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen Sorry, I just saw your message. I am a little busy on weekdays. I
will try to modify the test cases in the next few days.
GitHub user httfighter reopened a pull request:
https://github.com/apache/spark/pull/22683
[SPARK-25696] The storage memory displayed on spark Application UI is incorrect.
## What changes were proposed in this pull request?
In the reported heartbeat information
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen @ajbozarth I have added the changes. Could you help me review the
code? Thank you very much.
Github user httfighter closed the pull request at:
https://github.com/apache/spark/pull/22683
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen OK. Thank you very much for your advice.
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen @ajbozarth I am not sure about some things; can you give me some
advice? In the process of modification, I have a question: in Spark, do M
and MB represent MiB? Spark does not use
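For context on why the unit question matters, here is a minimal sketch (not Spark code; the formatting helpers are hypothetical) of how the same byte count reads differently under decimal MB (10^6 bytes) versus binary MiB (2^20 bytes) labels:

```python
# Hypothetical illustration: the same storage-memory value formatted with
# decimal (MB = 10**6 bytes) versus binary (MiB = 2**20 bytes) units.
def fmt_decimal(n_bytes: int) -> str:
    return f"{n_bytes / 10**6:.1f} MB"

def fmt_binary(n_bytes: int) -> str:
    return f"{n_bytes / 2**20:.1f} MiB"

n = 384 * 1024 * 1024            # e.g. a 384 MiB storage-memory pool
print(fmt_binary(n))             # 384.0 MiB
print(fmt_decimal(n))            # 402.7 MB -- same bytes, different label
```

If a UI computes with 1024-based divisors but prints "MB", the displayed number is really MiB, which is the inconsistency this discussion is about.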
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
@srowen Thank you for your review. I agree with you, and I will make
changes in the near future.
@wangyum Thank you for your help
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22683
It's ok. @ajbozarth
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/22683
[SPARK-25696] The storage memory displayed on spark Application UI is incorrect.
## What changes were proposed in this pull request?
Change the cardinality of the unit
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/22487
@wangyum In Hive, INSERT OVERWRITE LOCAL DIRECTORY does not use a
local staging directory; it uses a distributed staging directory, so Hive
does not have this problem.
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/22487
[SPARK-25477] "INSERT OVERWRITE LOCAL DIRECTORY", the data files
allocated on the non-driver node will not be written to the specified output
directory
## What changes
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
It failed again. I don't know what the problem is. Could you help me
trigger it again? @viirya
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
Thank you very much! @viirya
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
The last test build failed, but all the test cases passed. I don't know
what the problem is. Could you help me trigger it again? @HyukjinKwon
Github user httfighter commented on a diff in the pull request:
https://github.com/apache/spark/pull/21826#discussion_r205934501
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/PredicateSuite.scala
---
@@ -455,4 +456,10 @@ class PredicateSuite
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
I have submitted a new code. Could you help me review the code? Thank you!
@HyukjinKwon @viirya @gatorsmile @rxin @hvanhovell
In Hive, "||" performs the function of STRING concat,
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
I have a suggestion, though I don't know if it is reasonable.
In our Spark, since we already support "||" as a string concatenation
function, I don't know if we can make such an improvement
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21826
I did the following tests in mysql.
mysql> select "abc" || "def";
+----------------+
| "abc" || "def" |
+----------------+
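As background on that MySQL test (a hedged sketch, not actual MySQL internals): in MySQL's default SQL mode, `||` is logical OR and non-numeric strings coerce to 0, so the query returns 0; only with the `PIPES_AS_CONCAT` SQL mode (as in standard SQL, Oracle, and PostgreSQL) does `||` concatenate. The toy `mysql_pipes` helper below is hypothetical and only emulates the two interpretations:

```python
# Hypothetical emulation of MySQL's two readings of "||":
# logical OR in the default SQL mode, string concat under PIPES_AS_CONCAT.
def mysql_pipes(lhs: str, rhs: str, pipes_as_concat: bool = False):
    if pipes_as_concat:
        return lhs + rhs                 # "||" acts like CONCAT
    def truthy(s: str) -> bool:
        # Simplified coercion: a non-numeric string becomes 0 (falsy).
        try:
            return float(s) != 0.0
        except ValueError:
            return False
    return 1 if truthy(lhs) or truthy(rhs) else 0

print(mysql_pipes("abc", "def"))                        # 0
print(mysql_pipes("abc", "def", pipes_as_concat=True))  # abcdef
```

This is why the result of the quoted MySQL session depends on the server's SQL mode, and why `||`-as-concat behavior differs across databases.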
Github user httfighter commented on a diff in the pull request:
https://github.com/apache/spark/pull/21826#discussion_r204274497
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -442,8 +442,6 @@ case class Or(left: Expression
Github user httfighter commented on a diff in the pull request:
https://github.com/apache/spark/pull/21826#discussion_r204274481
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -442,8 +442,6 @@ case class Or(left: Expression
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/21826
[SPARK-24872] Remove the symbol "||" of the "OR" operation
## What changes were proposed in this pull request?
"||" will perform the function of STRING concat, and it is also
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21767
Thank you for your comments, @srowen @HyukjinKwon @wangyum. I will try to
contribute more valuable issues
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/21767
SPARK-24804 There are duplicate words in the title in the DatasetSuite
## What changes were proposed in this pull request?
In DatasetSuite.scala, at line 1299,
test("SPARK-
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/21023
@gatorsmile
Thank you very much! Can you take a look at this PR?
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/21023
[SPARK-23949] Make && support the function of the predicate operator AND
[https://issues.apache.org/jira/browse/SPARK-23949](https://issues.apache.org/jira/browse/SPARK-23949)
[SPA
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/19380
I understand everyone's worries, but I have a few thoughts.
Firstly, the native unix_timestamp itself supports the "yyyy-MM-dd
HH:mm:ss.SSS" form of the date, but the resu
Github user httfighter commented on the issue:
https://github.com/apache/spark/pull/19380
In an RDBMS, the unix_timestamp method can keep the milliseconds. For
example, execute the command as follows
select unix_timestamp("2017-10-10 10:10:20.111") from test;
y
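To make the truncation concrete, here is a minimal sketch (illustrative Python, not Spark's or the RDBMS's implementation; the helper names are hypothetical) showing how a seconds-granularity unix_timestamp silently drops the `.111` millisecond part of the example input:

```python
# Illustration: a seconds-based unix_timestamp loses the millisecond part,
# while a fractional-seconds variant keeps it. Input is treated as UTC.
from datetime import datetime, timezone

def unix_timestamp_seconds(ts: str) -> int:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    return int(dt.replace(tzinfo=timezone.utc).timestamp())  # truncates

def unix_timestamp_fractional(ts: str) -> float:
    dt = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f")
    return dt.replace(tzinfo=timezone.utc).timestamp()       # keeps .111

print(unix_timestamp_seconds("2017-10-10 10:10:20.111"))    # 1507630220
print(unix_timestamp_fractional("2017-10-10 10:10:20.111")) # ~1507630220.111
```

The integer result is what a seconds-only implementation returns; the fractional variant shows what preserving milliseconds would look like.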
GitHub user httfighter opened a pull request:
https://github.com/apache/spark/pull/19380
[SPARK-22157] [SQL] The unix_timestamp method handles the time field that
is lost in mill
## What changes were proposed in this pull request?
keep the millisecond part of the time field