Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19118
@jiangxb1987 well, the test case is hard to construct if we just run the app in local mode, as in the comments above. Any ideas to crack this?
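For what it's worth, one way to exercise executor-side code paths in a test is Spark's `local-cluster` master, which launches real executor JVMs (unlike plain `local[*]`, where tasks run in the driver JVM). This is only a hedged sketch: the app name and output path are made up, and it assumes a Spark 2.x build on the classpath.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical repro sketch: "local-cluster[2, 1, 1024]" starts 2 executor
// JVMs with 1 core and 1024 MB each, so executor-side initialization
// differences can surface, unlike "local[*]".
object LocalClusterRepro {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local-cluster[2, 1, 1024]")
      .setAppName("spark-21882-repro") // name is illustrative
    val sc = new SparkContext(conf)
    try {
      sc.parallelize(1 to 100, 2)
        .map(i => (i, i.toString))
        .saveAsSequenceFile("/tmp/spark-21882-out") // path is illustrative
    } finally {
      sc.stop()
    }
  }
}
```

`saveAsSequenceFile` goes through the same Hadoop write path as `saveAsHadoopDataset`, so the output-metrics counting would be exercised on the executors rather than the driver.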
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19118
@jiangxb1987 well, I got past that part above, but hit other initialization points before runJob. They are in the write function of SparkHadoopWriter.
>
// Assert the out
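To make the ordering being discussed concrete, here is a hedged, stub-typed sketch of the write flow: the real `SparkHadoopWriter.write` uses Hadoop committer and job-context types, so the names below are stand-ins, not Spark's API.

```scala
// Hypothetical sketch of the SparkHadoopWriter.write flow: driver-side
// initialization happens before runJob, task-side work inside it.
object WriteFlowSketch {
  trait Committer {
    def setupJob(): Unit    // driver side, before any task runs
    def commitJob(): Unit   // driver side, after all tasks finish
  }

  // `runTasks` stands in for SparkContext.runJob: it invokes the task body
  // once per partition (here in-process, purely for illustration).
  def write(partitions: Int, committer: Committer)(task: Int => Unit): Unit = {
    committer.setupJob()               // initialization before runJob
    (0 until partitions).foreach(task) // executeTask, one call per partition
    committer.commitJob()              // finalization after runJob
  }
}
```

In the real code the task body runs in executor JVMs, so state set up during the driver-side steps is not automatically visible where the records are written.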
Github user awarrior commented on a diff in the pull request:
https://github.com/apache/spark/pull/19118#discussion_r138263099
--- Diff:
core/src/main/scala/org/apache/spark/internal/io/SparkHadoopWriter.scala ---
@@ -112,11 +112,12 @@ object SparkHadoopWriter extends Logging
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19118
I ran into trouble while writing a test case: it seems this issue isn't triggered with only one node. I found that the driver node does createPathFromString, so there is no problem on a single node
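If I understand the mechanism right, the bytes-written callback snapshots the FileSystem statistics that exist when it is created, so a filesystem initialized only later (on an executor) is missed, while on one node the driver's createPathFromString has already initialized it. Below is a hedged, self-contained model of that snapshot behaviour: plain Scala, not Spark's code, and all names are made up.

```scala
import scala.collection.mutable

// Hypothetical model of delta-based byte counting over per-scheme
// FileSystem statistics. It only illustrates why a statistics object
// created AFTER the callback is never counted.
object StatsSnapshotModel {
  val allStats = mutable.Map[String, Long]() // scheme -> bytes written

  // Mirrors the shape of a bytes-written-on-thread callback: snapshot the
  // schemes known now, and later report the delta over that snapshot only.
  def bytesWrittenCallback(): () => Long = {
    val known = allStats.keySet.toSet
    val baseline = known.toSeq.map(allStats).sum
    () => known.toSeq.map(s => allStats.getOrElse(s, 0L)).sum - baseline
  }

  def main(args: Array[String]): Unit = {
    // Cluster-like order: callback created before the executor initializes
    // the output FileSystem -> the write is missed.
    val missed = bytesWrittenCallback()
    allStats("hdfs") = 1024L
    println(missed())  // 0

    // Local-mode-like order: the scheme was already initialized
    // (as by the driver's createPathFromString) -> the write is counted.
    val counted = bytesWrittenCallback()
    allStats("hdfs") += 1024L
    println(counted()) // 1024
  }
}
```

The same writes produce 0 or 1024 depending only on whether the statistics object existed when the callback was created, which matches why a single-node run hides the bug.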
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19118
@jiangxb1987 OK, I'll add one later.
---
-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19115
@markhamstra sorry for the trouble; I have opened a new PR, #19118.
GitHub user awarrior opened a pull request:
https://github.com/apache/spark/pull/19118
[SPARK-21882][CORE] OutputMetrics doesn't count written bytes correctly in
the saveAsHadoopDataset function
spark-21882
## What changes were proposed in this pull request?
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19115
ok, thx
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user awarrior closed the pull request at:
https://github.com/apache/spark/pull/19115
Github user awarrior commented on the issue:
https://github.com/apache/spark/pull/19115
@jerryshao hi~ I have modified this PR, but the patch only works against 2.2.0 (the surrounding code has changed since). I want to confirm whether I need to create a new PR. Thanks!
Github user awarrior closed the pull request at:
https://github.com/apache/spark/pull/19114
GitHub user awarrior opened a pull request:
https://github.com/apache/spark/pull/19115
Update PairRDDFunctions.scala
https://issues.apache.org/jira/browse/SPARK-21882
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
GitHub user awarrior opened a pull request:
https://github.com/apache/spark/pull/19114
Update PairRDDFunctions.scala
https://issues.apache.org/jira/browse/SPARK-21882
You can merge this pull request into a Git repository by running:
$ git pull https://github.com