Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3203#issuecomment-68023838
Refer to the JIRA, got it.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user maji2014 closed the pull request at:
https://github.com/apache/spark/pull/3203
---
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3553#issuecomment-65904987
NP, done with the title change.
---
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3553#issuecomment-65761712
@pwendell any idea about this title?
---
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/3553
[SPARK-4691][Shuffle] Code optimization for judgement
In HashShuffleReader.scala and HashShuffleWriter.scala, there is no need to
check dep.aggregator.isEmpty again, as this is already checked
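The redundancy the PR points at can be sketched as follows (stand-in `Dep` type and placeholder combine logic, not the real Spark classes):

```scala
// In the `else` branch of `aggregator.isDefined`, the aggregator is already
// known to be empty, so an inner `dep.aggregator.isEmpty` test adds nothing.
case class Dep(aggregator: Option[String], mapSideCombine: Boolean)

def read(dep: Dep, iter: Iterator[Int]): Iterator[Int] =
  if (dep.aggregator.isDefined) {
    iter.map(_ + 1)                 // combine path (placeholder logic)
  } else if (dep.mapSideCombine) {  // before: `dep.aggregator.isEmpty && dep.mapSideCombine`
    throw new IllegalStateException("Aggregator is empty for map-side combine!")
  } else {
    iter                            // pass records through unchanged
  }
```

The behavior is identical; only the redundant condition is dropped.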
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3553#discussion_r21207386
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala ---
@@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3553#discussion_r21213166
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala ---
@@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3553#discussion_r21213167
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala ---
@@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/3475
[SPARK-4619][Storage] Delete redundant time suffix
The time suffix already exists in Utils.getUsedTimeMs(startTime); there is no
need to append it again, so delete the extra one.
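The doubled unit can be sketched like this, assuming (as in Spark's Utils) that the elapsed-time helper already returns a string ending in " ms"; times are passed explicitly here to keep the example deterministic:

```scala
// Simplified stand-in for Utils.getUsedTimeMs: the " ms" unit is part of
// the returned string, so callers must not append "ms" a second time.
def getUsedTimeMs(startTimeMs: Long, nowMs: Long): String =
  s"${nowMs - startTimeMs} ms"

val elapsed = getUsedTimeMs(100L, 350L)
val before  = s"Writing shuffle took $elapsed ms"  // duplicated suffix: "... 250 ms ms"
val after   = s"Writing shuffle took $elapsed"     // correct: "... 250 ms"
```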
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3203#issuecomment-62904632
Is there any other place that should be changed?
---
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3203#issuecomment-62714198
Yes, no other HDFS-related cases need to be modified from the current
situation!
---
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3203#issuecomment-62551435
I know that this form is not simple and elegant. This issue was found in our
project. The reason I define the prefix and suffix variables is that I am not
sure how many
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3203#issuecomment-62661477
Regarding other places where an incomplete file might be read: from my point
of view, an HDFS file could be read by streaming (such as HdfsWordCount) and
by Spark (such as HdfsTest
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3177#issuecomment-62363954
Yes, the test cases all pass, although an exception is thrown before each
test case.
You can run SparkSinkSuite.scala directly and see what it looks like:
4/11
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3177#discussion_r20130942
--- Diff:
external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala
---
@@ -159,6 +159,7 @@ class SparkSinkSuite
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3177#discussion_r20132263
--- Diff:
external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala
---
@@ -159,6 +159,7 @@ class SparkSinkSuite
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/3203
[SPARK-4314][Streaming] Exception when textFileStream attempts to read a
deleted _COPYING_ file
The ephemeral file (_COPYING_) is caught by the FileInputDStream interface.
On one hand, the file could
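The filtering idea behind the fix can be sketched as follows: `hadoop fs -put` writes to a temporary `..._COPYING_` file and renames it on completion, so skipping such paths keeps the stream from picking up half-written files. This is simplified to plain strings, not the real Hadoop Path/PathFilter API:

```scala
// A path is "in progress" while it still carries the _COPYING_ suffix.
def isInProgress(path: String): Boolean = path.endsWith("_COPYING_")

// Only fully copied files are eligible for the file stream.
def eligibleFiles(paths: Seq[String]): Seq[String] =
  paths.filterNot(isInProgress)
```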
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/3177
[SPARK-4295][External] Fix exception in SparkSinkSuite
Handle the exception in SparkSinkSuite; please refer to [SPARK-4295].
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/3037#issuecomment-61782535
I submitted this request 5 days ago, but most of the code was changed 2 days
ago, including sink2.stop and channel2.stop, so I am closing this request.
---
Github user maji2014 closed the pull request at:
https://github.com/apache/spark/pull/3037
---
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3037#discussion_r19704384
--- Diff:
external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala
---
@@ -184,7 +184,7 @@ object FlumeUtils {
hostname
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/3037
sink2 and channel2 should be closed
As the title says.
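The cleanup point can be sketched with stand-in types (not the real Flume sink/channel API): every resource pair created in the test should be stopped, not just the first one.

```scala
// Minimal stand-ins that record whether stop() was called.
trait Stoppable {
  var stopped = false
  def stop(): Unit = { stopped = true }
}
class Sink extends Stoppable
class Channel extends Stoppable

val sink2    = new Sink
val channel2 = new Channel
// Before the fix only the first sink/channel pair was stopped;
// sink2 and channel2 must be stopped as well.
Seq[Stoppable](sink2, channel2).foreach(_.stop())
```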
Github user maji2014 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3037#discussion_r19700506
--- Diff:
external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala
---
@@ -184,7 +184,7 @@ object FlumeUtils {
hostname
Github user maji2014 closed the pull request at:
https://github.com/apache/spark/pull/1457
---
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/1494
Required AM memory is amMem, not args.amMemory
The error `ERROR yarn.Client: Required AM memory (1024) is above the max
threshold (1048) of this cluster` appears if this code is not changed.
Obviously, 1024
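The reported-value bug can be sketched in a standalone form (simplified signature, not the real yarn.Client code): the threshold comparison uses `amMem`, the requested memory plus overhead, so the error message should print `amMem` rather than the raw `args.amMemory`.

```scala
// Returns an error message when the total AM memory exceeds the cluster max.
def checkAmMemory(amMemory: Int, overhead: Int, maxMem: Int): Option[String] = {
  val amMem = amMemory + overhead   // the value actually compared below
  if (amMem > maxMem)
    Some(s"Required AM memory ($amMem) is above the max threshold ($maxMem) of this cluster")
  else
    None
}
```

With a 1024 MB request and a hypothetical 384 MB overhead, the message reports 1408, the figure the check really used.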
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/1457
Required AM memory is amMem, not args.amMemory
The error `ERROR yarn.Client: Required AM memory (1024) is above the max
threshold (1048) of this cluster` appears if this code is not changed.
Obviously, 1024
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/1457#issuecomment-49264699
Please focus on the second issue, as the first issue is an old patch from June.
---
Github user maji2014 closed the pull request at:
https://github.com/apache/spark/pull/1457
---
GitHub user maji2014 reopened a pull request:
https://github.com/apache/spark/pull/1457
Required AM memory is amMem, not args.amMemory
The error `ERROR yarn.Client: Required AM memory (1024) is above the max
threshold (1048) of this cluster` appears if this code is not changed.
Obviously, 1024
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/1457#issuecomment-49391500
Please focus on the second issue, as in the title; the first one, Update
run-example, is an old patch.
---
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/988#issuecomment-45434259
OK, I agree with that. Please merge this patch.
---
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/1011
Update run-example
The old code can only be run under SPARK_HOME using bin/run-example. The
error `./run-example: line 55: ./bin/spark-submit: No such file or directory`
appears when running in other
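A minimal sketch of the fix, following the `$FWDIR` convention the thread mentions: resolve the Spark home from the script's own location so run-example works from any working directory, instead of relying on the relative path `./bin/spark-submit`. The helper name is ours, not from the original script.

```shell
# Given the path of the running script (e.g. "$0" inside bin/run-example),
# print the enclosing Spark home directory.
resolve_fwdir() {
  cd "$(dirname "$1")/.." && pwd
}

# Inside run-example one would then do something like:
#   FWDIR="$(resolve_fwdir "$0")"
#   exec "$FWDIR/bin/spark-submit" ...
```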
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/988#issuecomment-45434458
I agree with that, and patch #1011 is opened against the master branch. You
can merge it and back-port it into 1.0.1.
---
If your project is set up for it, you can reply
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/988#issuecomment-45315985
OK, SPARK-2057 has been opened for tracking this issue.
---
GitHub user maji2014 opened a pull request:
https://github.com/apache/spark/pull/988
Update run-example
The old code can only be run under SPARK_HOME using bin/run-example. The
error `./run-example: line 55: ./bin/spark-submit: No such file or directory`
appears when running in other
Github user maji2014 commented on the pull request:
https://github.com/apache/spark/pull/988#issuecomment-45296549
Maybe it's better to change it to $FWDIR/bin/spark-submit and commit it
into apache:master
---
If your project is set up for it, you can reply to this email and have