[GitHub] spark pull request: [SPARK-4314][Streaming] Exception throws when ...

2014-12-23 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3203#issuecomment-68023838 refer to jira, got it --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception throws when ...

2014-12-23 Thread maji2014
Github user maji2014 closed the pull request at: https://github.com/apache/spark/pull/3203

[GitHub] spark pull request: [SPARK-4691][Minor] Rewrite a few lines in shu...

2014-12-06 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3553#issuecomment-65904987 NP, done for the title change

[GitHub] spark pull request: [spark-4691][shuffle]Code improvement for aggr...

2014-12-05 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3553#issuecomment-65761712 @pwendell any idea about this title?

[GitHub] spark pull request: [spark-4691][shuffle]code optimization for jud...

2014-12-02 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/3553 [spark-4691][shuffle]code optimization for judgement In HashShuffleReader.scala and HashShuffleWriter.scala, there is no need to check dep.aggregator.isEmpty again, as this is judged
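The pattern this PR removes is a repeated emptiness test on the same value along one code path. A minimal Python sketch (hypothetical names; the real code is Scala in HashShuffleReader/HashShuffleWriter, with `aggregator.isEmpty` played here by `None`):

```python
class Dep:
    """Hypothetical stand-in for a shuffle dependency."""
    def __init__(self, aggregator=None):
        # None plays the role of dep.aggregator.isEmpty in the Scala code.
        self.aggregator = aggregator

def combine(dep, records):
    if dep.aggregator is not None:
        # Inside this branch, a second `dep.aggregator is not None` test
        # (the redundant judgement the PR deletes) would always be true,
        # so the aggregator can be applied directly.
        return dep.aggregator(records)
    return list(records)
```

For example, `combine(Dep(sum), [1, 2, 3])` aggregates, while `combine(Dep(), [1, 2])` passes records through unchanged.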

[GitHub] spark pull request: [spark-4691][shuffle]code optimization for jud...

2014-12-02 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3553#discussion_r21207386 --- Diff: core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala --- @@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C

[GitHub] spark pull request: [spark-4691][shuffle]code optimization for jud...

2014-12-02 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3553#discussion_r21213166 --- Diff: core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala --- @@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C

[GitHub] spark pull request: [spark-4691][shuffle]code optimization for jud...

2014-12-02 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3553#discussion_r21213167 --- Diff: core/src/main/scala/org/apache/spark/shuffle/hash/HashShuffleReader.scala --- @@ -45,7 +45,7 @@ private[spark] class HashShuffleReader[K, C

[GitHub] spark pull request: [SPARK-4619][Storage]delete redundant time suf...

2014-11-26 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/3475 [SPARK-4619][Storage]delete redundant time suffix The time suffix already exists in Utils.getUsedTimeMs(startTime); there is no need to append it again, so delete it.
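The bug is double-suffixing: when a helper already returns a string carrying its unit, appending the unit again at the call site yields output like "123 ms ms". A Python sketch of the idea, with `get_used_time_ms` as a hypothetical stand-in for Spark's `Utils.getUsedTimeMs`:

```python
import time

def get_used_time_ms(start_time_ms: float) -> str:
    # Stand-in for Utils.getUsedTimeMs: the returned string already
    # carries the " ms" suffix.
    return f"{int(time.time() * 1000 - start_time_ms)} ms"

def log_line(start_time_ms: float) -> str:
    # The fix: do NOT append " ms" again here; the helper's result
    # already ends with it.
    return f"Finished in {get_used_time_ms(start_time_ms)}"
```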

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception when textFil...

2014-11-13 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3203#issuecomment-62904632 Is there any other place that should be changed?

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception when textFil...

2014-11-12 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3203#issuecomment-62714198 Yes, no other cases involving HDFS need to be modified from the current situation!

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception when textFil...

2014-11-11 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3203#issuecomment-62551435 I know that this form is not simple and elegant. This issue was found in our project. The reason I define the prefix and suffix variables is that I am not sure how many

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception when textFil...

2014-11-11 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3203#issuecomment-62661477 About other places where an incomplete file might be read: from my point of view, an HDFS file could be read by streaming [such as HdfsWordCount] and Spark [such as HdfsTest

[GitHub] spark pull request: [SPARK-4295][External]Fix exception in SparkSi...

2014-11-10 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3177#issuecomment-62363954 Yes, the test cases all pass, although an exception is thrown before each test case. You can run SparkSinkSuite.scala directly and see output like 4/11

[GitHub] spark pull request: [SPARK-4295][External]Fix exception in SparkSi...

2014-11-10 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3177#discussion_r20130942 --- Diff: external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala --- @@ -159,6 +159,7 @@ class SparkSinkSuite

[GitHub] spark pull request: [SPARK-4295][External]Fix exception in SparkSi...

2014-11-10 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3177#discussion_r20132263 --- Diff: external/flume-sink/src/test/scala/org/apache/spark/streaming/flume/sink/SparkSinkSuite.scala --- @@ -159,6 +159,7 @@ class SparkSinkSuite

[GitHub] spark pull request: [SPARK-4314][Streaming] Exception when textFil...

2014-11-10 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/3203 [SPARK-4314][Streaming] Exception when textFileStream attempts to read deleted _COPYING_ file The ephemeral file (_COPYING_) is caught by the FileInputDStream interface. On one hand, the file could
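The underlying issue: `hadoop fs -put` writes to a temporary file whose name ends in `_COPYING_` and renames it once the copy completes, so a directory monitor that picks up the temporary name may find the file already deleted. A minimal sketch of the filtering idea (not the actual FileInputDStream filter):

```python
def is_complete_file(path: str) -> bool:
    # Skip HDFS copy-in-progress files: the copy tool writes to a
    # temporary "..._COPYING_" name and renames it when finished.
    return not path.endswith("_COPYING_")

paths = ["/data/a.txt", "/data/b.txt._COPYING_"]
visible = [p for p in paths if is_complete_file(p)]
```

With the filter applied, only `/data/a.txt` is picked up for processing.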

[GitHub] spark pull request: [SPARK-4295][External]Fix exception in SparkSi...

2014-11-09 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/3177 [SPARK-4295][External]Fix exception in SparkSinkSuite Handle the exception in SparkSinkSuite; please refer to [SPARK-4295].

[GitHub] spark pull request: Sink2 and channel2 should be closed in Flume

2014-11-05 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/3037#issuecomment-61782535 I submitted this request 5 days ago, but most of the code was changed 2 days ago, including sink2.stop and channel2.stop, so I am closing this request

[GitHub] spark pull request: Sink2 and channel2 should be closed in Flume

2014-11-05 Thread maji2014
Github user maji2014 closed the pull request at: https://github.com/apache/spark/pull/3037

[GitHub] spark pull request: Sink2 and channel2 should be closed in Flume

2014-11-01 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3037#discussion_r19704384 --- Diff: external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala --- @@ -184,7 +184,7 @@ object FlumeUtils { hostname

[GitHub] spark pull request: sink2 and channel2 should be closed

2014-10-31 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/3037 sink2 and channel2 should be closed As the title says.
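The general pattern behind this PR is that every started resource pair must be stopped, even when the code between start and stop fails. A Python sketch with a hypothetical `Resource` standing in for a Flume sink or channel:

```python
class Resource:
    """Hypothetical stand-in for a Flume sink or channel."""
    def __init__(self):
        self.running = False
    def start(self):
        self.running = True
    def stop(self):
        self.running = False

def exercise():
    sink2, channel2 = Resource(), Resource()
    sink2.start()
    channel2.start()
    try:
        pass  # ... use the second sink/channel pair here ...
    finally:
        # The point of the PR: stop both resources, even if the
        # body above raises.
        sink2.stop()
        channel2.stop()
    return sink2, channel2
```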

[GitHub] spark pull request: Sink2 and channel2 should be closed in Flume

2014-10-31 Thread maji2014
Github user maji2014 commented on a diff in the pull request: https://github.com/apache/spark/pull/3037#discussion_r19700506 --- Diff: external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeUtils.scala --- @@ -184,7 +184,7 @@ object FlumeUtils { hostname

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-19 Thread maji2014
Github user maji2014 closed the pull request at: https://github.com/apache/spark/pull/1457

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-19 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/1494 Required AM memory is amMem, not args.amMemory The error "ERROR yarn.Client: Required AM memory (1024) is above the max threshold (1048) of this cluster" appears if this code is not changed. Obviously, 1024
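The quoted error is self-contradictory: it prints 1024, which is below the 1048 threshold, because the message reports the raw user setting while the check compares the setting plus overhead. A sketch of the corrected behavior (hypothetical function and parameter names):

```python
def check_am_memory(am_memory: int, overhead: int, max_threshold: int) -> str:
    # amMem is what YARN actually requests: the user's setting plus overhead.
    am_mem = am_memory + overhead
    if am_mem > max_threshold:
        # Report am_mem (the value actually compared), not am_memory,
        # otherwise the message can show a number below the threshold.
        raise ValueError(
            f"Required AM memory ({am_mem}) is above the max threshold "
            f"({max_threshold}) of this cluster"
        )
    return f"AM memory ok: {am_mem} <= {max_threshold}"
```

With am_memory=1024 and an overhead of 384, the message now shows 1408, which is consistently above 1048.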

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-17 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/1457 Required AM memory is amMem, not args.amMemory The error "ERROR yarn.Client: Required AM memory (1024) is above the max threshold (1048) of this cluster" appears if this code is not changed. Obviously, 1024

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-17 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/1457#issuecomment-49264699 Please focus on the second issue, as the first issue is an old patch from June.

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-17 Thread maji2014
Github user maji2014 closed the pull request at: https://github.com/apache/spark/pull/1457

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-17 Thread maji2014
GitHub user maji2014 reopened a pull request: https://github.com/apache/spark/pull/1457 Required AM memory is amMem, not args.amMemory The error "ERROR yarn.Client: Required AM memory (1024) is above the max threshold (1048) of this cluster" appears if this code is not changed. Obviously, 1024

[GitHub] spark pull request: Required AM memory is amMem, not args.amMem...

2014-07-17 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/1457#issuecomment-49391500 Please focus on the second issue, as in the title; the first "Update run-example" is an old patch.

[GitHub] spark pull request: Update run-example

2014-06-08 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/988#issuecomment-45434259 OK, I agree with that. Please merge this patch.

[GitHub] spark pull request: Update run-example

2014-06-08 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/1011 Update run-example The old code can only be run under SPARK_HOME using bin/run-example. The error "./run-example: line 55: ./bin/spark-submit: No such file or directory" appears when running from other directories.

[GitHub] spark pull request: Update run-example

2014-06-08 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/988#issuecomment-45434458 I agree with that, and patch #1011 is opened against the master branch. You can merge it and back-port it into 1.0.1

[GitHub] spark pull request: Update run-example

2014-06-06 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/988#issuecomment-45315985 OK, SPARK-2057 is opened for tracking this issue.

[GitHub] spark pull request: Update run-example

2014-06-05 Thread maji2014
GitHub user maji2014 opened a pull request: https://github.com/apache/spark/pull/988 Update run-example The old code can only be run under SPARK_HOME using bin/run-example. The error "./run-example: line 55: ./bin/spark-submit: No such file or directory" appears when running from other directories.

[GitHub] spark pull request: Update run-example

2014-06-05 Thread maji2014
Github user maji2014 commented on the pull request: https://github.com/apache/spark/pull/988#issuecomment-45296549 Maybe it's better to change it to $FWDIR/bin/spark-submit and commit it to apache:master
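The `$FWDIR` suggestion is the standard cure for cwd-relative paths in launcher scripts: resolve helper binaries against the script's own directory instead of whatever directory the user happens to invoke it from. The actual fix lives in a shell script; the same idea in Python, as an illustrative sketch:

```python
import os
import sys

def spark_submit_path() -> str:
    # Resolve spark-submit relative to this script's own directory
    # (the $FWDIR idea from the thread), not the caller's current
    # working directory, so the script works from anywhere.
    fwdir = os.path.dirname(os.path.abspath(sys.argv[0]))
    return os.path.join(fwdir, "bin", "spark-submit")
```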