Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/5114
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/5212
---
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-99753457
@vanzin Thanks for your suggestion. Should the empty file's filename be a
fixed value, or a random value each time it is created?
---
Github user ArcherShao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5886#discussion_r29664363
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -82,6 +77,11 @@ private[history] class FsHistoryProvider(conf
Github user ArcherShao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5886#discussion_r29663820
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -186,13 +186,13 @@ private[history] class FsHistoryProvider
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-98916572
Scanning the files takes time: the scan starts at time T and finishes at
time T + t. If the first file is updated between T and T + t, the bug will
reproduce.
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5886#issuecomment-98902823
@srowen Checking all log files in one pass takes some time, say 't': the
first file (app1) is checked at time T, and the last file (appN) at time T + t,
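A minimal sketch of the race described in the two comments above (the names `LogFile`, `filesToReprocess`, and `lastScanStart` are illustrative, not the actual FsHistoryProvider fields): if the provider compares file modification times against the moment the previous scan *finished*, an update made to an early file while the scan was still running falls inside the window and is silently skipped. Filtering against the scan's *start* time avoids the gap.

```scala
// Hypothetical model of one history-server scan pass; not real Spark code.
case class LogFile(name: String, modTime: Long)

// Safe variant: re-process anything modified at or after the previous scan's
// START time. Comparing against its finish time can miss updates that landed
// mid-scan (between T and T + t in the comment above).
def filesToReprocess(files: Seq[LogFile], lastScanStart: Long): Seq[LogFile] =
  files.filter(_.modTime >= lastScanStart)
```

Re-processing a few already-seen files is cheap; missing an update permanently corrupts the displayed application status, so the conservative comparison is the right trade-off.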
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5886
Fix a bug where application status is incorrect on the JobHistory UI.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ArcherShao/spark SPARK-7336
Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/5676
---
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5676#issuecomment-95832663
@sryza Should I close this PR?
---
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5676
[SPARK-6891] Fix the bug that ExecutorAllocationManager will request a
negative number of executors
In ExecutorAllocationManager, executor allocation is scheduled at a fixed
rate (100 ms); it will call the
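The essence of the fix can be sketched in a few lines (this is a simplified illustration, not the actual ExecutorAllocationManager method): when the allocation step computes how many executors to ask for as "target minus current", a target that shrinks below the current total yields a negative request unless it is clamped at zero.

```scala
// Illustrative only: the real manager tracks pending requests and timeouts too.
// The key point is the clamp, so a shrinking target never produces a negative
// request to the cluster manager.
def executorsToRequest(targetTotal: Int, currentTotal: Int): Int =
  math.max(0, targetTotal - currentTotal)
```

Because the step runs on a fixed 100 ms schedule, the unclamped version could issue many negative requests in quick succession before the target recovered.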
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5519#issuecomment-93244469
OK, I got it. Thanks a lot; I will close this PR.
---
Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/5519
---
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5519
[Minor] Alter the description of some configurations in YARN and Mesos
The value of these configurations is calculated by 'math.max(a, b)', but the
description is 'a with minimum of b
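The distinction the PR title is getting at can be shown directly (the name `effectiveValue` is illustrative, not a Spark API): `math.max(a, b)` means the *configured* value is used only when it exceeds the floor, i.e. "a, with a minimum of b" rather than the reversed wording the old descriptions used.

```scala
// math.max(configured, floor): the floor wins whenever the configured value
// is smaller, so the doc string should describe the second argument as the
// minimum, not the other way around.
def effectiveValue(configured: Int, floor: Int): Int =
  math.max(configured, floor)
```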
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/5212#issuecomment-87521396
@tdas could you review this? Thanks.
---
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5212
[SPARK-2475][Streaming] Check whether there are enough resources to process
the received data in local mode
You can merge this pull request into a Git repository by running:
$ git pull https
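The check the PR title describes can be sketched as follows (a hedged illustration with invented names, not the actual Spark Streaming code): each receiver permanently occupies one core, so in local mode the application needs strictly more cores than receivers, otherwise no core is left to process the received data and the job silently makes no progress.

```scala
// Hypothetical guard: returns true only if at least one core remains free
// for batch processing after every receiver has claimed one.
def hasEnoughCores(totalCores: Int, numReceivers: Int): Boolean =
  totalCores > numReceivers
```

For example, `local[1]` with one receiver fails this check: the single core is consumed by the receiver and nothing processes the stream.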
Github user ArcherShao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5114#discussion_r26997367
--- Diff:
external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala
---
@@ -44,12 +44,14 @@ import
Github user ArcherShao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5114#discussion_r26922663
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/Receiver.scala ---
@@ -107,9 +107,6 @@ abstract class Receiver[T](val storageLevel
Github user ArcherShao commented on a diff in the pull request:
https://github.com/apache/spark/pull/5114#discussion_r26913307
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/receiver/Receiver.scala ---
@@ -107,9 +107,6 @@ abstract class Receiver[T](val storageLevel
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5114
[SPARK-5961][Streaming]Allow specific nodes in a Spark Streaming cluster to
be preferred as Receiver Worker Nodes
1. The function 'setPreferredLocation' is in class ReceiverInputDStream
Github user ArcherShao closed the pull request at:
https://github.com/apache/spark/pull/792
---
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/792#issuecomment-83267442
@ankurdave @lianhuiwang @nchammas @maropu I will close this PR.
---
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/5007
[SQL] Delete some duplicate code in HiveThriftServer2
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ArcherShao/spark 20150313
Alternatively
Github user ArcherShao commented on the pull request:
https://github.com/apache/spark/pull/740#issuecomment-43234498
PrimitiveKeyOpenHashMap has been changed to GraphXPrimitiveKeyOpenHashMap;
I will send a new pull request.
---
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/792
Add a function that can build an EdgePartition faster.
If the user can guarantee that every edge is added in order, using this
function to build an EdgePartition will be faster.
You can merge this pull
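The idea in this PR can be sketched as follows (an illustration with simplified types, not the real GraphX EdgePartitionBuilder API): EdgePartition lookups rely on edges being sorted by source vertex id, so a build path that trusts the caller's ordering can skip the sort entirely.

```scala
// Simplified edge; the real EdgePartition stores columnar arrays plus an index.
case class Edge(srcId: Long, dstId: Long)

// When the caller guarantees the edges are already in (srcId, dstId) order,
// the O(n log n) sort can be skipped; otherwise fall back to sorting.
def buildSorted(edges: Array[Edge], alreadySorted: Boolean): Array[Edge] =
  if (alreadySorted) edges
  else edges.sortBy(e => (e.srcId, e.dstId))
```

The trade-off is the usual one for "trusted input" fast paths: if the caller's ordering guarantee is wrong, later index lookups silently misbehave, which is presumably why reviewers scrutinized this closely.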
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/740
Add a function that can build an EdgePartition faster.
If the user can guarantee that every edge is added in order, using this
function to build an EdgePartition will be faster.
You can merge this pull
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/738
Merge addWithoutResize and rehashIfNeeded into one function.
It will be safer to add an element; users will not need to call
rehashIfNeeded() after addWithoutResize().
You can merge
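The API change proposed here can be sketched over a plain wrapper (this stands in for the real OpenHashSet internals, which grow and rehash a backing array): instead of exposing a two-step protocol where callers must remember `rehashIfNeeded()` after `addWithoutResize(k)`, a single `add` performs both, so the invariant cannot be violated by a forgetful caller.

```scala
import scala.collection.mutable

// Stand-in for OpenHashSet: the two private steps mirror the old public pair.
class SimpleSet[T] {
  private val underlying = mutable.Set.empty[T]
  private def addWithoutResize(k: T): Unit = underlying += k
  private def rehashIfNeeded(): Unit = ()   // the real set grows its array here

  // One safe entry point: insert, then restore the capacity invariant.
  def add(k: T): Unit = { addWithoutResize(k); rehashIfNeeded() }
  def contains(k: T): Boolean = underlying.contains(k)
}
```

Making the half-operations private is the standard way to turn a "call B after A" convention into an invariant the type system enforces.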
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/667
Update OpenHashSet.scala
Fix an incorrect comment on the function addWithoutResize.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ArcherShao/spark
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/647
Update RoutingTable.scala
The class StorageLevel is never used.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ArcherShao/spark patch-2
GitHub user ArcherShao opened a pull request:
https://github.com/apache/spark/pull/619
Update SchemaRDD.scala
Fix spelling errors.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/ArcherShao/spark patch-1
Alternatively you can