[SPARK-12790][CORE] Remove HistoryServer old multiple files format
Removed isLegacyLogDirectory code path and updated tests
andrewor14
Author: felixcheung
Closes #10860 from felixcheung/historyserverformat.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip
Repository: spark
Updated Branches:
refs/heads/master be7a2fc07 -> 715a19d56
[SPARK-12637][CORE] Print stage info of finished stages properly
Improve printing of StageInfo in onStageCompleted
See also https://github.com/apache/spark/pull/10585
Author: Sean Owen
Closes #10922 from srowen/SP
Repository: spark
Updated Branches:
refs/heads/master a41b68b95 -> c9b89a0a0
[SPARK-12979][MESOS] Don't resolve paths on the local file system in Mesos
scheduler
The driver filesystem is likely different from where the executors will run, so
resolving paths (and symlinks, etc.) will lead t
Repository: spark
Updated Branches:
refs/heads/master 51b03b71f -> a41b68b95
[SPARK-12265][MESOS] Spark calls System.exit inside driver instead of throwing
exception
This takes over #10729 and makes sure that `spark-shell` fails with a proper
error message. There is a slight behavioral chang
Repository: spark
Updated Branches:
refs/heads/master 711ce048a -> 51b03b71f
[SPARK-12463][SPARK-12464][SPARK-12465][SPARK-10647][MESOS] Fix zookeeper dir
with mesos conf and add docs.
Fix zookeeper dir configuration used in cluster mode, and also add
documentation around these settings.
Au
Repository: spark
Updated Branches:
refs/heads/master c1da4d421 -> 6075573a9
[SPARK-6847][CORE][STREAMING] Fix stack overflow issue when updateStateByKey is
followed by a checkpointed dstream
Add a local property to indicate if checkpointing all RDDs that are marked with
the checkpoint flag,
s #10973 from andrewor14/fix-input-metrics-coalesce.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/12252d1d
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/12252d1d
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/122
[SPARK-13088] Fix DAG viz in latest version of chrome
Apparently chrome removed `SVGElement.prototype.getTransformToElement`, which
is used by our JS library dagre-d3 when creating edges. The real diff can be
found here:
https://github.com/andrewor14/dagre-d3/commit
sed,
i.e. until the listener queue is empty.
https://amplab.cs.berkeley.edu/jenkins/job/spark-master-test-sbt-hadoop-2.7/79/testReport/junit/org.apache.spark.util.collection/ExternalAppendOnlyMapSuite/spilling/
Author: Andrew Or
Closes #10990 from andrewor14/accum-suite-less-flaky.
Project: h
s makes sense in the case of TaskMetrics because these are
just aggregated metrics that we want to collect throughout the task, so it
doesn't matter who's incrementing them.
Parent PR: #10717
Author: Andrew Or
Author: Josh Rosen
Author: andrewor14
Closes #10815 from andrewor1
Repository: spark
Updated Branches:
refs/heads/master 302bb569f -> b8cb548a4
[SPARK-10985][CORE] Avoid passing evicted blocks throughout BlockManager
This patch refactors portions of the BlockManager and CacheManager in order to
avoid having to pass `evictedBlocks` lists throughout the code.
Repository: spark
Updated Branches:
refs/heads/master bcc7373f6 -> 25782981c
[SPARK-12174] Speed up BlockManagerSuite getRemoteBytes() test
This patch significantly speeds up the BlockManagerSuite's "SPARK-9591:
getRemoteBytes from another location when Exception throw" test, reducing the
te
Repository: spark
Updated Branches:
refs/heads/master 962aac4db -> 8f659393b
[SPARK-12486] Worker should kill the executors more forcefully if possible.
This patch updates the ExecutorRunner's terminate path to use the new java 8 API
to terminate processes more forcefully if possible. If the e
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f7a322382 -> cd0203819
[SPARK-12486] Worker should kill the executors more forcefully if possible.
This patch updates the ExecutorRunner's terminate path to use the new java 8 API
to terminate processes more forcefully if possible. If t
Repository: spark
Updated Branches:
refs/heads/branch-1.6 5987b1658 -> b49856ae5
[SPARK-12411][CORE] Decrease executor heartbeat timeout to match heartbeat
interval
Previously, the rpc timeout was the default network timeout, which is the same
value
the driver uses to determine dead executor
est/consoleFull
This was introduced in #10284. It's harmless because the NPE is caused by a
race that occurs mainly in `local-cluster` tests (but don't actually fail the
tests).
Tested locally to verify that the NPE is gone.
Author: Andrew Or
Closes #10417 from andrewor14/fix-harmless-
Repository: spark
Updated Branches:
refs/heads/master b0849b8ae -> a820ca19d
[SPARK-2331] SparkContext.emptyRDD should return RDD[T] not EmptyRDD[T]
Author: Reynold Xin
Closes #10394 from rxin/SPARK-2331.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.
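The return-type change above can be sketched with stand-in classes (hypothetical names, not Spark's real hierarchy):

```scala
// Stand-in classes (hypothetical, not Spark's): the point of SPARK-2331 is
// that the public method should expose the abstract supertype, not the
// concrete empty-RDD implementation class.
abstract class FakeRDD[T] { def count(): Long }
class FakeEmptyRDD[T] extends FakeRDD[T] { override def count(): Long = 0L }

// Before: def emptyRDD[T]: FakeEmptyRDD[T]   (leaks the subtype)
// After:
def emptyRDD[T]: FakeRDD[T] = new FakeEmptyRDD[T]
```

Callers then depend only on the RDD interface, so the empty implementation can change freely.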
Repository: spark
Updated Branches:
refs/heads/master fc6dbcc70 -> b0849b8ae
[SPARK-12339][SPARK-11206][WEBUI] Added a null check that was removed in
Updates made in SPARK-11206 missed an edge case which causes a
NullPointerException when a task is killed. In some cases when a task ends in
Repository: spark
Updated Branches:
refs/heads/branch-1.5 eb54c914a -> 4d54ba896
Doc typo: ltrim = trim from left end, not right
Author: pshearer
Closes #10414 from pshearer/patch-1.
(cherry picked from commit fc6dbcc7038c2b030ef6a2dc8be5848499ccee1c)
Signed-off-by: Andrew Or
Project: ht
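The corrected semantics can be shown with a plain-Scala sketch (Spark's real ltrim lives in the SQL functions API):

```scala
// ltrim removes whitespace from the LEFT end only; the right end is kept.
def ltrim(s: String): String = s.dropWhile(_.isWhitespace)

ltrim("  spark  ")  // left spaces removed, trailing spaces preserved
```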
Repository: spark
Updated Branches:
refs/heads/master 1eb90bc9c -> fc6dbcc70
Doc typo: ltrim = trim from left end, not right
Author: pshearer
Closes #10414 from pshearer/patch-1.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d6a519ff2 -> c754a0879
Doc typo: ltrim = trim from left end, not right
Author: pshearer
Closes #10414 from pshearer/patch-1.
(cherry picked from commit fc6dbcc7038c2b030ef6a2dc8be5848499ccee1c)
Signed-off-by: Andrew Or
Project: ht
Repository: spark
Updated Branches:
refs/heads/master 935f46630 -> 1eb90bc9c
[SPARK-5882][GRAPHX] Add a test for GraphLoader.edgeListFile
Author: Takeshi YAMAMURO
Closes #4674 from maropu/AddGraphLoaderSuite.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip
Repository: spark
Updated Branches:
refs/heads/master 4883a5087 -> 935f46630
[SPARK-12392][CORE] Optimize a location order of broadcast blocks by
considering preferred local hosts
When multiple workers exist in a host, we can bypass unnecessary remote access
for broadcasts; block managers fe
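The ordering idea can be sketched as follows (names are ours, not Spark's internal API):

```scala
// Hypothetical sketch: try same-host replicas before remote ones, so
// co-located executors fetch broadcast blocks without network access.
case class BlockLocation(host: String, executorId: String)

def preferLocal(locs: Seq[BlockLocation], localHost: String): Seq[BlockLocation] = {
  val (local, remote) = locs.partition(_.host == localHost)
  local ++ remote // local replicas first, remote order unchanged
}
```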
Revert "[SPARK-12345][MESOS] Properly filter out SPARK_HOME in the Mesos REST
server"
This reverts commit 8184568810e8a2e7d5371db2c6a0366ef4841f70.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/8a9417bc
Tree: http://git-wi
Repository: spark
Updated Branches:
refs/heads/master ba9332edd -> a78a91f4d
Revert "[SPARK-12413] Fix Mesos ZK persistence"
This reverts commit 2bebaa39d9da33bc93ef682959cd42c1968a6a3e.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf
Revert "[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with
Mesos cluster mode."
This reverts commit ad8c1f0b840284d05da737fb2cc5ebf8848f4490.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a78a91f4
Tree:
Repository: spark
Updated Branches:
refs/heads/master 007a32f90 -> ba9332edd
[SPARK-12345][CORE] Do not send SPARK_HOME through Spark submit REST interface
It is usually an invalid location on the remote machine executing the job.
It is picked up by the Mesos support in cluster mode, and most
Repository: spark
Updated Branches:
refs/heads/master 0514e8d4b -> 007a32f90
[SPARK-11097][CORE] Add channelActive callback to RpcHandler to monitor the new
connections
Added `channelActive` to `RpcHandler` so that `NettyRpcHandler` doesn't need
`clients` any more.
Author: Shixiong Zhu
Cl
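The shape of the new callback can be sketched like this (signatures are illustrative, not Spark's exact API):

```scala
// With a channelActive notification, the handler learns about new connections
// directly instead of maintaining its own `clients` collection.
trait Channel { def id: String }

abstract class RpcHandler {
  def receive(channel: Channel, message: Array[Byte]): Unit
  def channelActive(channel: Channel): Unit = ()   // new callback, default no-op
  def channelInactive(channel: Channel): Unit = ()
}
```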
Repository: spark
Updated Branches:
refs/heads/master 60da0e11f -> 0514e8d4b
[SPARK-12411][CORE] Decrease executor heartbeat timeout to match heartbeat
interval
Previously, the rpc timeout was the default network timeout, which is the same
value
the driver uses to determine dead executors. T
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1dc71ec77 -> 3b903e44b
Revert "[SPARK-12365][CORE] Use ShutdownHookManager where
Runtime.getRuntime.addShutdownHook() is called"
This reverts commit 4af64385b085002d94c54d11bbd144f9f026bbd8.
Project: http://git-wip-us.apache.org/repo
Repository: spark
Updated Branches:
refs/heads/branch-1.6 881f2544e -> 88bbb5429
[SPARK-12390] Clean up unused serializer parameter in BlockManager
No change in functionality is intended. This only changes internal API.
Author: Andrew Or
Closes #10343 from andrewor14/clean-bm-seriali
Repository: spark
Updated Branches:
refs/heads/master d1508dd9b -> 97678edea
[SPARK-12390] Clean up unused serializer parameter in BlockManager
No change in functionality is intended. This only changes internal API.
Author: Andrew Or
Closes #10343 from andrewor14/clean-bm-seriali
Repository: spark
Updated Branches:
refs/heads/branch-1.6 154567dca -> 4ad08035d
[SPARK-12386][CORE] Fix NPE when spark.executor.port is set.
Author: Marcelo Vanzin
Closes #10339 from vanzin/SPARK-12386.
(cherry picked from commit d1508dd9b765489913bc948575a69ebab82f217b)
Signed-off-by: And
Repository: spark
Updated Branches:
refs/heads/master fdb382275 -> d1508dd9b
[SPARK-12386][CORE] Fix NPE when spark.executor.port is set.
Author: Marcelo Vanzin
Closes #10339 from vanzin/SPARK-12386.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apach
Repository: spark
Updated Branches:
refs/heads/master f590178d7 -> fdb382275
[SPARK-12186][WEB UI] Send the complete request URI including the query string
when redirecting.
Author: Rohit Agarwal
Closes #10180 from mindprince/SPARK-12186.
Project: http://git-wip-us.apache.org/repos/asf/sp
Repository: spark
Updated Branches:
refs/heads/branch-1.6 4af64385b -> 154567dca
[SPARK-12186][WEB UI] Send the complete request URI including the query string
when redirecting.
Author: Rohit Agarwal
Closes #10180 from mindprince/SPARK-12186.
(cherry picked from commit fdb38227564c1af40cbf
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fb02e4e3b -> 4af64385b
[SPARK-12365][CORE] Use ShutdownHookManager where
Runtime.getRuntime.addShutdownHook() is called
SPARK-9886 fixed ExternalBlockStore.scala
This PR fixes the remaining references to Runtime.getRuntime.addShutdown
Repository: spark
Updated Branches:
refs/heads/master 38d9795a4 -> f590178d7
[SPARK-12365][CORE] Use ShutdownHookManager where
Runtime.getRuntime.addShutdownHook() is called
SPARK-9886 fixed ExternalBlockStore.scala
This PR fixes the remaining references to Runtime.getRuntime.addShutdownHook
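A plain-JVM sketch of why a managed hook utility beats scattered raw Runtime.getRuntime.addShutdownHook calls (Spark's real ShutdownHookManager does more; names here are ours):

```scala
// Hooks are registered in one place and ordered by priority at shutdown,
// rather than racing as independent JVM threads.
import scala.collection.mutable

object HookManager {
  private val hooks = mutable.ArrayBuffer.empty[(Int, () => Unit)]
  def addShutdownHook(priority: Int)(body: => Unit): Unit =
    hooks += ((priority, () => body))
  // Highest-priority hooks run first, mirroring ordered shutdown.
  def runAll(): Unit = hooks.sortBy(-_._1).foreach { case (_, f) => f() }
}
```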
Repository: spark
Updated Branches:
refs/heads/branch-1.6 638b89bc3 -> fb02e4e3b
[SPARK-10248][CORE] track exceptions in dagscheduler event loop in tests
`DAGSchedulerEventLoop` normally only logs errors (so it can continue to
process more events, from other jobs). However, this is not desir
Repository: spark
Updated Branches:
refs/heads/master ce5fd4008 -> 38d9795a4
[SPARK-10248][CORE] track exceptions in dagscheduler event loop in tests
`DAGSchedulerEventLoop` normally only logs errors (so it can continue to
process more events, from other jobs). However, this is not desirable
Repository: spark
Updated Branches:
refs/heads/master 861549acd -> ce5fd4008
MAINTENANCE: Automated closing of pull requests.
This commit exists to close the following pull requests on Github:
Closes #1217 (requested by ankurdave, srowen)
Closes #4650 (requested by andrewor14)
Closes #5
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f81512729 -> e5b85713d
[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos
cluster mode.
SPARK_HOME is now causing problems with Mesos cluster mode since the
spark-submit script has been changed recently to take pr
Repository: spark
Updated Branches:
refs/heads/master 26d70bd2b -> ad8c1f0b8
[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with Mesos
cluster mode.
SPARK_HOME is now causing problems with Mesos cluster mode since the
spark-submit script has been changed recently to take precen
Repository: spark
Updated Branches:
refs/heads/master a89e8b612 -> ca0690b5e
[SPARK-4117][YARN] Spark on Yarn handle AM being told command from RM
Spark on Yarn handle AM being told command from RM
When RM throws ApplicationAttemptNotFoundException for allocate
invocation, making the Applicat
Repository: spark
Updated Branches:
refs/heads/branch-1.6 93095eb29 -> fb08f7b78
[SPARK-10477][SQL] using DSL in ColumnPruningSuite to improve readability
Author: Wenchen Fan
Closes #8645 from cloud-fan/test.
(cherry picked from commit a89e8b6122ee5a1517fbcf405b1686619db56696)
Signed-off-by
Repository: spark
Updated Branches:
refs/heads/master c5b6b398d -> a89e8b612
[SPARK-10477][SQL] using DSL in ColumnPruningSuite to improve readability
Author: Wenchen Fan
Closes #8645 from cloud-fan/test.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 8e9a60031 -> 93095eb29
[SPARK-12062][CORE] Change Master to async rebuild UI when application completes
This change builds the event history of completed apps asynchronously so the
RPC thread will not be blocked and allow new workers to
Repository: spark
Updated Branches:
refs/heads/master 8a215d233 -> c5b6b398d
[SPARK-12062][CORE] Change Master to async rebuild UI when application completes
This change builds the event history of completed apps asynchronously so the
RPC thread will not be blocked and allow new workers to reg
Repository: spark
Updated Branches:
refs/heads/master 63ccdef81 -> 8a215d233
[SPARK-9886][CORE] Fix to use ShutdownHookManager in
ExternalBlockStore.scala
Author: Naveen
Closes #10313 from naveenminchu/branch-fix-SPARK-9886.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commi
Repository: spark
Updated Branches:
refs/heads/branch-1.6 2c324d35a -> 8e9a60031
[SPARK-9886][CORE] Fix to use ShutdownHookManager in
ExternalBlockStore.scala
Author: Naveen
Closes #10313 from naveenminchu/branch-fix-SPARK-9886.
(cherry picked from commit 8a215d2338c6286253e20122640592f9d6
Repository: spark
Updated Branches:
refs/heads/master 765a48849 -> 63ccdef81
[SPARK-10123][DEPLOY] Support specifying deploy mode from configuration
Please help to review, thanks a lot.
Author: jerryshao
Closes #10195 from jerryshao/SPARK-10123.
Project: http://git-wip-us.apache.org/repos
Repository: spark
Updated Branches:
refs/heads/master a63d9edcf -> 765a48849
[SPARK-9026][SPARK-4514] Modifications to JobWaiter, FutureAction, and
AsyncRDDActions to support non-blocking operation
These changes rework the implementations of `SimpleFutureAction`,
`ComplexFutureAction`, `JobW
Repository: spark
Updated Branches:
refs/heads/master c2de99a7c -> a63d9edcf
[SPARK-9516][UI] Improvement of Thread Dump Page
https://issues.apache.org/jira/browse/SPARK-9516
- [x] new look of Thread Dump Page
- [x] click column title to sort
- [x] grep
- [x] search as you type
squito Jos
Repository: spark
Updated Branches:
refs/heads/branch-1.6 9e4ac5645 -> 2c324d35a
[SPARK-12351][MESOS] Add documentation about submitting Spark with mesos
cluster mode.
Adding more documentation about submitting jobs with mesos cluster mode.
Author: Timothy Chen
Closes #10086 from tnachen/m
Repository: spark
Updated Branches:
refs/heads/master 369127f03 -> c2de99a7c
[SPARK-12351][MESOS] Add documentation about submitting Spark with mesos
cluster mode.
Adding more documentation about submitting jobs with mesos cluster mode.
Author: Timothy Chen
Closes #10086 from tnachen/mesos
t sort's comparison on the front. cc JoshRosen
andrewor14
Author: Lianhui Wang
Closes #10131 from lianhuiwang/spark-12130.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/369127f0
Tree: http://git-wip-us.apache.org/re
Repository: spark
Updated Branches:
refs/heads/branch-1.6 08aa3b47e -> 9e4ac5645
[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling
setConf
This is continuation of SPARK-12056 where change is applied to
SqlNewHadoopRDD.scala
andrewor14
FYI
Author: tedyu
Clo
Repository: spark
Updated Branches:
refs/heads/master 840bd2e00 -> f725b2ec1
[SPARK-12056][CORE] Part 2 Create a TaskAttemptContext only after calling
setConf
This is continuation of SPARK-12056 where change is applied to
SqlNewHadoopRDD.scala
andrewor14
FYI
Author: tedyu
Closes #10
Repository: spark
Updated Branches:
refs/heads/master 31b391019 -> 840bd2e00
[HOTFIX] Compile error from commit 31b3910
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/840bd2e0
Tree: http://git-wip-us.apache.org/repos/asf/
Repository: spark
Updated Branches:
refs/heads/master 28112657e -> 31b391019
[SPARK-12105] [SQL] add convenient show functions
Author: Jean-Baptiste Onofré
Closes #10130 from jbonofre/SPARK-12105.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.
Repository: spark
Updated Branches:
refs/heads/branch-1.5 e4cf12118 -> 0fdf5542b
[SPARK-12275] [SQL] No plan for BroadcastHint in some condition - 1.5 backport
backport #10265 to branch 1.5.
When SparkStrategies.BasicOperators's "case BroadcastHint(child) =>
apply(child)" is hit,
it only rec
t](https://github.com/andrewor14/spark/blob/fix-oom/core/src/test/scala/org/apache/spark/memory/UnifiedMemoryManagerSuite.scala#L233)
that I stole from JoshRosen.
**Solution.** Fix the cap on task execution memory. It should take into account
the space that could have been freed by storage in addition to the
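The corrected cap can be illustrated with toy numbers (all names and values are ours, not Spark's):

```scala
// Execution's cap must count the free pool PLUS storage memory that eviction
// could reclaim (down to a storage floor), not just the free pool alone.
val maxMemory     = 1000L
val executionUsed = 400L
val storageUsed   = 300L
val storageFloor  = 100L // storage memory execution may not evict below
val freeMemory    = maxMemory - executionUsed - storageUsed // 300
val evictable     = storageUsed - storageFloor              // 200
val executionCap  = freeMemory + evictable                  // 500
```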
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d0307deaa -> 9870e5c7a
[SPARK-12251] Document and improve off-heap memory configurations
This patch adds documentation for Spark configurations that affect off-heap
memory and makes some naming and validation improvements for those con
Repository: spark
Updated Branches:
refs/heads/master 6a6c1fc5c -> 23a9e62ba
[SPARK-12251] Document and improve off-heap memory configurations
This patch adds documentation for Spark configurations that affect off-heap
memory and makes some naming and validation improvements for those configs
Repository: spark
Updated Branches:
refs/heads/master 442a7715a -> aec5ea000
[SPARK-12165][SPARK-12189] Fix bugs in eviction of storage memory by execution
This patch fixes a bug in the eviction of storage memory by execution.
## The bug:
In general, execution should be able to evict storage
Repository: spark
Updated Branches:
refs/heads/branch-1.6 acd462420 -> 05e441e12
[SPARK-12165][SPARK-12189] Fix bugs in eviction of storage memory by execution
This patch fixes a bug in the eviction of storage memory by execution.
## The bug:
In general, execution should be able to evict sto
reak any compatibility. Otherwise, if it is merged into 1.6.1, then we
might need to add more backward compatibility handling logic (currently does
not exist yet).
Author: Andrew Or
Closes #10115 from andrewor14/smaller-event-logs.
Project: http://git-wip-us.apache.org/repos/asf/spark/rep
;t break any compatibility. Otherwise, if it is merged into 1.6.1, then we
might need to add more backward compatibility handling logic (currently does
not exist yet).
Author: Andrew Or
Closes #10115 from andrewor14/smaller-event-logs.
(cherry picked fro
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f7ae62c45 -> 8865d87f7
[SPARK-12059][CORE] Avoid assertion error when unexpected state transition met
in Master
Downgrade to warning log for unexpected state transition.
andrewor14 please review, thanks a lot.
Author: jerrys
Repository: spark
Updated Branches:
refs/heads/master 8fa3e474a -> 7bc9e1db2
[SPARK-12059][CORE] Avoid assertion error when unexpected state transition met
in Master
Downgrade to warning log for unexpected state transition.
andrewor14 please review, thanks a lot.
Author: jerryshao
Clo
s leaves `(1024 - 300) * 0.75 = 543MB` for execution and storage. This is
proposal (1) listed in the
[JIRA](https://issues.apache.org/jira/browse/SPARK-12081).
Author: Andrew Or
Closes #10081 from andrewor14/unified-memory-small-heaps.
Project: http://git-wip-us.apache.org/repos/asf/spark/rep
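The arithmetic quoted above, spelled out: a 1 GB heap, 300 MB reserved, and the 0.75 memory fraction the message cites.

```scala
// (1024 - 300) * 0.75 = 543 MB left for execution and storage.
val heapMb     = 1024
val reservedMb = 300
val fraction   = 0.75
val usableMb   = ((heapMb - reservedMb) * fraction).toInt
```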
s,
this leaves `(1024 - 300) * 0.75 = 543MB` for execution and storage. This is
proposal (1) listed in the
[JIRA](https://issues.apache.org/jira/browse/SPARK-12081).
Author: Andrew Or
Closes #10081 from andrewor14/unified-memory-small-heaps.
(cherry picked fro
Repository: spark
Updated Branches:
refs/heads/master 2cef1cdfb -> 60b541ee1
[SPARK-12004] Preserve the RDD partitioner through RDD checkpointing
The solution is to save the RDD partitioner in a separate file in the RDD
checkpoint directory. That is, `/_partitioner`. In most cases,
whether
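A minimal sketch of the scheme (the `_partitioner` file name is from the message; the helper names are ours): serialize the partitioner to a side file next to the checkpoint data, and treat a missing file as "no partitioner" so older checkpoints still load.

```scala
import java.io._

def writePartitioner(dir: File, p: Serializable): Unit = {
  val out = new ObjectOutputStream(new FileOutputStream(new File(dir, "_partitioner")))
  try out.writeObject(p) finally out.close()
}

// Absence of the side file is not an error: older checkpoints won't have one.
def readPartitioner(dir: File): Option[AnyRef] = {
  val f = new File(dir, "_partitioner")
  if (!f.exists()) None
  else {
    val in = new ObjectInputStream(new FileInputStream(f))
    try Some(in.readObject()) finally in.close()
  }
}
```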
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1cf9d3858 -> 81db8d086
[SPARK-12004] Preserve the RDD partitioner through RDD checkpointing
The solution is to save the RDD partitioner in a separate file in the RDD
checkpoint directory. That is, `/_partitioner`. In most cases,
whe
[SPARK-12007][NETWORK] Avoid copies in the network lib's RPC layer.
This change seems large, but most of it is just replacing `byte[]`
with `ByteBuffer` and `new byte[]` with `ByteBuffer.allocate()`,
since it changes the network library's API.
The following are parts of the code that actually hav
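The mechanical part of the change, in plain java.nio (a sketch, not the network lib's actual code): hand around a ByteBuffer rather than a raw byte[], so the RPC layer can slice and pass buffers without copying.

```scala
import java.nio.ByteBuffer

val payload = "rpc-message".getBytes("UTF-8")
// Before: val buf = new Array[Byte](payload.length)  (then System.arraycopy)
// After:
val buf = ByteBuffer.allocate(payload.length)
buf.put(payload)
buf.flip() // ready for reading; no second copy of the bytes
```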
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a4e134827 -> ef6f8c262
http://git-wip-us.apache.org/repos/asf/spark/blob/ef6f8c26/network/common/src/test/java/org/apache/spark/network/RequestTimeoutIntegrationSuite.java
Repository: spark
Updated Branches:
refs/heads/master 0a46e4377 -> 9bf212067
http://git-wip-us.apache.org/repos/asf/spark/blob/9bf21206/network/common/src/test/java/org/apache/spark/network/RpcIntegrationSuite.java
Repository: spark
Updated Branches:
refs/heads/branch-1.6 43ffa0373 -> a4e134827
[SPARK-12037][CORE] initialize heartbeatReceiverRef before calling
startDriverHeartbeat
https://issues.apache.org/jira/browse/SPARK-12037
a simple fix by changing the order of the statements
Author: CodingCat
Repository: spark
Updated Branches:
refs/heads/master e6dc89a33 -> 0a46e4377
[SPARK-12037][CORE] initialize heartbeatReceiverRef before calling
startDriverHeartbeat
https://issues.apache.org/jira/browse/SPARK-12037
a simple fix by changing the order of the statements
Author: CodingCat
Clo
Repository: spark
Updated Branches:
refs/heads/master d3ca8cfac -> e6dc89a33
[SPARK-12035] Add more debug information in include_example tag of Jekyll
https://issues.apache.org/jira/browse/SPARK-12035
When we debuging lots of example code files, like in
https://github.com/apache/spark/pull/1
Repository: spark
Updated Branches:
refs/heads/master 9f3e59a16 -> 88875d941
[SPARK-10558][CORE] Fix wrong executor state in Master
`ExecutorAdded` can only be sent to `AppClient` when the worker reports back the
executor state as `LOADING`; otherwise, because of a concurrency issue,
`AppClient` wil
Repository: spark
Updated Branches:
refs/heads/branch-1.6 7b720bf1c -> b4cf318ab
[SPARK-10558][CORE] Fix wrong executor state in Master
`ExecutorAdded` can only be sent to `AppClient` when the worker reports back the
executor state as `LOADING`; otherwise, because of a concurrency issue,
`AppClient`
Repository: spark
Updated Branches:
refs/heads/master 83653ac5e -> 9f3e59a16
[SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd
from wrong directory
* On windows the `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from
`%~dp0..\..\conf`, where `~dp0` point
Repository: spark
Updated Branches:
refs/heads/branch-1.6 97317d346 -> 7b720bf1c
[SPARK-11880][WINDOWS][SPARK SUBMIT] bin/load-spark-env.cmd loads spark-env.cmd
from wrong directory
* On windows the `bin/load-spark-env.cmd` tries to load `spark-env.cmd` from
`%~dp0..\..\conf`, where `~dp0` p
Repository: spark
Updated Branches:
refs/heads/master 67b673208 -> 83653ac5e
[SPARK-10864][WEB UI] app name is hidden if window is resized
Currently the Web UI navbar has a minimum width of 1200px; so if a window is
resized smaller than that the app name goes off screen. The 1200px width seem
Repository: spark
Updated Branches:
refs/heads/branch-1.6 448208d0e -> 97317d346
[SPARK-10864][WEB UI] app name is hidden if window is resized
Currently the Web UI navbar has a minimum width of 1200px; so if a window is
resized smaller than that the app name goes off screen. The 1200px width
Repository: spark
Updated Branches:
refs/heads/master 0dee44a66 -> 67b673208
[DOCUMENTATION] Fix minor doc error
Author: Jeff Zhang
Closes #9956 from zjffdu/dev_typo.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/67b6
Repository: spark
Updated Branches:
refs/heads/branch-1.6 685b9c2f5 -> 448208d0e
[DOCUMENTATION] Fix minor doc error
Author: Jeff Zhang
Closes #9956 from zjffdu/dev_typo.
(cherry picked from commit 67b67320884282ccf3102e2af96f877e9b186517)
Signed-off-by: Andrew Or
Project: http://git-wip
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c7db01b20 -> 685b9c2f5
[MINOR] Remove unnecessary spaces in `include_example.rb`
Author: Yu ISHIKAWA
Closes #9960 from yu-iskw/minor-remove-spaces.
(cherry picked from commit 0dee44a6646daae0cc03dbc32125e080dff0f4ae)
Signed-off-by: A