Repository: spark
Updated Branches:
refs/heads/master 0375a413b -> 8cd1692c9
[SPARK-6036][CORE] avoid race condition between eventlogListener and akka actor
system
For a detailed description, please refer to
[SPARK-6036](https://issues.apache.org/jira/browse/SPARK-6036).
Author: Zhang, Liye
Clos
Repository: spark
Updated Branches:
refs/heads/branch-1.2 d0bf938ec -> d4ce702c4
fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode
jira case spark-6033 https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup will only remove the stopp
Repository: spark
Updated Branches:
refs/heads/branch-1.1 2785210fa -> 814934da6
fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode
jira case spark-6033 https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup will only remove the stopp
Repository: spark
Updated Branches:
refs/heads/branch-1.0 14e042b65 -> e751f8f26
fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode
jira case spark-6033 https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup will only remove the stopp
Repository: spark
Updated Branches:
refs/heads/branch-1.3 485b91934 -> b8db84c5b
fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode
jira case spark-6033 https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup will only remove the stopp
Repository: spark
Updated Branches:
refs/heads/master 7c99a014f -> 0375a413b
fix spark-6033, clarify the spark.worker.cleanup behavior in standalone mode
jira case spark-6033 https://issues.apache.org/jira/browse/SPARK-6033
In standalone deploy mode, the cleanup will only remove the stopped
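The behavior being clarified is driven by the worker cleanup settings. A minimal spark-defaults.conf sketch enabling it (values are the documented defaults of that era, quoted from memory, so verify against the docs for your release):

```properties
# Enable periodic cleanup of application work dirs on standalone workers
spark.worker.cleanup.enabled     true
# How often (seconds) the worker sweeps for old application dirs
spark.worker.cleanup.interval    1800
# Only dirs older than this TTL (seconds; 7 days here) are removed
spark.worker.cleanup.appDataTtl  604800
```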
Repository: spark
Updated Branches:
refs/heads/master 4a8a0a8ec -> 7c99a014f
[SPARK-6046] Privatize SparkConf.translateConfKey
The warning for deprecated configs is actually issued when the configs are set,
not when they are read. As a result we don't need to explicitly call
`translateConfKey` o
Repository: spark
Updated Branches:
refs/heads/branch-1.3 6200f0709 -> 485b91934
SPARK-2168 [Spark core] Use relative URIs for the app links in the History
Server.
As agreed in PR #1160, this adds a test to verify that the history server
generates relative links to applications.
Author: Lukasz Jastrzeb
Repository: spark
Updated Branches:
refs/heads/master 67595eb8f -> 4a8a0a8ec
SPARK-2168 [Spark core] Use relative URIs for the app links in the History
Server.
As agreed in PR #1160, this adds a test to verify that the history server
generates relative links to applications.
Author: Lukasz Jastrzebski
Repository: spark
Updated Branches:
refs/heads/master 12135e905 -> 67595eb8f
[SPARK-5495][UI] Add app and driver kill function in master web UI
Add application kill function in master web UI for standalone mode. Details can
be seen in [SPARK-5495](https://issues.apache.org/jira/browse/SPARK-5
Repository: spark
Updated Branches:
refs/heads/master 5e5ad6558 -> 12135e905
[SPARK-5771][UI][hotfix] Change Requested Cores into * if default cores is not
set
cc andrewor14, srowen.
Author: jerryshao
Closes #4800 from jerryshao/SPARK-5771 and squashes the following commits:
a2483c2 [jerr
Repository: spark
Updated Branches:
refs/heads/branch-1.3 25a109e42 -> 6200f0709
[SPARK-6024][SQL] When a data source table has too many columns, its schema
cannot be stored in metastore.
JIRA: https://issues.apache.org/jira/browse/SPARK-6024
Author: Yin Huai
Closes #4795 from yhuai/wideS
Repository: spark
Updated Branches:
refs/heads/master 4ad5153f5 -> 5e5ad6558
[SPARK-6024][SQL] When a data source table has too many columns, its schema
cannot be stored in metastore.
JIRA: https://issues.apache.org/jira/browse/SPARK-6024
Author: Yin Huai
Closes #4795 from yhuai/wideSchem
Repository: spark
Updated Branches:
refs/heads/master 18f209843 -> 4ad5153f5
[SPARK-6037][SQL] Avoiding duplicate Parquet schema merging
`FilteringParquetRowInputFormat` manually merges Parquet schemas before
computing splits. However, this work is duplicated because the schemas are already
merged i
Repository: spark
Updated Branches:
refs/heads/branch-1.3 b83a93e08 -> 25a109e42
[SPARK-6037][SQL] Avoiding duplicate Parquet schema merging
`FilteringParquetRowInputFormat` manually merges Parquet schemas before
computing splits. However, this work is duplicated because the schemas are already
merg
Repository: spark
Updated Branches:
refs/heads/master fbc469473 -> 18f209843
[SPARK-5529][CORE]Add expireDeadHosts in HeartbeatReceiver
If a BlockManager has not sent a heartbeat for more than 120s,
BlockManagerMasterActor will remove it. But CoarseGrainedSchedulerBackend can
only remove executor
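The expiry check described above can be sketched in plain Python. The 120 s threshold mirrors the message; the function and dict layout are illustrative, not Spark's actual HeartbeatReceiver API:

```python
import time

HEARTBEAT_TIMEOUT_S = 120  # illustrative; mirrors the 120s mentioned above

def expire_dead_hosts(last_seen, now=None):
    """Return hosts whose last heartbeat is older than the timeout and drop
    them from the tracking dict (hypothetical helper, not Spark code)."""
    now = time.monotonic() if now is None else now
    dead = [h for h, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT_S]
    for h in dead:
        del last_seen[h]
    return dead
```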
Repository: spark
Updated Branches:
refs/heads/branch-1.2 58b3aa692 -> d0bf938ec
SPARK-4579 [WEBUI] Scheduling Delay appears negative
Ensure scheduler delay handles unfinished task case, and ensure delay is never
negative even due to rounding
Author: Sean Owen
Closes #4796 from srowen/SPAR
Repository: spark
Updated Branches:
refs/heads/master e60ad2f4c -> fbc469473
SPARK-4579 [WEBUI] Scheduling Delay appears negative
Ensure scheduler delay handles unfinished task case, and ensure delay is never
negative even due to rounding
Author: Sean Owen
Closes #4796 from srowen/SPARK-45
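The idea of the fix can be sketched as follows; names and timestamp handling are illustrative, not Spark's actual UI code:

```python
def scheduler_delay_ms(launch_ms, finish_ms, run_ms, now_ms):
    """For an unfinished task (finish time not yet set) measure against the
    current time, and clamp at zero so millisecond rounding can never yield
    a negative delay (sketch of the fix's idea, not Spark's code)."""
    end = finish_ms if finish_ms > 0 else now_ms  # unfinished task: use "now"
    return max(0, end - launch_ms - run_ms)
```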
Repository: spark
Updated Branches:
refs/heads/branch-1.3 5b426cb1f -> b83a93e08
SPARK-4579 [WEBUI] Scheduling Delay appears negative
Ensure scheduler delay handles unfinished task case, and ensure delay is never
negative even due to rounding
Author: Sean Owen
Closes #4796 from srowen/SPAR
Repository: spark
Updated Branches:
refs/heads/master b38dec2ff -> e60ad2f4c
SPARK-6045 RecordWriter should be checked against null in PairRDDFunctio...
...ns#saveAsNewAPIHadoopDataset
Author: tedyu
Closes #4794 from tedyu/master and squashes the following commits:
2632a57 [tedyu] SPARK-60
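The null-guard pattern the title refers to, sketched generically in Python (the function and writer class are hypothetical, not Hadoop's RecordWriter API):

```python
def save_with_writer(open_writer, records):
    """Write records through a writer, guarding against a writer that was
    never constructed before calling close (generic sketch of the pattern)."""
    writer = None
    try:
        writer = open_writer()
        for r in records:
            writer.write(r)
    finally:
        if writer is not None:  # the null check the commit title refers to
            writer.close()
```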
Repository: spark
Updated Branches:
refs/heads/branch-1.3 297c3ef82 -> 5b426cb1f
[SPARK-5951][YARN] Remove unreachable driver memory properties in yarn client
mode
Remove unreachable driver memory properties in yarn client mode
Author: mohit.goyal
Closes #4730 from zuxqoj/master and squash
Repository: spark
Updated Branches:
refs/heads/master c871e2dae -> b38dec2ff
[SPARK-5951][YARN] Remove unreachable driver memory properties in yarn client
mode
Remove unreachable driver memory properties in yarn client mode
Author: mohit.goyal
Closes #4730 from zuxqoj/master and squashes t
Repository: spark
Updated Branches:
refs/heads/master 3fb53c029 -> c871e2dae
Add a note for context termination for History server on Yarn
The history server on Yarn only shows completed jobs. This adds a note
about the explicit context termination required at the end of a Spark job
which
Repository: spark
Updated Branches:
refs/heads/branch-1.1 36f3c499f -> 2785210fa
Add a note for context termination for History server on Yarn
The history server on Yarn only shows completed jobs. This adds a note
about the explicit context termination required at the end of a Spark job
w
Repository: spark
Updated Branches:
refs/heads/branch-1.0 f74bccbe3 -> 14e042b65
Add a note for context termination for History server on Yarn
The history server on Yarn only shows completed jobs. This adds a note
about the explicit context termination required at the end of a Spark job
w
Repository: spark
Updated Branches:
refs/heads/branch-1.2 64e0cbc73 -> 58b3aa692
Add a note for context termination for History server on Yarn
The history server on Yarn only shows completed jobs. This adds a note
about the explicit context termination required at the end of a Spark job
w
Repository: spark
Updated Branches:
refs/heads/branch-1.3 fe7967483 -> 297c3ef82
Add a note for context termination for History server on Yarn
The history server on Yarn only shows completed jobs. This adds a note
about the explicit context termination required at the end of a Spark job
w
Repository: spark
Updated Branches:
refs/heads/master 5f3238b3b -> 3fb53c029
SPARK-4300 [CORE] Race condition during SparkWorker shutdown
Close appender saving stdout/stderr before destroying process to avoid
exception on reading closed input stream.
(This also removes a redundant `waitFor()`
Repository: spark
Updated Branches:
refs/heads/branch-1.2 2d83442f2 -> 64e0cbc73
SPARK-4300 [CORE] Race condition during SparkWorker shutdown
Close appender saving stdout/stderr before destroying process to avoid
exception on reading closed input stream.
(This also removes a redundant `waitFo
Repository: spark
Updated Branches:
refs/heads/branch-1.2 e21475d16 -> 2d83442f2
SPARK-794 [CORE] Backport. Remove sleep() in ClusterScheduler.stop
Backport https://github.com/apache/spark/pull/3851 to branch 1.2: remove
Thread.sleep(1000) in TaskSchedulerImpl.
Teeing this up for Jenkins per
Repository: spark
Updated Branches:
refs/heads/branch-1.3 731a997db -> fe7967483
[SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM
Author: Cheolsoo Park
Closes #4773 from piaozhexiu/SPARK-6018 and squashes the following commits:
2a919d5 [Cheolsoo Park] Rename e with
Repository: spark
Updated Branches:
refs/heads/branch-1.2 94faf4c49 -> e21475d16
[SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM
Author: Cheolsoo Park
Closes #4773 from piaozhexiu/SPARK-6018 and squashes the following commits:
2a919d5 [Cheolsoo Park] Rename e with
Repository: spark
Updated Branches:
refs/heads/master aa63f633d -> 5f3238b3b
[SPARK-6018] [YARN] NoSuchMethodError in Spark app is swallowed by YARN AM
Author: Cheolsoo Park
Closes #4773 from piaozhexiu/SPARK-6018 and squashes the following commits:
2a919d5 [Cheolsoo Park] Rename e with cau
Repository: spark
Updated Branches:
refs/heads/branch-1.3 62652dc5b -> 731a997db
[SPARK-6027][SPARK-5546] Fixed --jar and --packages not working for KafkaUtils
and improved error message
The problem with SPARK-6027 in short is that JARs like the kafka-assembly.jar
do not work in Python as
Repository: spark
Updated Branches:
refs/heads/master 8942b522d -> aa63f633d
[SPARK-6027][SPARK-5546] Fixed --jar and --packages not working for KafkaUtils
and improved error message
The problem with SPARK-6027 in short is that JARs like the kafka-assembly.jar
do not work in Python as the
Repository: spark
Updated Branches:
refs/heads/master 10094a523 -> 8942b522d
[SPARK-3562]Periodic cleanup event logs
Author: xukun 00228947
Closes #4214 from viper-kun/cleaneventlog and squashes the following commits:
7a5b9c5 [xukun 00228947] fix issue
31674ee [xukun 00228947] fix issue
6e3
Repository: spark
Updated Branches:
refs/heads/branch-1.3 5d309ad6c -> 62652dc5b
Modify default value description for
spark.scheduler.minRegisteredResourcesRatio on docs.
The configuration is not supported in mesos mode now.
See https://github.com/apache/spark/pull/1462
Author: Li Zhihui
C
Repository: spark
Updated Branches:
refs/heads/branch-1.2 602d5c1fc -> 94faf4c49
Modify default value description for
spark.scheduler.minRegisteredResourcesRatio on docs.
The configuration is not supported in mesos mode now.
See https://github.com/apache/spark/pull/1462
Author: Li Zhihui
C
Repository: spark
Updated Branches:
refs/heads/master cd5c8d7bb -> 10094a523
Modify default value description for
spark.scheduler.minRegisteredResourcesRatio on docs.
The configuration is not supported in mesos mode now.
See https://github.com/apache/spark/pull/1462
Author: Li Zhihui
Close
Repository: spark
Updated Branches:
refs/heads/branch-1.2 cc7313d09 -> 602d5c1fc
SPARK-4704 [CORE] SparkSubmitDriverBootstrap doesn't flush output
Join on output threads to make sure any lingering output from process reaches
stdout, stderr before exiting
CC andrewor14 since I believe he crea
Repository: spark
Updated Branches:
refs/heads/master 7fa960e65 -> cd5c8d7bb
SPARK-4704 [CORE] SparkSubmitDriverBootstrap doesn't flush output
Join on output threads to make sure any lingering output from process reaches
stdout, stderr before exiting
CC andrewor14 since I believe he created
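The join-before-exit idea can be sketched in Python: pump the child's output on threads, then join those threads after the process exits so no lingering output is lost. This is a generic sketch, not Spark's bootstrap code:

```python
import subprocess
import sys
import threading

def run_and_mirror(cmd):
    """Run a child process, mirror its stdout/stderr from pump threads, and
    join those threads before returning so no output is dropped."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    def pump(src, dst):
        for line in src:
            dst.write(line.decode())

    threads = [
        threading.Thread(target=pump, args=(proc.stdout, sys.stdout)),
        threading.Thread(target=pump, args=(proc.stderr, sys.stderr)),
    ]
    for t in threads:
        t.start()
    code = proc.wait()
    for t in threads:  # the join the commit adds: drain lingering output first
        t.join()
    return code
```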
Repository: spark
Updated Branches:
refs/heads/master cfff397f0 -> 7fa960e65
[SPARK-5363] Fix bug in PythonRDD: remove() inside iterator is not safe
Removing elements from a mutable HashSet while iterating over it can cause the
iteration to incorrectly skip over entries that were not removed.
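The same hazard exists in most languages: Scala's mutable HashSet can silently skip entries, while CPython raises a RuntimeError when a set changes size mid-iteration. Either way, the safe pattern is to iterate over a snapshot. A minimal sketch (not the actual PythonRDD fix, which is in Scala):

```python
def remove_matching(items, predicate):
    """Remove matching entries without mutating the set mid-iteration.
    Iterating a snapshot (list(items)) avoids the skipped-entry and
    RuntimeError hazards of deleting from the live set."""
    for x in list(items):  # snapshot first, then mutate the real set
        if predicate(x):
            items.discard(x)
    return items
```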
Repository: spark
Updated Branches:
refs/heads/branch-1.2 015895ab5 -> cc7313d09
[SPARK-5363] Fix bug in PythonRDD: remove() inside iterator is not safe
Removing elements from a mutable HashSet while iterating over it can cause the
iteration to incorrectly skip over entries that were not remov
Repository: spark
Updated Branches:
refs/heads/branch-1.3 dafb3d210 -> 5d309ad6c
[SPARK-5363] Fix bug in PythonRDD: remove() inside iterator is not safe
Removing elements from a mutable HashSet while iterating over it can cause the
iteration to incorrectly skip over entries that were not remov
Repository: spark
Updated Branches:
refs/heads/master 235865754 -> cfff397f0
[SPARK-6004][MLlib] Pick the best model when training GradientBoostedTrees with
validation
Since the validation error does not change monotonically, in practice it
is preferable to pick the best model when train
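The selection rule amounts to taking the argmin of the validation-error curve rather than the last iteration. An illustrative sketch (not MLlib's implementation):

```python
def best_num_iterations(validation_errors):
    """Pick the iteration whose validation error is smallest rather than the
    last one, since the error need not fall monotonically."""
    best = min(range(len(validation_errors)), key=validation_errors.__getitem__)
    return best + 1  # report as a 1-indexed number of boosting iterations
```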
Repository: spark
Updated Branches:
refs/heads/branch-1.3 7c779d8d5 -> dafb3d210
[SPARK-6015] fix links to source code in Python API docs
Author: Davies Liu
Closes #4772 from davies/source_link and squashes the following commits:
389f0c6 [Davies Liu] fix link to source code in Python API doc
Repository: spark
Updated Branches:
refs/heads/branch-1.2 00112baf9 -> 015895ab5
[SPARK-6015] fix links to source code in Python API docs
Author: Davies Liu
Closes #4772 from davies/source_link and squashes the following commits:
389f0c6 [Davies Liu] fix link to source code in Python API doc
Repository: spark
Updated Branches:
refs/heads/master df3d559b3 -> 235865754
[SPARK-6007][SQL] Add numRows param in DataFrame.show()
It is useful to let the user decide the number of rows to show in DataFrame.show
Author: Jacky Li
Closes #4767 from jackylk/show and squashes the following co
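The effect of the new parameter can be illustrated with a toy formatter over plain tuples; this mimics the optional row limit, it is not DataFrame.show itself:

```python
def show(rows, num_rows=20):
    """Print only the first num_rows rows (default 20, matching show()'s
    documented default) and return how many were shown."""
    shown = rows[:num_rows]
    for r in shown:
        print("|" + "|".join(str(c) for c in r) + "|")
    return len(shown)
```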
Repository: spark
Updated Branches:
refs/heads/branch-1.3 b5c5e93d7 -> 7c779d8d5
[SPARK-6007][SQL] Add numRows param in DataFrame.show()
It is useful to let the user decide the number of rows to show in DataFrame.show
Author: Jacky Li
Closes #4767 from jackylk/show and squashes the followin
Repository: spark
Updated Branches:
refs/heads/master 192e42a29 -> df3d559b3
[SPARK-5801] [core] Avoid creating nested directories.
Cache the value of the local root dirs to use for storing local data,
so that the same directories are reused.
Also, to avoid an extra level of nesting, use a di
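The caching idea can be sketched as a memoized resolver: the local dirs are created once and every later call returns the same list instead of nesting fresh directories. Names here are illustrative, not Spark's DiskBlockManager:

```python
import tempfile

_local_dirs_cache = None

def get_local_dirs(conf_dirs=None):
    """Resolve and create the local storage dirs once, then reuse the cached
    value so repeated calls hand back the same directories."""
    global _local_dirs_cache
    if _local_dirs_cache is None:
        roots = conf_dirs or [tempfile.gettempdir()]
        _local_dirs_cache = [
            tempfile.mkdtemp(prefix="spark-local-", dir=r) for r in roots
        ]
    return _local_dirs_cache
```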
Repository: spark
Updated Branches:
refs/heads/master f02394d06 -> 192e42a29
[SPARK-6016][SQL] Cannot read the parquet table after overwriting the existing
table when spark.sql.parquet.cacheMetadata=true
Please see JIRA (https://issues.apache.org/jira/browse/SPARK-6016) for details
of the bu
Repository: spark
Updated Branches:
refs/heads/branch-1.3 e0f5fb0ad -> b5c5e93d7
[SPARK-6016][SQL] Cannot read the parquet table after overwriting the existing
table when spark.sql.parquet.cacheMetadata=true
Please see JIRA (https://issues.apache.org/jira/browse/SPARK-6016) for details
of th
Repository: spark
Updated Branches:
refs/heads/master 51a6f9097 -> f02394d06
[SPARK-6023][SQL] ParquetConversions fails to replace the destination
MetastoreRelation of an InsertIntoTable node to ParquetRelation2
JIRA: https://issues.apache.org/jira/browse/SPARK-6023
Author: Yin Huai
Closes
Repository: spark
Updated Branches:
refs/heads/branch-1.3 a51d9dbeb -> e0f5fb0ad
[SPARK-6023][SQL] ParquetConversions fails to replace the destination
MetastoreRelation of an InsertIntoTable node to ParquetRelation2
JIRA: https://issues.apache.org/jira/browse/SPARK-6023
Author: Yin Huai
Cl
Repository: spark
Updated Branches:
refs/heads/master e43139f40 -> 51a6f9097
[SPARK-5914] to run spark-submit requiring only user perm on windows
Because Windows by default does not grant read permission on jars except to
admin, spark-submit would fail with a "ClassNotFound" exception if user r