Repository: spark
Updated Branches:
refs/heads/branch-1.3 1cde04f21 -> ab1b8edb8
[SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py
The spark_ec2.py script uses public_dns_name everywhere in the script except
for testing SSH availability, which is done using the public IP address
Repository: spark
Updated Branches:
refs/heads/master a0846c4b6 -> 6f0d55d76
[SPARK-6636] Use public DNS hostname everywhere in spark_ec2.py
The spark_ec2.py script uses public_dns_name everywhere in the script except
for testing SSH availability, which is done using the public IP address of
Repository: spark
Updated Branches:
refs/heads/master e40ea8742 -> a0846c4b6
[SPARK-6716] Change SparkContext.DRIVER_IDENTIFIER from '<driver>' to 'driver'
Currently, the driver's executorId is set to `<driver>`. This choice of ID was
present in older Spark versions, but it has started to cause
Repository: spark
Updated Branches:
refs/heads/master 6f0d55d76 -> ae980eb41
[SPARK-6736][GraphX][Doc] Example of Graph#aggregateMessages has an error
The example for Graph#aggregateMessages has an error.
Since aggregateMessages is a method of Graph, it should be written as
rawGraph.aggregateMessages
Repository: spark
Updated Branches:
refs/heads/master ae980eb41 -> b65bad65c
[SPARK-3591][YARN]fire and forget for YARN cluster mode
https://issues.apache.org/jira/browse/SPARK-3591
The output after this patch:
doggie153:/opt/oss/spark-1.3.0-bin-hadoop2.4/bin # ./spark-submit --class
Repository: spark
Updated Branches:
refs/heads/master 7162ecf88 -> 2c32bef17
Replace use of .size with .length for Arrays
Invoking .size on arrays is valid, but requires an implicit conversion to
SeqLike. This incurs a compile-time overhead and, more importantly, a runtime
overhead, as the
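The distinction the patch targets can be sketched in plain Scala (an illustrative example, not code from the patch):

```scala
// Sketch of the .size vs .length distinction on JVM arrays.
object SizeVsLength {
  def main(args: Array[String]): Unit = {
    val xs = Array(1, 2, 3, 4)

    // .length is a direct accessor on the underlying JVM array: no allocation.
    val n1 = xs.length

    // .size also compiles, but only via an implicit conversion that wraps the
    // array (e.g. ArrayOps), which can allocate a wrapper at each call site.
    val n2 = xs.size

    assert(n1 == n2 && n1 == 4)
    println(s"length=$n1 size=$n2")
  }
}
```

Both return the same value; the patch simply avoids the conversion on hot paths.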
Repository: spark
Updated Branches:
refs/heads/master b65bad65c -> 7162ecf88
[SPARK-6733][Scheduler] Added scala.language.existentials
Author: Vinod K C vinod...@huawei.com
Closes #5384 from vinodkc/Suppression_Scala_existential_code and squashes the
following commits:
82a3a1f [Vinod K C]
Repository: spark
Updated Branches:
refs/heads/master 2c32bef17 -> 123221591
[SPARK-6750] Upgrade ScalaStyle to 0.7.
0.7 fixes a bug, and the fix is pretty useful: inline functions no longer
require an explicit return type definition.
Author: Reynold Xin r...@databricks.com
Closes #5399 from rxin/style0.7
Repository: spark
Updated Branches:
refs/heads/master 123221591 -> 596ba77c5
[SPARK-6568] spark-shell.cmd --jars option does not accept the jar that has
space in its path
Escape spaces in the arguments.
Author: Masayoshi TSUZUKI tsudu...@oss.nttdata.co.jp
Closes #5347 from
Repository: spark
Updated Branches:
refs/heads/master 596ba77c5 -> e6f08fb42
Revert [SPARK-6568] spark-shell.cmd --jars option does not accept the jar that
has space in its path
This reverts commit 596ba77c5fdca79486396989e549632153055caf.
Project:
Repository: spark
Updated Branches:
refs/heads/master e6f08fb42 -> fc957dc78
[SPARK-6720][MLLIB] PySpark MultivariateStatisticalSummary unit test for
normL1...
... and normL2.
Add test cases to insufficient unit test for `normL1` and `normL2`.
Ref: https://github.com/apache/spark/pull/5359
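For reference, the two statistics these tests cover are simple to state. A plain-Scala sketch (not the MLlib implementation, which aggregates these per column across a dataset):

```scala
// Plain-Scala sketch of the statistics the new tests cover.
object Norms {
  // L1 norm: sum of the absolute values of the components.
  def normL1(v: Array[Double]): Double = v.map(math.abs).sum

  // L2 (Euclidean) norm: square root of the sum of squared components.
  def normL2(v: Array[Double]): Double = math.sqrt(v.map(x => x * x).sum)

  def main(args: Array[String]): Unit = {
    val v = Array(3.0, -4.0)
    assert(normL1(v) == 7.0)
    assert(normL2(v) == 5.0)
    println(s"normL1=${normL1(v)} normL2=${normL2(v)}")
  }
}
```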
Repository: spark
Updated Branches:
refs/heads/master fc957dc78 -> 77bcceb9f
[SPARK-6748] [SQL] Makes QueryPlan.schema a lazy val
`DataFrame.collect()` calls `SparkPlan.executeCollect()`, which consists of a
single line:
```scala
execute().map(ScalaReflection.convertRowToScala(_,
```
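The effect of the def-to-lazy-val change can be sketched independently of Spark: a `lazy val` is computed once on first access and cached, while a `def` recomputes on every call (a simplified illustration, not the actual QueryPlan code):

```scala
// Minimal sketch of the def -> lazy val change.
object LazySchemaSketch {
  var defCount = 0
  var lazyCount = 0

  class PlanWithDef {
    // Recomputed on every access.
    def schema: String = { defCount += 1; "a,b,c" }
  }
  class PlanWithLazyVal {
    // Computed once on first access, then cached.
    lazy val schema: String = { lazyCount += 1; "a,b,c" }
  }

  def main(args: Array[String]): Unit = {
    val p1 = new PlanWithDef
    p1.schema; p1.schema; p1.schema
    val p2 = new PlanWithLazyVal
    p2.schema; p2.schema; p2.schema
    assert(defCount == 3)  // def: three computations
    assert(lazyCount == 1) // lazy val: one computation
  }
}
```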
Repository: spark
Updated Branches:
refs/heads/master 77bcceb9f -> c83e03948
[SPARK-6737] Fix memory leak in OutputCommitCoordinator
This patch fixes a memory leak in the DAGScheduler, which caused us to leak a
map entry per submitted stage. The problem is that the OutputCommitCoordinator
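The general shape of such a leak, and the usual fix, can be sketched without Spark: a coordinator keyed by stage ID must drop its entry when the stage completes (names here are illustrative, not the actual OutputCommitCoordinator API):

```scala
import scala.collection.mutable

// Illustrative sketch of a per-stage map leak and its cleanup.
object CoordinatorSketch {
  private val authorizedCommitters = mutable.Map.empty[Int, mutable.Set[Int]]

  def stageStart(stageId: Int): Unit =
    authorizedCommitters(stageId) = mutable.Set.empty[Int]

  // The fix: without this cleanup, one map entry is retained forever
  // for every stage ever submitted.
  def stageEnd(stageId: Int): Unit =
    authorizedCommitters -= stageId

  def trackedStages: Int = authorizedCommitters.size

  def main(args: Array[String]): Unit = {
    (1 to 100).foreach { id => stageStart(id); stageEnd(id) }
    assert(trackedStages == 0) // no leaked entries once stages complete
  }
}
```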
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ab1b8edb8 -> 277733b1d
[SPARK-6737] Fix memory leak in OutputCommitCoordinator
This patch fixes a memory leak in the DAGScheduler, which caused us to leak a
map entry per submitted stage. The problem is that the
Repository: spark
Updated Branches:
refs/heads/master d138aa8ee -> 8d2a36c0f
[SPARK-6754] Remove unnecessary TaskContextHelper
The TaskContextHelper was originally necessary because TaskContext was written
in Java, which does
not have a way to specify that classes are package-private, so
Repository: spark
Updated Branches:
refs/heads/master c83e03948 -> d138aa8ee
[SPARK-6705][MLLIB] Add fit intercept api to ml logisticregression
I have the fit intercept enabled by default for logistic regression; I
wonder what others think here. I understand that it enables allocation
by
Revert Preparing Spark release v1.3.1-rc1
This reverts commit 0dcb5d9f31b713ed90bcec63ebc4e530cbb69851.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/00837ccd
Tree:
Repository: spark
Updated Branches:
refs/heads/branch-1.3 277733b1d -> 00837ccd0
Revert Preparing development version 1.3.2-SNAPSHOT
This reverts commit 728c1f927822eb6b12f04dc47109feb6fbe02ec2.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/branch-1.3 00837ccd0 -> cdef7d080
Preparing Spark release v1.3.1-rc2
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7c4473aa
Tree:
Repository: spark
Updated Tags: refs/tags/v1.3.1-rc2 [created] 7c4473aa5
Preparing development version 1.3.2-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/cdef7d08
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/cdef7d08
Diff: