Repository: spark
Updated Branches:
refs/heads/master 9e48cdfbd -> b541b3163
[DOC][MINOR][SQL] Fix internal link
It doesn't show up as a hyperlink currently. It will show up as a hyperlink
after this change.
Author: Rohit Agarwal
Closes #9544 from mindprince/patch-2.
Repository: spark
Updated Branches:
refs/heads/branch-1.6 f53c9fb18 -> 0f03bd13e
[DOC][MINOR][SQL] Fix internal link
It doesn't show up as a hyperlink currently. It will show up as a hyperlink
after this change.
Author: Rohit Agarwal
Closes #9544 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 7eaf48eeb -> f53c9fb18
[SPARK-11218][CORE] show help messages for start-slave and start-master
Addressing https://issues.apache.org/jira/browse/SPARK-11218, mostly copied
from start-thriftserver.sh.
```
charlesyeh-mbp:spark charlesyeh$
Repository: spark
Updated Branches:
refs/heads/master 88a3fdcc7 -> 5039a49b6
[SPARK-10471][CORE][MESOS] prevent getting offers for unmet constraints
This change rejects offers from slaves with unmet constraints for 120s to
mitigate offer starvation.
This prevents Mesos from sending us these offers
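The refusal window described above can be sketched in plain Python; the names
(`REFUSE_SECONDS`, `handle_offer`, the attribute-equality constraint check) are
illustrative assumptions, not the actual Mesos scheduler callback API.

```python
import time

REFUSE_SECONDS = 120  # how long to suppress offers from slaves with unmet constraints

# hypothetical record of when each slave's offers may be considered again
declined_until = {}

def meets_constraints(offer_attributes, constraints):
    """Return True if every required attribute matches the offer (an assumption
    about how constraints are checked, for illustration only)."""
    return all(offer_attributes.get(k) == v for k, v in constraints.items())

def handle_offer(slave_id, offer_attributes, constraints, now=None):
    """Accept the offer, or decline it and suppress further offers from this
    slave for REFUSE_SECONDS, mitigating offer starvation."""
    now = time.time() if now is None else now
    if now < declined_until.get(slave_id, 0):
        return "suppressed"
    if meets_constraints(offer_attributes, constraints):
        return "accepted"
    declined_until[slave_id] = now + REFUSE_SECONDS
    return "declined"
```

The real change passes a refuse filter to Mesos when declining, so Mesos itself
holds back the offers instead of the driver tracking them.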
Repository: spark
Updated Branches:
refs/heads/branch-1.6 2459b3432 -> 74f50275e
[SPARK-10471][CORE][MESOS] prevent getting offers for unmet constraints
This change rejects offers from slaves with unmet constraints for 120s to
mitigate offer starvation.
This prevents Mesos from sending us these
Repository: spark
Updated Branches:
refs/heads/branch-1.6 62f664c5a -> 2459b3432
[SPARK-10280][MLLIB][PYSPARK][DOCS] Add @since annotation to
pyspark.ml.classification
Author: Yu ISHIKAWA
Closes #8690 from yu-iskw/SPARK-10280.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master 860ea0d38 -> 88a3fdcc7
[SPARK-10280][MLLIB][PYSPARK][DOCS] Add @since annotation to
pyspark.ml.classification
Author: Yu ISHIKAWA
Closes #8690 from yu-iskw/SPARK-10280.
Project:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 74f50275e -> 129cfab4f
[SPARK-11552][DOCS] Replaced example code in ml-decision-tree.md using
include_example
I have tested it locally; it is working fine, please review
Author: sachin aggarwal
Repository: spark
Updated Branches:
refs/heads/branch-1.6 129cfab4f -> 85bb319a2
[SPARK-11548][DOCS] Replaced example code in mllib-collaborative-filtering.md
using include_example
Kindly review the changes.
Author: Rishabh Bhardwaj
Closes #9519 from
Repository: spark
Updated Branches:
refs/heads/master 51d41e4b1 -> b7720fa45
[SPARK-11548][DOCS] Replaced example code in mllib-collaborative-filtering.md
using include_example
Kindly review the changes.
Author: Rishabh Bhardwaj
Closes #9519 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 dccc4645d -> ab7da0eae
[SPARK-11462][STREAMING] Add JavaStreamingListener
Currently, StreamingListener is not Java friendly because it exposes some Scala
collections, such as Option and Map, directly to Java users.
This PR added a Java
Repository: spark
Updated Branches:
refs/heads/master 1f0f14efe -> 6502944f3
[SPARK-11333][STREAMING] Add executorId to ReceiverInfo and display it in UI
Expose executorId to `ReceiverInfo` and UI since it's helpful when there are
multiple executors running on the same host. Screenshot:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 ab7da0eae -> d33f18c42
[SPARK-11333][STREAMING] Add executorId to ReceiverInfo and display it in UI
Expose executorId to `ReceiverInfo` and UI since it's helpful when there are
multiple executors running on the same host. Screenshot:
Repository: spark
Updated Branches:
refs/heads/master 6502944f3 -> 1431319e5
Add mockito as an explicit test dependency to spark-streaming
While sbt compiles successfully since it properly pulls in the mockito
dependency, Maven builds have broken. We need this in ASAP.
tdas
Author: Burak Yavuz
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d33f18c42 -> d6f4b56a6
Add mockito as an explicit test dependency to spark-streaming
While sbt compiles successfully since it properly pulls in the mockito
dependency, Maven builds have broken. We need this in ASAP.
tdas
Author: Burak
Repository: spark
Updated Branches:
refs/heads/master 1431319e5 -> c4e19b381
[SPARK-11587][SPARKR] Fix the summary generic to match base R
The signature is summary(object, ...) as defined in
https://stat.ethz.ch/R-manual/R-devel/library/base/html/summary.html
Author: Shivaram Venkataraman
Repository: spark
Updated Branches:
refs/heads/branch-1.6 d6f4b56a6 -> a5651f0a5
[SPARK-11587][SPARKR] Fix the summary generic to match base R
The signature is summary(object, ...) as defined in
https://stat.ethz.ch/R-manual/R-devel/library/base/html/summary.html
Author: Shivaram
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a91d21314 -> c859be2dd
Typo fixes + code readability improvements
Author: Jacek Laskowski
Closes #9501 from jaceklaskowski/typos-with-style.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/branch-1.6 c859be2dd -> 42d933fbb
[SPARK-2] DAG visualization: display RDD callsite
https://cloud.githubusercontent.com/assets/2133137/10870343/2a8cd070-807d-11e5-857a-4ebcace77b5b.png
mateiz sarutak
Author: Andrew Or
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fb469e76a -> 7b4d7abfc
[SPARK-9865][SPARKR] Flaky SparkR test: test_sparkSQL.R: sample on a DataFrame
Make sample test less flaky by setting the seed
Tested with
```
repeat { if (count(sample(df, FALSE, 0.1)) == 3) { break } }
```
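The same idea in Python terms: seeding the generator makes a fractional sample
deterministic, so a test assertion no longer depends on luck. This is a sketch
with the standard library, not the SparkR `sample` API; `sample_fraction` is an
illustrative helper.

```python
import random

def sample_fraction(rows, fraction, seed):
    """Deterministically keep ~fraction of rows using an explicit seed."""
    rng = random.Random(seed)
    return [r for r in rows if rng.random() < fraction]

rows = list(range(100))
# With a fixed seed, repeated runs select exactly the same rows,
# so a test can assert on the result without flakiness.
first = sample_fraction(rows, 0.1, seed=42)
second = sample_fraction(rows, 0.1, seed=42)
assert first == second
```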
Repository: spark
Updated Branches:
refs/heads/branch-1.6 2946c85f5 -> fb469e76a
[SPARK-2] Fix Scala 2.11 compilation error in RDDInfo.scala
As shown in
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/job/Spark-Master-Scala211-Compile/1946/console,
compilation fails with:
Repository: spark
Updated Branches:
refs/heads/master 404a28f4e -> cd174882a
[SPARK-9865][SPARKR] Flaky SparkR test: test_sparkSQL.R: sample on a DataFrame
Make sample test less flaky by setting the seed
Tested with
```
repeat { if (count(sample(df, FALSE, 0.1)) == 3) { break } }
```
Repository: spark
Updated Branches:
refs/heads/branch-1.6 7b4d7abfc -> 006d73a74
[DOCS] Fix typo for Python section on unifying Kafka streams
1) kafkaStreams is a list. The list should be unpacked when passing it into
the streaming context union method, which accepts a variable number of
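The fix described above boils down to Python argument unpacking: a method that
accepts a variable number of streams must receive the list's elements, not the
list itself. A minimal sketch; `union` here is a stand-in for illustration, not
the actual StreamingContext API.

```python
def union(*streams):
    """Stand-in for a variadic union: merges its stream arguments in order."""
    merged = []
    for s in streams:
        merged.extend(s)
    return merged

kafka_streams = [["a", "b"], ["c"], ["d", "e"]]

# Wrong: passes a single argument (the list itself).
# union(kafka_streams)
# Right: unpack the list into separate arguments with *.
unified = union(*kafka_streams)
# unified -> ['a', 'b', 'c', 'd', 'e']
```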
Repository: spark
Updated Branches:
refs/heads/branch-1.5 78a5cf198 -> 6b314fe9e
[SPARK-11577][SQL] Handle code review comments for SPARK-11188
Handle the code review comments from Michael for SPARK-11188
Author: Dilip Biswal
Closes #9551 from
Repository: spark
Updated Branches:
refs/heads/master 9b88e1dca -> 08a7a836c
[SPARK-10565][CORE] add missing web UI stats to /api/v1/applications JSON
I looked at the other endpoints, and they don't seem to be missing any fields.
Added fields:
Repository: spark
Updated Branches:
refs/heads/master 08a7a836c -> 404a28f4e
[SPARK-2] Fix Scala 2.11 compilation error in RDDInfo.scala
As shown in
https://amplab.cs.berkeley.edu/jenkins/view/Spark-QA-Compile/job/Spark-Master-Scala211-Compile/1946/console,
compilation fails with:
```
Repository: spark
Updated Branches:
refs/heads/branch-1.6 fc2942d12 -> 2946c85f5
[SPARK-11362] [SQL] Use Spark BitSet in BroadcastNestedLoopJoin
JIRA: https://issues.apache.org/jira/browse/SPARK-11362
We use scala.collection.mutable.BitSet in BroadcastNestedLoopJoin now. We
should use
Repository: spark
Updated Branches:
refs/heads/master cd174882a -> 874cd66d4
[DOCS] Fix typo for Python section on unifying Kafka streams
1) kafkaStreams is a list. The list should be unpacked when passing it into
the streaming context union method, which accepts a variable number of
Repository: spark
Updated Branches:
refs/heads/branch-1.5 6b314fe9e -> a33fd737c
[DOCS] Fix typo for Python section on unifying Kafka streams
1) kafkaStreams is a list. The list should be unpacked when passing it into
the streaming context union method, which accepts a variable number of
Repository: spark
Updated Branches:
refs/heads/master b541b3163 -> 8c0e1b50e
[SPARK-11494][ML][R] Expose R-like summary statistics in SparkR::glm for linear
regression
Expose R-like summary statistics in SparkR::glm for linear regression, the
output of ```summary``` like
```Java
Repository: spark
Updated Branches:
refs/heads/branch-1.6 0f03bd13e -> 029e931da
[SPARK-11494][ML][R] Expose R-like summary statistics in SparkR::glm for linear
regression
Expose R-like summary statistics in SparkR::glm for linear regression, the
output of ```summary``` like
```Java
Repository: spark
Updated Branches:
refs/heads/branch-1.6 029e931da -> a85a9122f
[SPARK-10689][ML][DOC] User guide and example code for AFTSurvivalRegression
Add user guide and example code for ```AFTSurvivalRegression```.
Author: Yanbo Liang
Closes #9491 from
Repository: spark
Updated Branches:
refs/heads/master 8c0e1b50e -> d50a66cc0
[SPARK-10689][ML][DOC] User guide and example code for AFTSurvivalRegression
Add user guide and example code for ```AFTSurvivalRegression```.
Author: Yanbo Liang
Closes #9491 from
Repository: spark
Updated Branches:
refs/heads/master d50a66cc0 -> 9b88e1dca
[SPARK-11582][MLLIB] specifying pmml version attribute =4.2 in the root node of
pmml model
The current pmml models generated do not specify the pmml version in the root
node. This is a problem when using this pmml
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a85a9122f -> a91d21314
[SPARK-11582][MLLIB] specifying pmml version attribute =4.2 in the root node of
pmml model
The current pmml models generated do not specify the pmml version in the root
node. This is a problem when using this
Repository: spark
Updated Branches:
refs/heads/branch-1.5 a33fd737c -> 0512960fc
[SPARK-11581][DOCS] Example mllib code in documentation incorrectly computes MSE
Author: Bharat Lal
Closes #9560 from bharatl/SPARK-11581.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/branch-1.6 006d73a74 -> 62f664c5a
[SPARK-11581][DOCS] Example mllib code in documentation incorrectly computes MSE
Author: Bharat Lal
Closes #9560 from bharatl/SPARK-11581.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/branch-1.4 4f98014b9 -> 72ab06e8a
[SPARK-11581][DOCS] Example mllib code in documentation incorrectly computes MSE
Author: Bharat Lal
Closes #9560 from bharatl/SPARK-11581.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master 874cd66d4 -> 860ea0d38
[SPARK-11581][DOCS] Example mllib code in documentation incorrectly computes MSE
Author: Bharat Lal
Closes #9560 from bharatl/SPARK-11581.
Project:
Repository: spark
Updated Branches:
refs/heads/master c4e19b381 -> d6cd3a18e
[SPARK-11599] [SQL] fix NPE when resolve Hive UDF in SQLParser
The DataFrame APIs that take a SQL expression always use SQLParser, so the
HiveFunctionRegistry will be called outside of Hive state, causing an NPE if there
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a5651f0a5 -> b426d24db
[SPARK-11599] [SQL] fix NPE when resolve Hive UDF in SQLParser
The DataFrame APIs that take a SQL expression always use SQLParser, so the
HiveFunctionRegistry will be called outside of Hive state, causing an NPE if
Repository: spark
Updated Branches:
refs/heads/master d6cd3a18e -> 521b3cae1
[SPARK-11598] [SQL] enable tests for ShuffledHashOuterJoin
Author: Davies Liu
Closes #9573 from davies/join_condition.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/master 26062d226 -> 0ce6f9b2d
[SPARK-11141][STREAMING] Batch ReceivedBlockTrackerLogEvents for WAL writes
When using S3 as a directory for WALs, the writes take too long. The driver
gets very easily bottlenecked when multiple receivers send
Repository: spark
Updated Branches:
refs/heads/branch-1.6 116b7158f -> dccc4645d
[SPARK-11141][STREAMING] Batch ReceivedBlockTrackerLogEvents for WAL writes
When using S3 as a directory for WALs, the writes take too long. The driver
gets very easily bottlenecked when multiple receivers send
Repository: spark
Updated Branches:
refs/heads/master 0ce6f9b2d -> 1f0f14efe
[SPARK-11462][STREAMING] Add JavaStreamingListener
Currently, StreamingListener is not Java friendly because it exposes some Scala
collections, such as Option and Map, directly to Java users.
This PR added a Java
Repository: spark
Updated Branches:
refs/heads/branch-1.6 bdd8a6bd4 -> 9e80db7c7
[SPARK-11359][STREAMING][KINESIS] Checkpoint to DynamoDB even when new data
doesn't come in
Currently, the checkpoints to DynamoDB occur only when new data comes in, as we
update the clock for the
Repository: spark
Updated Branches:
refs/heads/master a3a7c9103 -> 8a2336893
[SPARK-6517][MLLIB] Implement the Algorithm of Hierarchical Clustering
I implemented a hierarchical clustering algorithm again. This PR doesn't
include examples, documentation, or spark.ml APIs. I am going to send
Repository: spark
Updated Branches:
refs/heads/branch-1.6 1585f559d -> b9adfdf9c
[SPARK-11564][SQL][FOLLOW-UP] improve java api for GroupedDataset
created `MapGroupFunction`, `FlatMapGroupFunction`, `CoGroupFunction`
Author: Wenchen Fan
Closes #9564 from
Repository: spark
Updated Branches:
refs/heads/branch-1.6 b9adfdf9c -> c42433d02
[SPARK-9557][SQL] Refactor ParquetFilterSuite and remove old ParquetFilters code
Actually this was resolved by https://github.com/apache/spark/pull/8275.
But I found the JIRA issue for this is not marked as
Repository: spark
Updated Branches:
refs/heads/master 5039a49b6 -> 51d41e4b1
[SPARK-11552][DOCS] Replaced example code in ml-decision-tree.md using
include_example
I have tested it locally; it is working fine, please review
Author: sachin aggarwal
Closes
Repository: spark
Updated Branches:
refs/heads/master b7720fa45 -> f138cb873
[SPARK-9301][SQL] Add collect_set and collect_list aggregate functions
For now they are thin wrappers around the corresponding Hive UDAFs.
One limitation with these in Hive 0.13.0 is that they only support aggregating
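The semantics of the two aggregates can be illustrated in plain Python:
collect_list keeps every value per group, duplicates included, while
collect_set keeps only distinct values. This is an illustrative sketch of the
semantics, not the Hive UDAF implementation; the sorted output of
`collect_set` is an assumption made here for deterministic results.

```python
from collections import defaultdict

def collect_list(pairs):
    """Group (key, value) pairs, keeping all values per key in arrival order."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return dict(groups)

def collect_set(pairs):
    """Group (key, value) pairs, keeping distinct values per key (sorted
    here only to make the output deterministic)."""
    return {k: sorted(set(v)) for k, v in collect_list(pairs).items()}

pairs = [("x", 1), ("x", 1), ("x", 2), ("y", 3)]
# collect_list(pairs) -> {'x': [1, 1, 2], 'y': [3]}
# collect_set(pairs)  -> {'x': [1, 2], 'y': [3]}
```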
Repository: spark
Updated Branches:
refs/heads/branch-1.6 85bb319a2 -> a6ee4f989
[SPARK-9301][SQL] Add collect_set and collect_list aggregate functions
For now they are thin wrappers around the corresponding Hive UDAFs.
One limitation with these in Hive 0.13.0 is that they only support
Repository: spark
Updated Branches:
refs/heads/master 7dc9d8dba -> 61f9c8711
[SPARK-11069][ML] Add RegexTokenizer option to convert to lowercase
jira: https://issues.apache.org/jira/browse/SPARK-11069
quotes from jira:
Tokenizer converts strings to lowercase automatically, but RegexTokenizer
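The behavior difference can be sketched as a regex tokenizer with an opt-in
lowercase flag; the function name and parameters below are illustrative, not
the actual spark.ml RegexTokenizer API.

```python
import re

def regex_tokenize(text, pattern=r"\s+", to_lowercase=True):
    """Split text on a regex pattern; optionally lowercase first, mirroring
    what the plain Tokenizer does unconditionally."""
    if to_lowercase:
        text = text.lower()
    return [t for t in re.split(pattern, text) if t]

# regex_tokenize("Hello Spark")                      -> ['hello', 'spark']
# regex_tokenize("Hello Spark", to_lowercase=False)  -> ['Hello', 'Spark']
```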
Repository: spark
Updated Branches:
refs/heads/branch-1.6 08253874a -> 34e824d90
[SPARK-11069][ML] Add RegexTokenizer option to convert to lowercase
jira: https://issues.apache.org/jira/browse/SPARK-11069
quotes from jira:
Tokenizer converts strings to lowercase automatically, but
Author: rxin
Date: Mon Nov 9 23:54:32 2015
New Revision: 1713570
URL: http://svn.apache.org/viewvc?rev=1713570&view=rev
Log:
Add Spark 1.5.2 doc
[This commit notification would consist of 805 parts, which exceeds the limit
of 50, so it was shortened to this summary.]
Repository: spark
Updated Tags: refs/tags/v1.4.2-rc1 [deleted] 0b22a3c7a
-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
Repository: spark
Updated Tags: refs/tags/v1.5.2-rc1 [deleted] ad6ade124
Repository: spark
Updated Tags: refs/tags/v1.3.2-rc1 [deleted] 5a139750b
Repository: spark
Updated Branches:
refs/heads/master 9c740a9dd -> 675c7e723
[SPARK-11564][SQL] Fix documentation for DataFrame.take/collect
Author: Reynold Xin
Closes #9557 from rxin/SPARK-11564-1.
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 5616282ce -> 08253874a
[SPARK-11610][MLLIB][PYTHON][DOCS] Make the docs of LDAModel.describeTopics in
Python more specific
cc jkbradley
Author: Yu ISHIKAWA
Closes #9577 from yu-iskw/SPARK-11610.
(cherry
Author: rxin
Date: Mon Nov 9 23:42:48 2015
New Revision: 11096
Log:
Add spark-1.5.2-rc2
Added:
dev/spark/spark-1.5.2-rc2/
dev/spark/spark-1.5.2-rc2/spark-1.5.2-bin-cdh4.tgz (with props)
dev/spark/spark-1.5.2-rc2/spark-1.5.2-bin-cdh4.tgz.asc
Repository: spark
Updated Tags: refs/tags/v1.5.2 [created] 5cf17f954
Repository: spark
Updated Branches:
refs/heads/master 2f3837885 -> 9c740a9dd
[SPARK-11578][SQL] User API for Typed Aggregation
This PR adds a new interface for user-defined aggregations that can be used in
`DataFrame` and `Dataset` operations to take all of the elements of a group and
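The shape of such an aggregation (take all the elements of a group and reduce
them to a single value) can be sketched in Python; `TypedAggregator` and its
zero/reduce/finish structure are illustrative stand-ins, not the actual
signature of the new interface.

```python
from collections import defaultdict

class TypedAggregator:
    """Illustrative aggregator: a zero value, a reduce step, and an optional
    finishing transform, applied independently to each group."""
    def __init__(self, zero, reduce_fn, finish_fn=lambda x: x):
        self.zero = zero
        self.reduce_fn = reduce_fn
        self.finish_fn = finish_fn

    def aggregate(self, pairs):
        acc = defaultdict(lambda: self.zero)
        for key, value in pairs:
            acc[key] = self.reduce_fn(acc[key], value)
        return {k: self.finish_fn(v) for k, v in acc.items()}

sum_agg = TypedAggregator(zero=0, reduce_fn=lambda a, b: a + b)
# sum_agg.aggregate([("a", 1), ("a", 2), ("b", 5)]) -> {'a': 3, 'b': 5}
```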
Repository: spark
Updated Branches:
refs/heads/branch-1.6 523db0df5 -> a9f58b445
[SPARK-11578][SQL] User API for Typed Aggregation
This PR adds a new interface for user-defined aggregations that can be used in
`DataFrame` and `Dataset` operations to take all of the elements of a group and
Repository: spark
Updated Branches:
refs/heads/master f138cb873 -> 150f6a89b
[SPARK-11595] [SQL] Fixes ADD JAR when the input path contains URL scheme
Author: Cheng Lian
Closes #9569 from liancheng/spark-11595.fix-add-jar.
Project:
Repository: spark
Updated Branches:
refs/heads/branch-1.6 a6ee4f989 -> bdd8a6bd4
[SPARK-11595] [SQL] Fixes ADD JAR when the input path contains URL scheme
Author: Cheng Lian
Closes #9569 from liancheng/spark-11595.fix-add-jar.
(cherry picked from commit
Repository: spark
Updated Branches:
refs/heads/master 61f9c8711 -> 26062d226
[SPARK-11198][STREAMING][KINESIS] Support de-aggregation of records during
recovery
While the KCL handles de-aggregation during regular operation, during
recovery we use the lower-level API, and therefore need
Repository: spark
Updated Branches:
refs/heads/branch-1.6 34e824d90 -> 116b7158f
[SPARK-11198][STREAMING][KINESIS] Support de-aggregation of records during
recovery
While the KCL handles de-aggregation during regular operation, during
recovery we use the lower-level API, and therefore