Repository: spark
Updated Branches:
refs/heads/branch-1.0 86ad12d44 -> fed98b934
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
Repository: spark
Updated Branches:
refs/heads/branch-1.1 672f3228c -> 36eed2f9e
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
Repository: spark
Updated Branches:
refs/heads/branch-1.2 aefb113c8 -> 23bf3071f
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
Repository: spark
Updated Branches:
refs/heads/branch-1.3 476b87d31 -> bbd377228
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
Repository: spark
Updated Branches:
refs/heads/master 43adbd561 -> 452eb82dd
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
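The failure mode described above, and the spirit of the fix, can be sketched in plain Python (the helper names here are illustrative, not the actual patch):

```python
# Hypothetical sketch of a robust version check: parse the dotted version
# string into an integer tuple instead of comparing strings lexicographically.

def version_tuple(version):
    """Parse a dotted version string like '1.10.4' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

# String comparison sorts '1.10' before '1.4', which is numerically wrong.
assert "1.10" < "1.4"

# Integer-tuple comparison gives the correct numeric ordering.
assert version_tuple("1.10") > version_tuple("1.4")

def numpy_is_recent_enough(version, minimum="1.4"):
    """Return True if the given NumPy version meets the minimum version."""
    return version_tuple(version) >= version_tuple(minimum)
```

With this scheme, `numpy_is_recent_enough("1.10")` is True while `numpy_is_recent_enough("1.3")` is False, which a plain string comparison gets backwards.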
Repository: spark
Updated Branches:
refs/heads/branch-1.4 33edb2b79 -> bd57af387
[SPARK-8032] [PYSPARK] Make version checking for NumPy in MLlib more robust
The current check compares version strings: is `1.x` less than `1.4`? This fails when x
has more than one digit, since then x > 4 numerically, yet as a string `1.x` < `1.4`.
Repository: spark
Updated Branches:
refs/heads/branch-1.4 88399c34b -> 33edb2b79
[SPARK-8043] [MLLIB] [DOC] update NaiveBayes and SVM examples in doc
jira: https://issues.apache.org/jira/browse/SPARK-8043
I found some issues while testing the save/load examples in the markdown
documents, as a p
Repository: spark
Updated Branches:
refs/heads/master ccaa82329 -> 43adbd561
[SPARK-8043] [MLLIB] [DOC] update NaiveBayes and SVM examples in doc
jira: https://issues.apache.org/jira/browse/SPARK-8043
I found some issues while testing the save/load examples in the markdown
documents, as a part
Repository: spark
Updated Branches:
refs/heads/master 07c16cb5b -> ccaa82329
[MINOR] make the launcher project name consistent with others
I found this by chance while building Spark and think it is better to keep its
name consistent with the other sub-projects (Spark Project *).
I am not gonna
Repository: spark
Updated Branches:
refs/heads/master cafd5056e -> 07c16cb5b
[SPARK-8053] [MLLIB] renamed scalingVector to scalingVec
I searched the Spark codebase for all occurrences of "scalingVector"
CC: mengxr
Author: Joseph K. Bradley
Closes #6596 from jkbradley/scalingVec-rename and
Repository: spark
Updated Branches:
refs/heads/branch-1.4 6391be872 -> 88399c34b
[SPARK-8053] [MLLIB] renamed scalingVector to scalingVec
I searched the Spark codebase for all occurrences of "scalingVector"
CC: mengxr
Author: Joseph K. Bradley
Closes #6596 from jkbradley/scalingVec-rename
Author: pwendell
Date: Wed Jun 3 05:21:50 2015
New Revision: 1683233
URL: http://svn.apache.org/r1683233
Log:
Adding information about nightly builds
Modified:
spark/downloads.md
spark/site/downloads.html
Modified: spark/downloads.md
URL:
http://svn.apache.org/viewvc/spark/downloads.md
Repository: spark
Updated Branches:
refs/heads/master a86b3e9b9 -> cafd5056e
[SPARK-7691] [SQL] Refactor CatalystTypeConverter to use type-specific row
accessors
This patch significantly refactors CatalystTypeConverters to both clean up the
code and enable these conversions to work with futu
Repository: spark
Updated Branches:
refs/heads/branch-1.4 6a3e32ad1 -> 6391be872
[SPARK-7547] [ML] Scala Example code for ElasticNet
This is scala example code for both linear and logistic regression. Python and
Java versions are to be added.
Author: DB Tsai
Closes #6576 from dbtsai/elasti
Repository: spark
Updated Branches:
refs/heads/master c3f4c3257 -> a86b3e9b9
[SPARK-7547] [ML] Scala Example code for ElasticNet
This is scala example code for both linear and logistic regression. Python and
Java versions are to be added.
Author: DB Tsai
Closes #6576 from dbtsai/elasticNet
Repository: spark
Updated Branches:
refs/heads/branch-1.4 ab713af56 -> 6a3e32ad1
[SPARK-7387] [ML] [DOC] CrossValidator example code in Python
Author: Ram Sriharsha
Closes #6358 from harsha2010/SPARK-7387 and squashes the following commits:
63efda2 [Ram Sriharsha] more examples for classifi
Repository: spark
Updated Branches:
refs/heads/master 5cd6a63d9 -> c3f4c3257
[SPARK-7387] [ML] [DOC] CrossValidator example code in Python
Author: Ram Sriharsha
Closes #6358 from harsha2010/SPARK-7387 and squashes the following commits:
63efda2 [Ram Sriharsha] more examples for classifier t
Preparing development version 1.4.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/ab713af5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/ab713af5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/ab7
Repository: spark
Updated Branches:
refs/heads/branch-1.4 0d8372099 -> ab713af56
Preparing Spark release v1.4.0-rc4
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/22596c53
Tree: http://git-wip-us.apache.org/repos/asf/spar
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [created] 22596c534
-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [deleted] a14fad11e
Repository: spark
Updated Branches:
refs/heads/branch-1.4 daeaa0c5a -> 0d8372099
[SQL] [TEST] [MINOR] Follow-up of PR #6493, use Guava API to ensure Java 6
friendliness
This is a follow-up of PR #6493, which has been reverted in branch-1.4 because
it uses Java 7 specific APIs and breaks Java
Repository: spark
Updated Branches:
refs/heads/master 89f21f66b -> 5cd6a63d9
[SQL] [TEST] [MINOR] Follow-up of PR #6493, use Guava API to ensure Java 6
friendliness
This is a follow-up of PR #6493, which has been reverted in branch-1.4 because
it uses Java 7 specific APIs and breaks Java 6 b
Repository: spark
Updated Branches:
refs/heads/branch-1.4 e3c35b217 -> daeaa0c5a
[SQL] [TEST] [MINOR] Uses a temporary log4j.properties in HiveThriftServer2Test
to ensure expected logging behavior
The `HiveThriftServer2Test` relies on proper logging behavior to assert whether
the Thrift serv
Preparing development version 1.4.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/e3c35b21
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/e3c35b21
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/e3c
Repository: spark
Updated Branches:
refs/heads/branch-1.4 97d4cd074 -> e3c35b217
Preparing Spark release v1.4.0-rc4
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a14fad11
Tree: http://git-wip-us.apache.org/repos/asf/spar
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [created] a14fad11e
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [deleted] d630f4d69
Repository: spark
Updated Branches:
refs/heads/branch-1.4 92ccc5ba3 -> 97d4cd074
[SPARK-8049] [MLLIB] drop tmp col from OneVsRest output
The temporary column should be dropped after we get the prediction column.
harsha2010
Author: Xiangrui Meng
Closes #6592 from mengxr/SPARK-8049 and squas
Repository: spark
Updated Branches:
refs/heads/master 605ddbb27 -> 89f21f66b
[SPARK-8049] [MLLIB] drop tmp col from OneVsRest output
The temporary column should be dropped after we get the prediction column.
harsha2010
Author: Xiangrui Meng
Closes #6592 from mengxr/SPARK-8049 and squashes
Preparing development version 1.4.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/92ccc5ba
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/92ccc5ba
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/92c
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [created] d630f4d69
Repository: spark
Updated Branches:
refs/heads/branch-1.4 6b0f61563 -> 92ccc5ba3
Preparing Spark release v1.4.0-rc4
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/d630f4d6
Tree: http://git-wip-us.apache.org/repos/asf/spar
Repository: spark
Updated Branches:
refs/heads/branch-1.4 cbaf59544 -> 6b0f61563
[SPARK-8038] [SQL] [PYSPARK] fix Column.when() and otherwise()
Thanks ogirardot, closes #6580
cc rxin JoshRosen
Author: Davies Liu
Closes #6590 from davies/when and squashes the following commits:
c0f2069 [Da
Repository: spark
Updated Branches:
refs/heads/master 686a45f0b -> 605ddbb27
[SPARK-8038] [SQL] [PYSPARK] fix Column.when() and otherwise()
Thanks ogirardot, closes #6580
cc rxin JoshRosen
Author: Davies Liu
Closes #6590 from davies/when and squashes the following commits:
c0f2069 [Davies
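The `when()`/`otherwise()` chain being fixed implements SQL's CASE WHEN semantics: predicates are tried in order and the first match wins. A plain-Python sketch of those semantics (the names here are illustrative, not the actual pyspark implementation):

```python
# Illustrative sketch of CASE WHEN evaluation order, as mirrored by
# PySpark's Column.when(...).when(...).otherwise(...) chain.

def case_when(value, branches, default=None):
    """Try (predicate, result) pairs in order; fall back to the default."""
    for predicate, result in branches:
        if predicate(value):
            return result
    return default

label = case_when(
    7,
    [(lambda v: v < 5, "small"), (lambda v: v < 10, "medium")],
    default="large",
)
# → "medium": the first matching predicate (v < 10) decides the result.
```

The ordering matters: a value of 3 would match `v < 5` first and never reach the `v < 10` branch, just as in a SQL CASE expression.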
Repository: spark
Updated Branches:
refs/heads/branch-1.4 815e05654 -> cbaf59544
[SPARK-8014] [SQL] Avoid premature metadata discovery when writing a
HadoopFsRelation with a save mode other than Append
The current code references the schema of the DataFrame to be written before
checking save
Repository: spark
Updated Branches:
refs/heads/master ad06727fe -> 686a45f0b
[SPARK-8014] [SQL] Avoid premature metadata discovery when writing a
HadoopFsRelation with a save mode other than Append
The current code references the schema of the DataFrame to be written before
checking save mod
Repository: spark
Updated Branches:
refs/heads/branch-1.4 139c8240f -> 815e05654
[SPARK-7985] [ML] [MLlib] [Docs] Remove "fittingParamMap" references. Updating
ML Doc "Estimator, Transformer, and Param" examples.
Updating ML Doc's *"Estimator, Transformer, and Param"* example to use
`model.e
Repository: spark
Updated Branches:
refs/heads/master 0071bd8d3 -> ad06727fe
[SPARK-7985] [ML] [MLlib] [Docs] Remove "fittingParamMap" references. Updating
ML Doc "Estimator, Transformer, and Param" examples.
Updating ML Doc's *"Estimator, Transformer, and Param"* example to use
`model.extra
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [deleted] 48c506724
Repository: spark
Updated Branches:
refs/heads/branch-1.4 fa292dc3d -> 139c8240f
[MINOR] Enable PySpark SQL readerwriter and window tests
PySpark SQL's `readerwriter` and `window` doctests weren't being run by our
test runner script; this patch re-enables them.
Author: Josh Rosen
Closes #6
Repository: spark
Updated Branches:
refs/heads/branch-1.3 ad5daa3a3 -> 476b87d31
[MINOR] [UI] Improve error message on log page
Currently, if a bad log type is specified, we get a blank page.
We should provide a more informative error message.
Project: http://git-wip-us.apache.org/repos/asf/spa
Repository: spark
Updated Branches:
refs/heads/master 1bb5d716c -> 0071bd8d3
[SPARK-8015] [FLUME] Remove Guava dependency from flume-sink.
The minimal change would be to disable shading of Guava in the module,
and rely on the transitive dependency from other libraries instead. But
since Guava'
Repository: spark
Updated Branches:
refs/heads/branch-1.4 f71a09de6 -> fa292dc3d
[SPARK-8015] [FLUME] Remove Guava dependency from flume-sink.
The minimal change would be to disable shading of Guava in the module,
and rely on the transitive dependency from other libraries instead. But
since Gu
Repository: spark
Updated Branches:
refs/heads/branch-1.4 8c3fc3a6c -> f71a09de6
[SPARK-8037] [SQL] Ignores files whose name starts with dot in HadoopFsRelation
Author: Cheng Lian
Closes #6581 from liancheng/spark-8037 and squashes the following commits:
d08e97b [Cheng Lian] Ignores files w
Repository: spark
Updated Branches:
refs/heads/master bd97840d5 -> 1bb5d716c
[SPARK-8037] [SQL] Ignores files whose name starts with dot in HadoopFsRelation
Author: Cheng Lian
Closes #6581 from liancheng/spark-8037 and squashes the following commits:
d08e97b [Cheng Lian] Ignores files whose
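The behavior this commit adds, skipping dot-prefixed files when listing a HadoopFsRelation's inputs, can be sketched minimally (this is an illustration, not the Scala patch itself):

```python
# Minimal sketch: keep only files whose base name does not start with a dot,
# i.e. skip hidden/metadata files such as Hadoop's .crc checksum files.
import os

def visible_data_files(paths):
    """Filter out paths whose file name begins with '.'."""
    return [p for p in paths if not os.path.basename(p).startswith(".")]

files = ["part-00000", ".part-00000.crc", "data/_SUCCESS", "data/.hidden"]
# → ['part-00000', 'data/_SUCCESS']
```

Note the filter looks at the base name, so `data/.hidden` is dropped even though the path itself does not start with a dot.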
Repository: spark
Updated Branches:
refs/heads/branch-1.4 97fedf1a0 -> 8c3fc3a6c
[HOT-FIX] Add EvaluatedType back to RDG
https://github.com/apache/spark/commit/87941ff8c49a6661f22c31aa7b84ac1fce768135
accidentally removed the EvaluatedType.
Author: Yin Huai
Closes #6589 from yhuai/getBackE
Repository: spark
Updated Branches:
refs/heads/master 445647a1a -> bd97840d5
[SPARK-7432] [MLLIB] fix flaky CrossValidator doctest
The new test uses CV to compare `maxIter=0` and `maxIter=1`, and validate on
the evaluation result. jkbradley
Author: Xiangrui Meng
Closes #6572 from mengxr/SP
Repository: spark
Updated Branches:
refs/heads/branch-1.4 92a677891 -> 97fedf1a0
[SPARK-7432] [MLLIB] fix flaky CrossValidator doctest
The new test uses CV to compare `maxIter=0` and `maxIter=1`, and validate on
the evaluation result. jkbradley
Author: Xiangrui Meng
Closes #6572 from mengx
Repository: spark
Updated Branches:
refs/heads/branch-1.4 292ee1a99 -> 92a677891
Preparing Spark release v1.4.0-rc4
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/48c50672
Tree: http://git-wip-us.apache.org/repos/asf/spar
Repository: spark
Updated Tags: refs/tags/v1.4.0-rc4 [created] 48c506724
Preparing development version 1.4.0-SNAPSHOT
Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/92a67789
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/92a67789
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/92a
Repository: spark
Updated Branches:
refs/heads/branch-1.4 87941ff8c -> 292ee1a99
[SPARK-8021] [SQL] [PYSPARK] make Python read/write API consistent with Scala
add schema()/format()/options() for reader, add
mode()/format()/options()/partitionBy() for writer
cc rxin yhuai pwendell
Author:
Repository: spark
Updated Branches:
refs/heads/master 0f80990bf -> 445647a1a
[SPARK-8021] [SQL] [PYSPARK] make Python read/write API consistent with Scala
add schema()/format()/options() for reader, add
mode()/format()/options()/partitionBy() for writer
cc rxin yhuai pwendell
Author: Davi
Repository: spark
Updated Branches:
refs/heads/branch-1.4 4940630f5 -> 87941ff8c
[SPARK-8023][SQL] Add "deterministic" attribute to Expression to avoid
collapsing nondeterministic projects.
This closes #6570.
Author: Yin Huai
Author: Reynold Xin
Closes #6573 from rxin/deterministic and sq
Repository: spark
Updated Branches:
refs/heads/master 7b7f7b6c6 -> 0f80990bf
[SPARK-8023][SQL] Add "deterministic" attribute to Expression to avoid
collapsing nondeterministic projects.
This closes #6570.
Author: Yin Huai
Author: Reynold Xin
Closes #6573 from rxin/deterministic and squash
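The rationale for the `deterministic` attribute can be sketched in a few lines (names are hypothetical, not Catalyst's): collapsing two adjacent projections is only safe when every inner expression is deterministic, because a nondeterministic expression such as `rand()` referenced more than once after collapsing would be re-evaluated with different results.

```python
# Illustrative guard for an optimizer rule: only collapse
# Project(Project(...)) when all inner expressions are deterministic.

def can_collapse(inner_exprs):
    """Return True if every inner expression is safe to inline."""
    return all(expr.get("deterministic", True) for expr in inner_exprs)

assert can_collapse([{"name": "a + 1", "deterministic": True}])
assert not can_collapse([{"name": "rand()", "deterministic": False}])
```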
Repository: spark
Updated Branches:
refs/heads/master bcb47ad77 -> 7b7f7b6c6
[SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get
constructed too early
https://issues.apache.org/jira/browse/SPARK-8020
Author: Yin Huai
Closes #6571 from yhuai/SPARK-8020-1 and squas
Repository: spark
Updated Branches:
refs/heads/branch-1.4 9d6475b93 -> 4940630f5
[SPARK-8020] [SQL] Spark SQL conf in spark-defaults.conf make metadataHive get
constructed too early
https://issues.apache.org/jira/browse/SPARK-8020
Author: Yin Huai
Closes #6571 from yhuai/SPARK-8020-1 and s