Repository: spark
Updated Branches:
refs/heads/branch-1.4 2337ccc15 -> 065d114c6
[SPARK-7430] [STREAMING] [TEST] General improvements to streaming tests to
increase debuggability
Author: Tathagata Das tathagata.das1...@gmail.com
Closes #5961 from tdas/SPARK-7430 and squashes the following
Repository: spark
Updated Branches:
refs/heads/branch-1.4 a038c5174 -> 91ce13109
[SPARK-7429] [ML] Params cleanups
Params.setDefault taking a set of ParamPairs should be annotated with varargs.
I thought it would not work before, but it apparently does.
CrossValidator.transform should call
Repository: spark
Updated Branches:
refs/heads/master 8b6b46e4f -> 4f87e9562
[SPARK-7429] [ML] Params cleanups
Params.setDefault taking a set of ParamPairs should be annotated with varargs.
I thought it would not work before, but it apparently does.
CrossValidator.transform should call
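The `setDefault` change above concerns Scala's `@varargs` annotation on a method taking `ParamPair*`. As a rough plain-Python analog (the names `set_default` and the pair tuples here are illustrative, not the actual ML `Params` API), passing a variable number of name/value pairs and only filling in values that are not already set looks like this:

```python
# Illustrative sketch of a varargs setDefault: accept any number of
# (name, value) pairs and record each as the default only if the
# parameter has no default yet.
def set_default(params, *pairs):
    for name, value in pairs:
        params.setdefault(name, value)
    return params

defaults = {}
set_default(defaults, ("maxIter", 10), ("regParam", 0.0))
set_default(defaults, ("maxIter", 99))  # already set; ignored
print(defaults)  # {'maxIter': 10, 'regParam': 0.0}
```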
Repository: spark
Updated Branches:
refs/heads/master 0c33bf817 -> 4eecf550a
[SPARK-7373] [MESOS] Add docker support for launching drivers in mesos cluster
mode.
Using the existing Docker support for Mesos, this also enables the Mesos cluster
mode scheduler to launch Spark drivers in Docker
Repository: spark
Updated Branches:
refs/heads/master f1216514b -> 88717ee4e
[SPARK-7347] DAG visualization: add tooltips to RDDs
This is an addition to #5729.
Here's an example with ALS.
<img src="https://issues.apache.org/jira/secure/attachment/12731039/tooltip.png" width="400px"/>
Author:
Repository: spark
Updated Branches:
refs/heads/branch-1.4 800c0fc8d -> 1b742a414
[SPARK-7347] DAG visualization: add tooltips to RDDs
This is an addition to #5729.
Here's an example with ALS.
<img src="https://issues.apache.org/jira/secure/attachment/12731039/tooltip.png" width="400px"/>
Repository: spark
Updated Branches:
refs/heads/branch-1.4 6b9737a83 -> 3038b26f1
[SPARK-7118] [Python] Add the coalesce Spark SQL function available in PySpark
This patch adds a proxy call from PySpark to the Spark SQL coalesce function;
it came out of a discussion on dev@spark
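The semantics being proxied are those of SQL `COALESCE`: return the first non-null argument. A minimal plain-Python sketch of that rule (this is an illustration of the semantics, not the `pyspark.sql.functions.coalesce` column API itself):

```python
# SQL COALESCE semantics: return the first non-None value,
# or None if every argument is None.
def coalesce(*values):
    for v in values:
        if v is not None:
            return v
    return None

print(coalesce(None, None, "a", "b"))  # a
print(coalesce(None, None))            # None
```

In PySpark itself the function operates on columns, e.g. `df.select(coalesce(df.a, df.b))`, returning per row the first column whose value is not null.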
Repository: spark
Updated Branches:
refs/heads/master dec8f5371 -> 074d75d4c
[SPARK-5213] [SQL] Remove the duplicated SparkSQLParser
This is a follow up of #5827 to remove the additional `SparkSQLParser`
Author: Cheng Hao hao.ch...@intel.com
Closes #5965 from
Repository: spark
Updated Branches:
refs/heads/branch-1.4 86f141c90 -> 2b0c42385
[SPARK-5213] [SQL] Remove the duplicated SparkSQLParser
This is a follow up of #5827 to remove the additional `SparkSQLParser`
Author: Cheng Hao hao.ch...@intel.com
Closes #5965 from
Repository: spark
Updated Branches:
refs/heads/branch-1.4 3038b26f1 -> ef835dc52
[SPARK-6093] [MLLIB] Add RegressionMetrics in PySpark/MLlib
https://issues.apache.org/jira/browse/SPARK-6093
Author: Yanbo Liang yblia...@gmail.com
Closes #5941 from yanboliang/spark-6093 and squashes the
Repository: spark
Updated Branches:
refs/heads/master 068c3158a -> 1712a7c70
[SPARK-6093] [MLLIB] Add RegressionMetrics in PySpark/MLlib
https://issues.apache.org/jira/browse/SPARK-6093
Author: Yanbo Liang yblia...@gmail.com
Closes #5941 from yanboliang/spark-6093 and squashes the following
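The metrics that `RegressionMetrics` exposes are standard error summaries over (label, prediction) pairs. A plain-Python sketch of the core ones (function name and return shape here are illustrative, not the MLlib API):

```python
import math

def regression_metrics(labels, predictions):
    """Compute MSE, RMSE, and MAE over paired labels and predictions."""
    errors = [p - l for l, p in zip(labels, predictions)]
    n = len(errors)
    mse = sum(e * e for e in errors) / n          # mean squared error
    mae = sum(abs(e) for e in errors) / n         # mean absolute error
    return {"mse": mse, "rmse": math.sqrt(mse), "mae": mae}

m = regression_metrics([3.0, -0.5, 2.0, 7.0], [2.5, 0.0, 2.0, 8.0])
print(m["mae"])  # 0.5
print(m["mse"])  # 0.375
```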
Repository: spark
Updated Branches:
refs/heads/branch-1.4 2b0c42385 -> d4e31bfcd
[SPARK-7399] [SPARK CORE] Fixed compilation error in scala 2.11
Scala has a deterministic naming scheme for the generated methods that return
default arguments. Here, one of the default arguments of an overloaded
Repository: spark
Updated Branches:
refs/heads/branch-1.4 9dcf4f78f -> 86f141c90
[SPARK-7116] [SQL] [PYSPARK] Remove cache() causing memory leak
This patch simply removes a `cache()` on an intermediate RDD when evaluating
Python UDFs.
Author: ksonj k...@siberie.de
Closes #5973 from
Repository: spark
Updated Branches:
refs/heads/branch-1.4 ef835dc52 -> 9dcf4f78f
[SPARK-1442] [SQL] [FOLLOW-UP] Address minor comments in Window Function PR
(#5604).
Address marmbrus and scwf's comments in #5604.
Author: Yin Huai yh...@databricks.com
Closes #5945 from yhuai/windowFollowup
Repository: spark
Updated Branches:
refs/heads/master 1712a7c70 -> 5784c8d95
[SPARK-1442] [SQL] [FOLLOW-UP] Address minor comments in Window Function PR
(#5604).
Address marmbrus and scwf's comments in #5604.
Author: Yin Huai yh...@databricks.com
Closes #5945 from yhuai/windowFollowup and
Repository: spark
Updated Branches:
refs/heads/master 3af423c92 -> 714db2ef5
[SPARK-7470] [SQL] Spark shell SQLContext crashes without hive
This only happens if you have `SPARK_PREPEND_CLASSES` set. Then I built it with
`build/sbt clean assembly compile` and just ran it with
Repository: spark
Updated Branches:
refs/heads/branch-1.4 1a3e9e982 -> bb5872f2d
[SPARK-7232] [SQL] Add a Substitution batch for spark sql analyzer
Added a new batch named `Substitution` before the `Resolution` batch. The
motivation is that there are kinds of cases where we want to do some
Repository: spark
Updated Branches:
refs/heads/master 714db2ef5 -> f496bf3c5
[SPARK-7232] [SQL] Add a Substitution batch for spark sql analyzer
Added a new batch named `Substitution` before the `Resolution` batch. The
motivation is that there are kinds of cases where we want to do some
[SPARK-6908] [SQL] Use isolated Hive client
This PR switches Spark SQL's Hive support to use the isolated hive client
interface introduced by #5851, instead of directly interacting with the client.
By using this isolated client we can now allow users to dynamically configure
the version of
Repository: spark
Updated Branches:
refs/heads/branch-1.4 2e8a141b5 -> 05454fd8a
http://git-wip-us.apache.org/repos/asf/spark/blob/05454fd8/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala
--
diff
Repository: spark
Updated Branches:
refs/heads/master 22ab70e06 -> cd1d4110c
http://git-wip-us.apache.org/repos/asf/spark/blob/cd1d4110/sql/hive/src/main/scala/org/apache/spark/sql/hive/client/ClientWrapper.scala
--
diff --git
Repository: spark
Updated Branches:
refs/heads/branch-1.4 28d423870 -> 9d0d28940
[SPARK-6986] [SQL] Use Serializer2 in more cases.
With
https://github.com/apache/spark/commit/0a2b15ce43cf6096e1a7ae060b7c8a4010ce3b92,
the serialization stream and deserialization stream have enough information
Repository: spark
Updated Branches:
refs/heads/master 92f8f803a -> 3af423c92
[SPARK-6986] [SQL] Use Serializer2 in more cases.
With
https://github.com/apache/spark/commit/0a2b15ce43cf6096e1a7ae060b7c8a4010ce3b92,
the serialization stream and deserialization stream have enough information to
Repository: spark
Updated Branches:
refs/heads/branch-1.4 9d0d28940 -> 1a3e9e982
[SPARK-7470] [SQL] Spark shell SQLContext crashes without hive
This only happens if you have `SPARK_PREPEND_CLASSES` set. Then I built it with
`build/sbt clean assembly compile` and just ran it with
Repository: spark
Updated Branches:
refs/heads/branch-1.4 4436e26e4 -> 76e58b5d8
[SPARK-5726] [MLLIB] Elementwise (Hadamard) Vector Product Transformer
See https://issues.apache.org/jira/browse/SPARK-5726
Author: Octavian Geagla ogea...@gmail.com
Author: Joseph K. Bradley
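The transformer above applies the Hadamard (elementwise) product: each component of an input vector is multiplied by the matching component of a fixed scaling vector. A minimal plain-Python sketch of that operation (the function name is illustrative, not the MLlib `ElementwiseProduct` API):

```python
def elementwise_product(scaling, vector):
    """Hadamard product: multiply each component by its scaling weight."""
    assert len(scaling) == len(vector), "vectors must have equal length"
    return [w * x for w, x in zip(scaling, vector)]

print(elementwise_product([0.0, 1.0, 2.0], [1.0, 2.0, 3.0]))  # [0.0, 2.0, 6.0]
```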
Repository: spark
Updated Branches:
refs/heads/master 658a478d3 -> e43803b8f
[SPARK-6948] [MLLIB] compress vectors in VectorAssembler
The compression is based on storage. brkyvz
Author: Xiangrui Meng m...@databricks.com
Closes #5985 from mengxr/SPARK-6948 and squashes the following commits:
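"Based on storage" means the output vector is stored in whichever representation, dense or sparse, takes less space. A plain-Python sketch of that choice; the cost model below (one slot per dense value, an index/value pair per sparse nonzero) is an illustrative simplification, not MLlib's exact sizing rule:

```python
def compressed(values):
    """Pick the cheaper representation for a vector (illustrative costs)."""
    nnz = sum(1 for v in values if v != 0.0)
    dense_cost = len(values)   # one stored value per slot
    sparse_cost = 2 * nnz      # one (index, value) pair per nonzero
    if sparse_cost < dense_cost:
        return ("sparse", [(i, v) for i, v in enumerate(values) if v != 0.0])
    return ("dense", list(values))

print(compressed([0.0, 0.0, 3.0, 0.0])[0])  # sparse
print(compressed([1.0, 2.0])[0])            # dense
```

Under this model sparse storage wins roughly when fewer than half the entries are nonzero, which is the intuition behind compressing assembled feature vectors.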
Repository: spark
Updated Branches:
refs/heads/master 88717ee4e -> 347a329a3
[SPARK-7328] [MLLIB] [PYSPARK] Pyspark.mllib.linalg.Vectors: Missing items
Add
1. Class methods squared_dist
3. parse
4. norm
5. numNonzeros
6. copy
I made a few vectorizations wrt squared_dist and dot as well. I
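The helpers named in the list above have straightforward definitions. As plain-Python sketches (the names mirror the commit's list; the bodies are illustrative, not the `pyspark.mllib.linalg.Vectors` implementations):

```python
def squared_dist(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def norm(v, p=2.0):
    """p-norm of a vector (Euclidean norm by default)."""
    return sum(abs(x) ** p for x in v) ** (1.0 / p)

def num_nonzeros(v):
    """Count of nonzero entries."""
    return sum(1 for x in v if x != 0)

print(squared_dist([1.0, 2.0], [3.0, 4.0]))  # 8.0
print(norm([3.0, 4.0]))                      # 5.0
```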
Repository: spark
Updated Branches:
refs/heads/branch-1.4 1b742a414 -> 4436e26e4
[SPARK-7328] [MLLIB] [PYSPARK] Pyspark.mllib.linalg.Vectors: Missing items
Add
1. Class methods squared_dist
3. parse
4. norm
5. numNonzeros
6. copy
I made a few vectorizations wrt squared_dist and dot as well. I
Repository: spark
Updated Branches:
refs/heads/master ea3077f19 -> 937ba798c
[SPARK-5281] [SQL] Registering table on RDD is giving MissingRequirementError
Go through the context classloader when reflecting on user types in
ScalaReflection.
Replaced calls to `typeOf` with
Repository: spark
Updated Branches:
refs/heads/branch-1.4 7064ea0cd -> 9fd25f7a3
[SPARK-5281] [SQL] Registering table on RDD is giving MissingRequirementError
Go through the context classloader when reflecting on user types in
ScalaReflection.
Replaced calls to `typeOf` with
Repository: spark
Updated Branches:
refs/heads/master 937ba798c -> 35f0173b8
[SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for CASE a WHEN b
THEN c * END
Avoid translating to CaseWhen, which would evaluate the key expression many times.
Author: Wenchen Fan cloud0...@outlook.com
Closes
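The point of `CaseKeyWhen` is that in `CASE key WHEN a THEN b ... END` the key expression is evaluated once and then compared against each branch, rather than being re-evaluated per branch as a desugaring into plain `CaseWhen` would do. A plain-Python sketch of that evaluation strategy (names are illustrative, not the Catalyst expression API):

```python
def case_key_when(key_fn, branches, default=None):
    """CASE key WHEN a THEN b ... ELSE default END, key evaluated once."""
    key = key_fn()  # evaluate the key expression a single time
    for when_value, then_value in branches:
        if key == when_value:
            return then_value
    return default

calls = []
def key():
    calls.append(1)   # count how often the key expression runs
    return 2

result = case_key_when(key, [(1, "one"), (2, "two"), (3, "three")], "other")
print(result, len(calls))  # two 1
```

Even with three WHEN branches, the key function runs exactly once, which is the evaluation the patch preserves.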
Repository: spark
Updated Branches:
refs/heads/branch-1.4 9fd25f7a3 -> 622a0c51c
[SPARK-2155] [SQL] [WHEN D THEN E] [ELSE F] add CaseKeyWhen for CASE a WHEN b
THEN c * END
Avoid translating to CaseWhen, which would evaluate the key expression many times.
Author: Wenchen Fan cloud0...@outlook.com
Repository: spark
Updated Branches:
refs/heads/master 4f87e9562 -> ed9be06a4
[SPARK-7330] [SQL] avoid NPE at jdbc rdd
Thanks to nadavoosh for pointing this out in #5590
Author: Daoyuan Wang daoyuan.w...@intel.com
Closes #5877 from adrian-wang/jdbcrdd and squashes the following commits:
cc11900
Repository: spark
Updated Branches:
refs/heads/branch-1.4 91ce13109 -> 84ee348bc
[SPARK-7330] [SQL] avoid NPE at jdbc rdd
Thanks to nadavoosh for pointing this out in #5590
Author: Daoyuan Wang daoyuan.w...@intel.com
Closes #5877 from adrian-wang/jdbcrdd and squashes the following commits:
cc11900
Repository: spark
Updated Branches:
refs/heads/branch-1.3 cbf232daa -> edcd3643a
[SPARK-7330] [SQL] avoid NPE at jdbc rdd
Thanks to nadavoosh for pointing this out in #5590
Author: Daoyuan Wang daoyuan.w...@intel.com
Closes #5877 from adrian-wang/jdbcrdd and squashes the following commits:
cc11900
Repository: spark
Updated Branches:
refs/heads/branch-1.4 84ee348bc -> 6b9737a83
[SPARK-7388] [SPARK-7383] wrapper for VectorAssembler in Python
The wrapper required the implementation of the `ArrayParam`, because `Array[T]`
is hard to obtain from Python. `ArrayParam` has an extra function