spark git commit: [SPARK-8683] [BUILD] Depend on mockito-core instead of mockito-all

2015-06-28 Thread joshrosen
Repository: spark
Updated Branches:
  refs/heads/master 42db3a1c2 -> f51004519


[SPARK-8683] [BUILD] Depend on mockito-core instead of mockito-all

Spark's tests currently depend on `mockito-all`, which bundles Hamcrest and 
Objenesis classes. Instead, it should depend on `mockito-core`, which declares 
those libraries as Maven dependencies. This is necessary in order to fix a 
dependency conflict that leads to a NoSuchMethodError when using certain 
Hamcrest matchers.

See https://github.com/mockito/mockito/wiki/Declaring-mockito-dependency for 
more details.
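
For illustration only (not code from this patch), the kind of JUnit assertion
that can hit the conflict: mockito-all's bundled (older) Hamcrest classes can
shadow the Hamcrest 1.3 classes that JUnit expects.

    import org.hamcrest.Matchers.equalTo
    import org.junit.Assert.assertThat

    // With mismatched Hamcrest classes on the classpath, a failing assertion
    // here can surface as
    //   java.lang.NoSuchMethodError: org.hamcrest.Matcher.describeMismatch
    // instead of a normal assertion failure.
    assertThat("actual", equalTo("expected"))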

Author: Josh Rosen joshro...@databricks.com

Closes #7061 from JoshRosen/mockito-core-instead-of-all and squashes the 
following commits:

70eccbe [Josh Rosen] Depend on mockito-core instead of mockito-all.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f5100451
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f5100451
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f5100451

Branch: refs/heads/master
Commit: f51004519c4c4915711fb9992e3aa4f05fd143ec
Parents: 42db3a1
Author: Josh Rosen joshro...@databricks.com
Authored: Sat Jun 27 23:27:52 2015 -0700
Committer: Josh Rosen joshro...@databricks.com
Committed: Sat Jun 27 23:27:52 2015 -0700

--
 LICENSE| 2 +-
 core/pom.xml   | 2 +-
 extras/kinesis-asl/pom.xml | 2 +-
 launcher/pom.xml   | 2 +-
 mllib/pom.xml  | 2 +-
 network/common/pom.xml | 2 +-
 network/shuffle/pom.xml| 2 +-
 pom.xml| 2 +-
 repl/pom.xml   | 2 +-
 unsafe/pom.xml | 2 +-
 yarn/pom.xml   | 2 +-
 11 files changed, 11 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/LICENSE
--
diff --git a/LICENSE b/LICENSE
index 42010d9..8672be5 100644
--- a/LICENSE
+++ b/LICENSE
@@ -948,6 +948,6 @@ The following components are provided under the MIT License. See project link fo
  (MIT License) SLF4J LOG4J-12 Binding (org.slf4j:slf4j-log4j12:1.7.5 - http://www.slf4j.org)
  (MIT License) pyrolite (org.spark-project:pyrolite:2.0.1 - http://pythonhosted.org/Pyro4/)
  (MIT License) scopt (com.github.scopt:scopt_2.10:3.2.0 - https://github.com/scopt/scopt)
- (The MIT License) Mockito (org.mockito:mockito-all:1.8.5 - http://www.mockito.org)
+ (The MIT License) Mockito (org.mockito:mockito-core:1.8.5 - http://www.mockito.org)
  (MIT License) jquery (https://jquery.org/license/)
  (MIT License) AnchorJS (https://github.com/bryanbraun/anchorjs)

http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/core/pom.xml
--
diff --git a/core/pom.xml b/core/pom.xml
index 40a64be..565437c 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -354,7 +354,7 @@
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
+      <artifactId>mockito-core</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>

http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/extras/kinesis-asl/pom.xml
--
diff --git a/extras/kinesis-asl/pom.xml b/extras/kinesis-asl/pom.xml
index c6f60bc..c242e7a 100644
--- a/extras/kinesis-asl/pom.xml
+++ b/extras/kinesis-asl/pom.xml
@@ -66,7 +66,7 @@
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
+      <artifactId>mockito-core</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>

http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/launcher/pom.xml
--
diff --git a/launcher/pom.xml b/launcher/pom.xml
index 48dd0d5..a853e67 100644
--- a/launcher/pom.xml
+++ b/launcher/pom.xml
@@ -49,7 +49,7 @@
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
+      <artifactId>mockito-core</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>

http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/mllib/pom.xml
--
diff --git a/mllib/pom.xml b/mllib/pom.xml
index b16058d..a5db144 100644
--- a/mllib/pom.xml
+++ b/mllib/pom.xml
@@ -106,7 +106,7 @@
     </dependency>
     <dependency>
       <groupId>org.mockito</groupId>
-      <artifactId>mockito-all</artifactId>
+      <artifactId>mockito-core</artifactId>
       <scope>test</scope>
     </dependency>
     <dependency>

http://git-wip-us.apache.org/repos/asf/spark/blob/f5100451/network/common/pom.xml
--
diff --git 

spark git commit: [SPARK-8649] [BUILD] Mapr repository is not defined properly

2015-06-28 Thread pwendell
Repository: spark
Updated Branches:
  refs/heads/master f51004519 -> 52d128180


[SPARK-8649] [BUILD] Mapr repository is not defined properly

The previous committer on this part was pwendell.

The previous URL gives a 404; the new one seems to be OK.

This patch is added under the Apache License 2.0.

The JIRA link: https://issues.apache.org/jira/browse/SPARK-8649

Author: Thomas Szymanski deve...@tszymanski.com

Closes #7054 from tszym/SPARK-8649 and squashes the following commits:

bfda9c4 [Thomas Szymanski] [SPARK-8649] [BUILD] Mapr repository is not defined 
properly


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/52d12818
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/52d12818
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/52d12818

Branch: refs/heads/master
Commit: 52d128180166280af443fae84ac61386f3d6c500
Parents: f510045
Author: Thomas Szymanski deve...@tszymanski.com
Authored: Sun Jun 28 01:06:49 2015 -0700
Committer: Patrick Wendell patr...@databricks.com
Committed: Sun Jun 28 01:06:49 2015 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/52d12818/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 1aa7024..00f5016 100644
--- a/pom.xml
+++ b/pom.xml
@@ -248,7 +248,7 @@
     <repository>
       <id>mapr-repo</id>
       <name>MapR Repository</name>
-      <url>http://repository.mapr.com/maven</url>
+      <url>http://repository.mapr.com/maven/</url>
       <releases>
         <enabled>true</enabled>
       </releases>


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [HOTFIX] Fix pull request builder bug in #6967

2015-06-28 Thread joshrosen
Repository: spark
Updated Branches:
  refs/heads/master 40648c56c -> 42db3a1c2


[HOTFIX] Fix pull request builder bug in #6967


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/42db3a1c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/42db3a1c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/42db3a1c

Branch: refs/heads/master
Commit: 42db3a1c2fb6db61e01756be7fe88c4110ae638e
Parents: 40648c5
Author: Josh Rosen joshro...@databricks.com
Authored: Sat Jun 27 23:07:20 2015 -0700
Committer: Josh Rosen joshro...@databricks.com
Committed: Sat Jun 27 23:07:20 2015 -0700

--
 dev/run-tests.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/42db3a1c/dev/run-tests.py
--
diff --git a/dev/run-tests.py b/dev/run-tests.py
index c51b0d3..3533e0c 100755
--- a/dev/run-tests.py
+++ b/dev/run-tests.py
@@ -365,7 +365,7 @@ def run_python_tests(test_modules):
 
     command = [os.path.join(SPARK_HOME, "python", "run-tests")]
     if test_modules != [modules.root]:
-        command.append("--modules=%s" % ','.join(m.name for m in modules))
+        command.append("--modules=%s" % ','.join(m.name for m in test_modules))
     run_cmd(command)
 
 
 


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[1/2] spark git commit: [SPARK-8610] [SQL] Separate Row and InternalRow (part 2)

2015-06-28 Thread davies
Repository: spark
Updated Branches:
  refs/heads/master 52d128180 -> 77da5be6f


http://git-wip-us.apache.org/repos/asf/spark/blob/77da5be6/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveNativeCommand.scala
--
diff --git 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveNativeCommand.scala
 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveNativeCommand.scala
index 87f8e3f..41b645b 100644
--- 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveNativeCommand.scala
+++ 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveNativeCommand.scala
@@ -17,11 +17,11 @@
 
 package org.apache.spark.sql.hive.execution
 
-import org.apache.spark.sql.catalyst.expressions.{AttributeReference, InternalRow}
+import org.apache.spark.sql.catalyst.expressions.AttributeReference
 import org.apache.spark.sql.execution.RunnableCommand
 import org.apache.spark.sql.hive.HiveContext
-import org.apache.spark.sql.SQLContext
 import org.apache.spark.sql.types.StringType
+import org.apache.spark.sql.{Row, SQLContext}
 
 private[hive]
 case class HiveNativeCommand(sql: String) extends RunnableCommand {
@@ -29,6 +29,6 @@ case class HiveNativeCommand(sql: String) extends RunnableCommand {
   override def output: Seq[AttributeReference] =
     Seq(AttributeReference("result", StringType, nullable = false)())
 
-  override def run(sqlContext: SQLContext): Seq[InternalRow] =
-    sqlContext.asInstanceOf[HiveContext].runSqlHive(sql).map(InternalRow(_))
+  override def run(sqlContext: SQLContext): Seq[Row] =
+    sqlContext.asInstanceOf[HiveContext].runSqlHive(sql).map(Row(_))
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/77da5be6/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala
--
diff --git 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala
 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala
index 1f5e4af..f4c8c9a 100644
--- 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala
+++ 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveTableScan.scala
@@ -123,7 +123,7 @@ case class HiveTableScan(
 
     // Only partitioned values are needed here, since the predicate has already been bound to
     // partition key attribute references.
-    val row = new GenericRow(castedValues.toArray)
+    val row = InternalRow.fromSeq(castedValues)
     shouldKeep.eval(row).asInstanceOf[Boolean]
   }
 }

http://git-wip-us.apache.org/repos/asf/spark/blob/77da5be6/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
--
diff --git 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
index 9d8872a..6118880 100644
--- 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
+++ 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/ScriptTransformation.scala
@@ -129,11 +129,11 @@ case class ScriptTransformation(
         val prevLine = curLine
         curLine = reader.readLine()
         if (!ioschema.schemaLess) {
-          new GenericRow(CatalystTypeConverters.convertToCatalyst(
+          new GenericInternalRow(CatalystTypeConverters.convertToCatalyst(
             prevLine.split(ioschema.outputRowFormatMap("TOK_TABLEROWFORMATFIELD")))
             .asInstanceOf[Array[Any]])
         } else {
-          new GenericRow(CatalystTypeConverters.convertToCatalyst(
+          new GenericInternalRow(CatalystTypeConverters.convertToCatalyst(
            prevLine.split(ioschema.outputRowFormatMap("TOK_TABLEROWFORMATFIELD"), 2))
             .asInstanceOf[Array[Any]])
         }
@@ -167,7 +167,8 @@ case class ScriptTransformation(
 
           outputStream.write(data)
         } else {
-          val writable = inputSerde.serialize(row.asInstanceOf[GenericRow].values, inputSoi)
+          val writable = inputSerde.serialize(
+            row.asInstanceOf[GenericInternalRow].values, inputSoi)
           prepareWritable(writable).write(dataOutputStream)
         }
       }

http://git-wip-us.apache.org/repos/asf/spark/blob/77da5be6/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala
--
diff --git 
a/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala 
b/sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/commands.scala
index aad58bf..71fa3e9 100644
--- 

[2/2] spark git commit: [SPARK-8610] [SQL] Separate Row and InternalRow (part 2)

2015-06-28 Thread davies
[SPARK-8610] [SQL] Separate Row and InternalRow (part 2)

Currently, we use GenericRow both for Row and InternalRow, which is confusing
because it could contain Scala types as well as Catalyst types.

This PR changes it to use GenericInternalRow for InternalRow (contains Catalyst
types) and GenericRow for Row (contains Scala types).

Also fixes some incorrect use of InternalRow or Row.
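
As a rough sketch (not code from this patch) of the boundary being enforced,
assuming the 1.4-era packages:

    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.InternalRow

    // External API rows hold plain Scala/Java values...
    val external: Row = Row("hello", 42)
    // ...while execution-path rows hold Catalyst representations and are
    // built via InternalRow.fromSeq / GenericInternalRow, no longer via
    // GenericRow.
    val internal: InternalRow = InternalRow.fromSeq(Seq(42))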

Author: Davies Liu dav...@databricks.com

Closes #7003 from davies/internalrow and squashes the following commits:

d05866c [Davies Liu] fix test: rollback changes for pyspark
72878dd [Davies Liu] Merge branch 'master' of github.com:apache/spark into 
internalrow
efd0b25 [Davies Liu] fix copy of MutableRow
87b13cf [Davies Liu] fix test
d2ebd72 [Davies Liu] fix style
eb4b473 [Davies Liu] mark expensive API as final
bd4e99c [Davies Liu] Merge branch 'master' of github.com:apache/spark into 
internalrow
bdfb78f [Davies Liu] remove BaseMutableRow
6f99a97 [Davies Liu] fix catalyst test
defe931 [Davies Liu] remove BaseRow
288b31f [Davies Liu] Merge branch 'master' of github.com:apache/spark into 
internalrow
9d24350 [Davies Liu] separate Row and InternalRow (part 2)


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/77da5be6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/77da5be6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/77da5be6

Branch: refs/heads/master
Commit: 77da5be6f11a7e9cb1d44f7fb97b93481505afe8
Parents: 52d1281
Author: Davies Liu dav...@databricks.com
Authored: Sun Jun 28 08:03:58 2015 -0700
Committer: Davies Liu dav...@databricks.com
Committed: Sun Jun 28 08:03:58 2015 -0700

--
 .../org/apache/spark/sql/BaseMutableRow.java|  68 ---
 .../main/java/org/apache/spark/sql/BaseRow.java | 197 ---
 .../sql/catalyst/expressions/UnsafeRow.java |  19 +-
 .../main/scala/org/apache/spark/sql/Row.scala   |  41 ++--
 .../sql/catalyst/CatalystTypeConverters.scala   |   4 +-
 .../apache/spark/sql/catalyst/InternalRow.scala |  40 ++--
 .../sql/catalyst/expressions/Projection.scala   |  50 +
 .../expressions/SpecificMutableRow.scala|   2 +-
 .../codegen/GenerateMutableProjection.scala |   2 +-
 .../codegen/GenerateProjection.scala|  16 +-
 .../sql/catalyst/expressions/generators.scala   |  12 +-
 .../spark/sql/catalyst/expressions/rows.scala   | 149 +++---
 .../expressions/ExpressionEvalHelper.scala  |   4 +-
 .../UnsafeFixedWidthAggregationMapSuite.scala   |   6 +-
 .../scala/org/apache/spark/sql/SQLContext.scala |  24 ++-
 .../apache/spark/sql/columnar/ColumnType.scala  |  70 +++
 .../columnar/InMemoryColumnarTableScan.scala|   3 +-
 .../sql/execution/SparkSqlSerializer.scala  |  21 +-
 .../sql/execution/SparkSqlSerializer2.scala |   5 +-
 .../spark/sql/execution/SparkStrategies.scala   |   3 +-
 .../sql/execution/joins/HashOuterJoin.scala |   4 +-
 .../apache/spark/sql/execution/pythonUdfs.scala |   4 +-
 .../sql/execution/stat/StatFunctions.scala  |   3 +-
 .../org/apache/spark/sql/jdbc/JDBCRDD.scala |   2 +-
 .../spark/sql/parquet/ParquetConverter.scala|   8 +-
 .../org/apache/spark/sql/sources/commands.scala |   6 +-
 .../sql/ScalaReflectionRelationSuite.scala  |   7 +-
 .../apache/spark/sql/sources/DDLTestSuite.scala |   2 +-
 .../spark/sql/sources/TableScanSuite.scala  |   4 +-
 .../apache/spark/sql/hive/HiveInspectors.scala  |   5 +-
 .../org/apache/spark/sql/hive/TableReader.scala |   3 +-
 .../hive/execution/CreateTableAsSelect.scala|  14 +-
 .../execution/DescribeHiveTableCommand.scala|   8 +-
 .../sql/hive/execution/HiveNativeCommand.scala  |   8 +-
 .../sql/hive/execution/HiveTableScan.scala  |   2 +-
 .../hive/execution/ScriptTransformation.scala   |   7 +-
 .../spark/sql/hive/execution/commands.scala |  37 ++--
 .../apache/spark/sql/hive/orc/OrcRelation.scala |  10 +-
 .../spark/sql/hive/HiveInspectorSuite.scala |   4 +-
 39 files changed, 299 insertions(+), 575 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/77da5be6/sql/catalyst/src/main/java/org/apache/spark/sql/BaseMutableRow.java
--
diff --git 
a/sql/catalyst/src/main/java/org/apache/spark/sql/BaseMutableRow.java 
b/sql/catalyst/src/main/java/org/apache/spark/sql/BaseMutableRow.java
deleted file mode 100644
index acec2bf..000
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/BaseMutableRow.java
+++ /dev/null
@@ -1,68 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the 

spark git commit: [SPARK-8596] [EC2] Added port for Rstudio

2015-06-28 Thread shivaram
Repository: spark
Updated Branches:
  refs/heads/master ec7843819 -> 9ce78b434


[SPARK-8596] [EC2] Added port for Rstudio

This would otherwise need to be set manually by R users in AWS.

https://issues.apache.org/jira/browse/SPARK-8596

Author: Vincent D. Warmerdam vincentwarmer...@gmail.com
Author: vincent vincentwarmer...@gmail.com

Closes #7068 from koaning/rstudio-port-number and squashes the following 
commits:

ac8100d [vincent] Update spark_ec2.py
ce6ad88 [Vincent D. Warmerdam] added port number for rstudio


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9ce78b43
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9ce78b43
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9ce78b43

Branch: refs/heads/master
Commit: 9ce78b4343febe87c4edd650c698cc20d38f615d
Parents: ec78438
Author: Vincent D. Warmerdam vincentwarmer...@gmail.com
Authored: Sun Jun 28 13:33:33 2015 -0700
Committer: Shivaram Venkataraman shiva...@cs.berkeley.edu
Committed: Sun Jun 28 13:33:33 2015 -0700

--
 ec2/spark_ec2.py | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/9ce78b43/ec2/spark_ec2.py
--
diff --git a/ec2/spark_ec2.py b/ec2/spark_ec2.py
index e4932cf..18ccbc0 100755
--- a/ec2/spark_ec2.py
+++ b/ec2/spark_ec2.py
@@ -505,6 +505,8 @@ def launch_cluster(conn, opts, cluster_name):
         master_group.authorize('tcp', 50070, 50070, authorized_address)
         master_group.authorize('tcp', 60070, 60070, authorized_address)
         master_group.authorize('tcp', 4040, 4045, authorized_address)
+        # Rstudio (GUI for R) needs port 8787 for web access
+        master_group.authorize('tcp', 8787, 8787, authorized_address)
         # HDFS NFS gateway requires 111,2049,4242 for tcp & udp
         master_group.authorize('tcp', 111, 111, authorized_address)
         master_group.authorize('udp', 111, 111, authorized_address)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-8677] [SQL] Fix non-terminating decimal expansion for decimal divide operation

2015-06-28 Thread davies
Repository: spark
Updated Branches:
  refs/heads/master 9ce78b434 -> 24fda7381


[SPARK-8677] [SQL] Fix non-terminating decimal expansion for decimal divide 
operation

JIRA: https://issues.apache.org/jira/browse/SPARK-8677
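
For context, a minimal sketch of the underlying java.math.BigDecimal behavior
that the fix relies on (illustrative, not Spark code):

    import java.math.{BigDecimal => JBigDecimal}

    val one = new JBigDecimal(1)
    val three = new JBigDecimal(3)
    // one.divide(three) throws ArithmeticException: 1/3 has no terminating
    // decimal expansion. Supplying a scale and rounding mode terminates it:
    val q = one.divide(three, 3, JBigDecimal.ROUND_HALF_UP)  // 0.333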

Author: Liang-Chi Hsieh vii...@gmail.com

Closes #7056 from viirya/fix_decimal3 and squashes the following commits:

34d7419 [Liang-Chi Hsieh] Fix Non-terminating decimal expansion for decimal 
divide operation.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/24fda738
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/24fda738
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/24fda738

Branch: refs/heads/master
Commit: 24fda7381171738cbbbacb5965393b660763e562
Parents: 9ce78b4
Author: Liang-Chi Hsieh vii...@gmail.com
Authored: Sun Jun 28 14:48:44 2015 -0700
Committer: Davies Liu dav...@databricks.com
Committed: Sun Jun 28 14:48:44 2015 -0700

--
 .../main/scala/org/apache/spark/sql/types/Decimal.scala  | 11 +--
 .../apache/spark/sql/types/decimal/DecimalSuite.scala|  5 +
 2 files changed, 14 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/24fda738/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
--
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
index bd9823b..5a16948 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/types/Decimal.scala
@@ -265,8 +265,15 @@ final class Decimal extends Ordered[Decimal] with Serializable {
 
   def * (that: Decimal): Decimal = Decimal(toBigDecimal * that.toBigDecimal)
 
-  def / (that: Decimal): Decimal =
-    if (that.isZero) null else Decimal(toBigDecimal / that.toBigDecimal)
+  def / (that: Decimal): Decimal = {
+    if (that.isZero) {
+      null
+    } else {
+      // To avoid non-terminating decimal expansion problem, we turn to Java BigDecimal's divide
+      // with specified ROUNDING_MODE.
+      Decimal(toJavaBigDecimal.divide(that.toJavaBigDecimal, ROUNDING_MODE.id))
+    }
+  }
 
   def % (that: Decimal): Decimal =
     if (that.isZero) null else Decimal(toBigDecimal % that.toBigDecimal)

http://git-wip-us.apache.org/repos/asf/spark/blob/24fda738/sql/catalyst/src/test/scala/org/apache/spark/sql/types/decimal/DecimalSuite.scala
--
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/types/decimal/DecimalSuite.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/types/decimal/DecimalSuite.scala
index ccc29c0..5f31296 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/types/decimal/DecimalSuite.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/types/decimal/DecimalSuite.scala
@@ -167,4 +167,9 @@ class DecimalSuite extends SparkFunSuite with PrivateMethodTester {
     val decimal = (Decimal(Long.MaxValue, 38, 0) * Decimal(Long.MaxValue, 38, 0)).toJavaBigDecimal
     assert(decimal.unscaledValue.toString === "85070591730234615847396907784232501249")
   }
+
+  test("fix non-terminating decimal expansion problem") {
+    val decimal = Decimal(1.0, 10, 3) / Decimal(3.0, 10, 3)
+    assert(decimal.toString === "0.333")
+  }
 }


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-8575] [SQL] Deprecate callUDF in favor of udf

2015-06-28 Thread meng
Repository: spark
Updated Branches:
  refs/heads/master dfde31da5 -> 0b10662fe


[SPARK-8575] [SQL] Deprecate callUDF in favor of udf

Follow up of [SPARK-8356](https://issues.apache.org/jira/browse/SPARK-8356) and 
#6902.
Removes the unit test for the now deprecated ```callUdf```
Unit test in SQLQuerySuite now uses ```udf``` instead of ```callUDF```
Replaced ```callUDF``` by ```udf``` where possible in mllib
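
For illustration, the shape of the migration (a sketch, assuming a DataFrame
`df` with a numeric column `x`):

    import org.apache.spark.sql.functions.{col, udf}

    // Before: callUDF((x: Double) => x + 1, DoubleType, col("x")) -- the
    // return type had to be spelled out. After: udf infers it from the
    // Scala function.
    val plusOne = udf { (x: Double) => x + 1 }
    val result = df.withColumn("y", plusOne(col("x")))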

Author: BenFradet benjamin.fra...@gmail.com

Closes #6993 from BenFradet/SPARK-8575 and squashes the following commits:

26f5a7a [BenFradet] 2 spaces instead of 1
1ddb452 [BenFradet] renamed initUDF in order to be consistent in OneVsRest
48ca15e [BenFradet] used vector type tag for udf call in VectorIndexer
0ebd0da [BenFradet] replace the now deprecated callUDF by udf in VectorIndexer
8013409 [BenFradet] replaced the now deprecated callUDF by udf in Predictor
94345b5 [BenFradet] unifomized udf calls in ProbabilisticClassifier
1305492 [BenFradet] uniformized udf calls in Classifier
a672228 [BenFradet] uniformized udf calls in OneVsRest
49e4904 [BenFradet] Revert removal of the unit test for the now deprecated 
callUdf
bbdeaf3 [BenFradet] fixed syntax for init udf in OneVsRest
fe2a10b [BenFradet] callUDF = udf in ProbabilisticClassifier
0ea30b3 [BenFradet] callUDF = udf in Classifier where possible
197ec82 [BenFradet] callUDF = udf in OneVsRest
84d6780 [BenFradet] modified unit test in SQLQuerySuite to use udf instead of 
callUDF
477709f [BenFradet] removal of the unit test for the now deprecated callUdf


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/0b10662f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/0b10662f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/0b10662f

Branch: refs/heads/master
Commit: 0b10662fef11a56f82144b4953d457738e6961ae
Parents: dfde31d
Author: BenFradet benjamin.fra...@gmail.com
Authored: Sun Jun 28 22:43:47 2015 -0700
Committer: Xiangrui Meng m...@databricks.com
Committed: Sun Jun 28 22:43:47 2015 -0700

--
 .../scala/org/apache/spark/ml/Predictor.scala   |  9 ---
 .../spark/ml/classification/Classifier.scala| 13 +++---
 .../spark/ml/classification/OneVsRest.scala | 27 +---
 .../ProbabilisticClassifier.scala   | 22 +++-
 .../apache/spark/ml/feature/VectorIndexer.scala |  5 ++--
 .../org/apache/spark/sql/SQLQuerySuite.scala|  5 ++--
 6 files changed, 46 insertions(+), 35 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/0b10662f/mllib/src/main/scala/org/apache/spark/ml/Predictor.scala
--
diff --git a/mllib/src/main/scala/org/apache/spark/ml/Predictor.scala 
b/mllib/src/main/scala/org/apache/spark/ml/Predictor.scala
index edaa2af..333b427 100644
--- a/mllib/src/main/scala/org/apache/spark/ml/Predictor.scala
+++ b/mllib/src/main/scala/org/apache/spark/ml/Predictor.scala
@@ -122,9 +122,7 @@ abstract class Predictor[
    */
   protected def extractLabeledPoints(dataset: DataFrame): RDD[LabeledPoint] = {
     dataset.select($(labelCol), $(featuresCol))
-      .map { case Row(label: Double, features: Vector) =>
-        LabeledPoint(label, features)
-      }
+      .map { case Row(label: Double, features: Vector) => LabeledPoint(label, features) }
   }
 }
 
@@ -171,7 +169,10 @@ abstract class PredictionModel[FeaturesType, M <: PredictionModel[FeaturesType,
   override def transform(dataset: DataFrame): DataFrame = {
     transformSchema(dataset.schema, logging = true)
     if ($(predictionCol).nonEmpty) {
-      dataset.withColumn($(predictionCol), callUDF(predict _, DoubleType, col($(featuresCol))))
+      val predictUDF = udf { (features: Any) =>
+        predict(features.asInstanceOf[FeaturesType])
+      }
+      dataset.withColumn($(predictionCol), predictUDF(col($(featuresCol))))
     } else {
       this.logWarning(s"$uid: Predictor.transform() was called as NOOP" +
         " since no output columns were set.")

http://git-wip-us.apache.org/repos/asf/spark/blob/0b10662f/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala
--
diff --git 
a/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala 
b/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala
index 14c285d..85c097b 100644
--- a/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala
+++ b/mllib/src/main/scala/org/apache/spark/ml/classification/Classifier.scala
@@ -102,15 +102,20 @@ abstract class ClassificationModel[FeaturesType, M <: ClassificationModel[Featur
     var outputData = dataset
     var numColsOutput = 0
     if (getRawPredictionCol != "") {
-      outputData = outputData.withColumn(getRawPredictionCol,
-

spark git commit: [SPARK-5962] [MLLIB] Python support for Power Iteration Clustering

2015-06-28 Thread meng
Repository: spark
Updated Branches:
  refs/heads/master 25f574eb9 -> dfde31da5


[SPARK-5962] [MLLIB] Python support for Power Iteration Clustering

Python support for Power Iteration Clustering
https://issues.apache.org/jira/browse/SPARK-5962
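
The Python API mirrors the existing Scala one; a minimal sketch of the Scala
entry point that the new Py4J stub drives (assuming `similarities` is an
RDD[(Long, Long, Double)] affinity matrix):

    import org.apache.spark.mllib.clustering.PowerIterationClustering

    val pic = new PowerIterationClustering()
      .setK(2)                         // number of clusters
      .setMaxIterations(10)            // power iteration loop bound
      .setInitializationMode("random") // or "degree"
    val model = pic.run(similarities)
    model.assignments.foreach(a => println(s"${a.id} -> ${a.cluster}"))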

Author: Yanbo Liang yblia...@gmail.com

Closes #6992 from yanboliang/pyspark-pic and squashes the following commits:

6b03d82 [Yanbo Liang] address comments
4be4423 [Yanbo Liang] Python support for Power Iteration Clustering


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/dfde31da
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/dfde31da
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/dfde31da

Branch: refs/heads/master
Commit: dfde31da5ce30e0d44cad4fb6618b44d5353d946
Parents: 25f574e
Author: Yanbo Liang yblia...@gmail.com
Authored: Sun Jun 28 22:38:04 2015 -0700
Committer: Xiangrui Meng m...@databricks.com
Committed: Sun Jun 28 22:38:04 2015 -0700

--
 .../PowerIterationClusteringModelWrapper.scala  | 32 +++
 .../spark/mllib/api/python/PythonMLLibAPI.scala | 27 ++
 python/pyspark/mllib/clustering.py  | 98 +++-
 3 files changed, 154 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/dfde31da/mllib/src/main/scala/org/apache/spark/mllib/api/python/PowerIterationClusteringModelWrapper.scala
--
diff --git 
a/mllib/src/main/scala/org/apache/spark/mllib/api/python/PowerIterationClusteringModelWrapper.scala
 
b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PowerIterationClusteringModelWrapper.scala
new file mode 100644
index 000..bc6041b
--- /dev/null
+++ 
b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PowerIterationClusteringModelWrapper.scala
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.mllib.api.python
+
+import org.apache.spark.rdd.RDD
+import org.apache.spark.mllib.clustering.PowerIterationClusteringModel
+
+/**
+ * A Wrapper of PowerIterationClusteringModel to provide helper method for Python
+ */
+private[python] class PowerIterationClusteringModelWrapper(model: PowerIterationClusteringModel)
+  extends PowerIterationClusteringModel(model.k, model.assignments) {
+
+  def getAssignments: RDD[Array[Any]] = {
+    model.assignments.map(x => Array(x.id, x.cluster))
+  }
+}

http://git-wip-us.apache.org/repos/asf/spark/blob/dfde31da/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
--
diff --git 
a/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala 
b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
index b16903a..a66a404 100644
--- 
a/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
+++ 
b/mllib/src/main/scala/org/apache/spark/mllib/api/python/PythonMLLibAPI.scala
@@ -407,6 +407,33 @@ private[python] class PythonMLLibAPI extends Serializable {
   }
 
   /**
+   * Java stub for Python mllib PowerIterationClustering.run(). This stub returns a
+   * handle to the Java object instead of the content of the Java object.  Extra care
+   * needs to be taken in the Python code to ensure it gets freed on exit; see the
+   * Py4J documentation.
+   * @param data an RDD of (i, j, s,,ij,,) tuples representing the affinity matrix.
+   * @param k number of clusters.
+   * @param maxIterations maximum number of iterations of the power iteration loop.
+   * @param initMode the initialization mode. This can be either "random" to use
+   *                 a random vector as vertex properties, or "degree" to use
+   *                 normalized sum similarities. Default: random.
+   */
+  def trainPowerIterationClusteringModel(
+      data: JavaRDD[Vector],
+      k: Int,
+      maxIterations: Int,
+      initMode: String): PowerIterationClusteringModel = {
+
+    val pic = new PowerIterationClustering()
+      .setK(k)
+  

spark git commit: [SPARK-7212] [MLLIB] Add sequence learning flag

2015-06-28 Thread meng
Repository: spark
Updated Branches:
  refs/heads/master 00a9d22bd -> 25f574eb9


[SPARK-7212] [MLLIB] Add sequence learning flag

Support mining of ordered frequent item sequences.
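
A minimal usage sketch with the new flag (assuming `sequences` is an
RDD[Array[String]] whose element order within each transaction matters):

    import org.apache.spark.mllib.fpm.FPGrowth

    val fpg = new FPGrowth()
      .setMinSupport(0.3)
      .setOrdered(true) // mine ordered sequences rather than unordered itemsets
    val model = fpg.run(sequences)
    model.freqItemsets.collect().foreach { fi =>
      println(fi.items.mkString("[", ",", "]") + ", " + fi.freq)
    }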

Author: Feynman Liang fli...@databricks.com

Closes #6997 from feynmanliang/fp-sequence and squashes the following commits:

7c14e15 [Feynman Liang] Improve scalatests with R code and Seq
0d3e4b6 [Feynman Liang] Fix python test
ce987cb [Feynman Liang] Backwards compatibility aux constructor
34ef8f2 [Feynman Liang] Fix failing test due to reverse orderering
f04bd50 [Feynman Liang] Naming, add ordered to FreqItemsets, test ordering 
using Seq
648d4d4 [Feynman Liang] Test case for frequent item sequences
252a36a [Feynman Liang] Add sequence learning flag


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/25f574eb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/25f574eb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/25f574eb

Branch: refs/heads/master
Commit: 25f574eb9a3cb9b93b7d9194a8ec16e00ce2c036
Parents: 00a9d22
Author: Feynman Liang fli...@databricks.com
Authored: Sun Jun 28 22:26:07 2015 -0700
Committer: Xiangrui Meng m...@databricks.com
Committed: Sun Jun 28 22:26:07 2015 -0700

--
 .../org/apache/spark/mllib/fpm/FPGrowth.scala   | 38 +++---
 .../apache/spark/mllib/fpm/FPGrowthSuite.scala  | 52 +++-
 python/pyspark/mllib/fpm.py |  4 +-
 3 files changed, 82 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/25f574eb/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala
--
diff --git a/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala 
b/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala
index efa8459..abac080 100644
--- a/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala
+++ b/mllib/src/main/scala/org/apache/spark/mllib/fpm/FPGrowth.scala
@@ -36,7 +36,7 @@ import org.apache.spark.storage.StorageLevel
  * :: Experimental ::
  *
  * Model trained by [[FPGrowth]], which holds frequent itemsets.
- * @param freqItemsets frequent itemset, which is an RDD of [[FreqItemset]]
+ * @param freqItemsets frequent itemsets, which is an RDD of [[FreqItemset]]
  * @tparam Item item type
  */
 @Experimental
@@ -62,13 +62,14 @@ class FPGrowthModel[Item: ClassTag](val freqItemsets: RDD[FreqItemset[Item]]) ex
 @Experimental
 class FPGrowth private (
     private var minSupport: Double,
-    private var numPartitions: Int) extends Logging with Serializable {
+    private var numPartitions: Int,
+    private var ordered: Boolean) extends Logging with Serializable {
 
   /**
    * Constructs a default instance with default parameters {minSupport: `0.3`, numPartitions: same
-   * as the input data}.
+   * as the input data, ordered: `false`}.
    */
-  def this() = this(0.3, -1)
+  def this() = this(0.3, -1, false)
 
   /**
* Sets the minimal support level (default: `0.3`).
@@ -87,6 +88,15 @@ class FPGrowth private (
   }
 
   /**
+   * Indicates whether to mine itemsets (unordered) or sequences (ordered) (default: false, mine
+   * itemsets).
+   */
+  def setOrdered(ordered: Boolean): this.type = {
+    this.ordered = ordered
+    this
+  }
+  }
+
+  /**
* Computes an FP-Growth model that contains frequent itemsets.
* @param data input data set, each element contains a transaction
* @return an [[FPGrowthModel]]
@@ -155,7 +165,7 @@ class FPGrowth private (
       .flatMap { case (part, tree) =>
         tree.extract(minCount, x => partitioner.getPartition(x) == part)
       }.map { case (ranks, count) =>
-        new FreqItemset(ranks.map(i => freqItems(i)).toArray, count)
+        new FreqItemset(ranks.map(i => freqItems(i)).reverse.toArray, count, ordered)
       }
   }
 
@@ -171,9 +181,12 @@ class FPGrowth private (
       itemToRank: Map[Item, Int],
       partitioner: Partitioner): mutable.Map[Int, Array[Int]] = {
     val output = mutable.Map.empty[Int, Array[Int]]
-    // Filter the basket by frequent items pattern and sort their ranks.
+    // Filter the basket by frequent items pattern
     val filtered = transaction.flatMap(itemToRank.get)
-    ju.Arrays.sort(filtered)
+    if (!this.ordered) {
+      ju.Arrays.sort(filtered)
+    }
+    // Generate conditional transactions
     val n = filtered.length
     var i = n - 1
     while (i >= 0) {
@@ -198,9 +211,18 @@ object FPGrowth {
    * Frequent itemset.
    * @param items items in this itemset. Java users should call [[FreqItemset#javaItems]] instead.
    * @param freq frequency
+   * @param ordered indicates if items represents an itemset (false) or sequence (true)
    * @tparam Item item type
    */
-  class FreqItemset[Item](val items: Array[Item], val freq: Long) extends 

svn commit: r1688083 [2/2] - in /spark: news/_posts/ site/ site/graphx/ site/mllib/ site/news/ site/releases/ site/screencasts/ site/sql/ site/streaming/

2015-06-28 Thread pwendell
Modified: spark/site/releases/spark-release-0-3.html
URL: 
http://svn.apache.org/viewvc/spark/site/releases/spark-release-0-3.html?rev=1688083&r1=1688082&r2=1688083&view=diff
==
--- spark/site/releases/spark-release-0-3.html (original)
+++ spark/site/releases/spark-release-0-3.html Mon Jun 29 05:00:10 2015
@@ -134,6 +134,9 @@
   <h5>Latest News</h5>
   <ul class="list-unstyled">
 
+  <li><a href="/news/spark-summit-2015-videos-posted.html">Spark Summit 2015 Videos Posted</a>
+  <span class="small">(Jun 29, 2015)</span></li>
+
   <li><a href="/news/spark-1-4-0-released.html">Spark 1.4.0 released</a>
   <span class="small">(Jun 11, 2015)</span></li>
 
@@ -143,9 +146,6 @@
   <li><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a>
   <span class="small">(May 15, 2015)</span></li>
 
-  <li><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a>
-  <span class="small">(Apr 20, 2015)</span></li>
-
   </ul>
   <p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
 </div>

Modified: spark/site/releases/spark-release-0-5-0.html
URL: 
http://svn.apache.org/viewvc/spark/site/releases/spark-release-0-5-0.html?rev=1688083&r1=1688082&r2=1688083&view=diff
==
--- spark/site/releases/spark-release-0-5-0.html (original)
+++ spark/site/releases/spark-release-0-5-0.html Mon Jun 29 05:00:10 2015
@@ -134,6 +134,9 @@
   <h5>Latest News</h5>
   <ul class="list-unstyled">
 
+  <li><a href="/news/spark-summit-2015-videos-posted.html">Spark Summit 2015 Videos Posted</a>
+  <span class="small">(Jun 29, 2015)</span></li>
+
   <li><a href="/news/spark-1-4-0-released.html">Spark 1.4.0 released</a>
   <span class="small">(Jun 11, 2015)</span></li>
 
@@ -143,9 +146,6 @@
   <li><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a>
   <span class="small">(May 15, 2015)</span></li>
 
-  <li><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a>
-  <span class="small">(Apr 20, 2015)</span></li>
-
   </ul>
   <p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
 </div>

Modified: spark/site/releases/spark-release-0-5-1.html
URL: 
http://svn.apache.org/viewvc/spark/site/releases/spark-release-0-5-1.html?rev=1688083&r1=1688082&r2=1688083&view=diff
==
--- spark/site/releases/spark-release-0-5-1.html (original)
+++ spark/site/releases/spark-release-0-5-1.html Mon Jun 29 05:00:10 2015
@@ -134,6 +134,9 @@
   <h5>Latest News</h5>
   <ul class="list-unstyled">
 
+  <li><a href="/news/spark-summit-2015-videos-posted.html">Spark Summit 2015 Videos Posted</a>
+  <span class="small">(Jun 29, 2015)</span></li>
+
   <li><a href="/news/spark-1-4-0-released.html">Spark 1.4.0 released</a>
   <span class="small">(Jun 11, 2015)</span></li>
 
@@ -143,9 +146,6 @@
   <li><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a>
   <span class="small">(May 15, 2015)</span></li>
 
-  <li><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a>
-  <span class="small">(Apr 20, 2015)</span></li>
-
   </ul>
   <p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
 </div>

Modified: spark/site/releases/spark-release-0-5-2.html
URL: 
http://svn.apache.org/viewvc/spark/site/releases/spark-release-0-5-2.html?rev=1688083&r1=1688082&r2=1688083&view=diff
==
--- spark/site/releases/spark-release-0-5-2.html (original)
+++ spark/site/releases/spark-release-0-5-2.html Mon Jun 29 05:00:10 2015
@@ -134,6 +134,9 @@
   <h5>Latest News</h5>
   <ul class="list-unstyled">
 
+  <li><a href="/news/spark-summit-2015-videos-posted.html">Spark Summit 2015 Videos Posted</a>
+  <span class="small">(Jun 29, 2015)</span></li>
+
   <li><a href="/news/spark-1-4-0-released.html">Spark 1.4.0 released</a>
   <span class="small">(Jun 11, 2015)</span></li>
 
@@ -143,9 +146,6 @@
   <li><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a>
   <span class="small">(May 15, 2015)</span></li>
 
-  <li><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a>
-  <span class="small">(Apr 20, 2015)</span></li>
-
   </ul>
   <p class="small" style="text-align: right;"><a href="/news/index.html">Archive</a></p>
 </div>

Modified: spark/site/releases/spark-release-0-6-0.html
URL: 
http://svn.apache.org/viewvc/spark/site/releases/spark-release-0-6-0.html?rev=1688083&r1=1688082&r2=1688083&view=diff

svn commit: r1688083 [1/2] - in /spark: news/_posts/ site/ site/graphx/ site/mllib/ site/news/ site/releases/ site/screencasts/ site/sql/ site/streaming/

2015-06-28 Thread pwendell
Author: pwendell
Date: Mon Jun 29 05:00:10 2015
New Revision: 1688083

URL: http://svn.apache.org/r1688083
Log:
Adding news item for Spark Summit videos

Added:
spark/news/_posts/2015-06-29-spark-summit-2015-videos-posted.md
spark/site/news/spark-summit-2015-videos-posted.html
Modified:
spark/site/community.html
spark/site/documentation.html
spark/site/downloads.html
spark/site/examples.html
spark/site/faq.html
spark/site/graphx/index.html
spark/site/index.html
spark/site/mailing-lists.html
spark/site/mllib/index.html
spark/site/news/amp-camp-2013-registration-ope.html
spark/site/news/announcing-the-first-spark-summit.html
spark/site/news/fourth-spark-screencast-published.html
spark/site/news/index.html
spark/site/news/nsdi-paper.html
spark/site/news/one-month-to-spark-summit-2015.html
spark/site/news/proposals-open-for-spark-summit-east.html
spark/site/news/registration-open-for-spark-summit-east.html
spark/site/news/run-spark-and-shark-on-amazon-emr.html
spark/site/news/spark-0-6-1-and-0-5-2-released.html
spark/site/news/spark-0-6-2-released.html
spark/site/news/spark-0-7-0-released.html
spark/site/news/spark-0-7-2-released.html
spark/site/news/spark-0-7-3-released.html
spark/site/news/spark-0-8-0-released.html
spark/site/news/spark-0-8-1-released.html
spark/site/news/spark-0-9-0-released.html
spark/site/news/spark-0-9-1-released.html
spark/site/news/spark-0-9-2-released.html
spark/site/news/spark-1-0-0-released.html
spark/site/news/spark-1-0-1-released.html
spark/site/news/spark-1-0-2-released.html
spark/site/news/spark-1-1-0-released.html
spark/site/news/spark-1-1-1-released.html
spark/site/news/spark-1-2-0-released.html
spark/site/news/spark-1-2-1-released.html
spark/site/news/spark-1-2-2-released.html
spark/site/news/spark-1-3-0-released.html
spark/site/news/spark-1-4-0-released.html
spark/site/news/spark-accepted-into-apache-incubator.html
spark/site/news/spark-and-shark-in-the-news.html
spark/site/news/spark-becomes-tlp.html
spark/site/news/spark-featured-in-wired.html
spark/site/news/spark-mailing-lists-moving-to-apache.html
spark/site/news/spark-meetups.html
spark/site/news/spark-screencasts-published.html
spark/site/news/spark-summit-2013-is-a-wrap.html
spark/site/news/spark-summit-2014-videos-posted.html
spark/site/news/spark-summit-agenda-posted.html
spark/site/news/spark-summit-east-2015-videos-posted.html
spark/site/news/spark-summit-east-agenda-posted.html
spark/site/news/spark-summit-europe.html
spark/site/news/spark-tips-from-quantifind.html
spark/site/news/spark-user-survey-and-powered-by-page.html
spark/site/news/spark-version-0-6-0-released.html
spark/site/news/spark-wins-daytona-gray-sort-100tb-benchmark.html
spark/site/news/strata-exercises-now-available-online.html
spark/site/news/submit-talks-to-spark-summit-2014.html
spark/site/news/two-weeks-to-spark-summit-2014.html
spark/site/news/video-from-first-spark-development-meetup.html
spark/site/releases/spark-release-0-3.html
spark/site/releases/spark-release-0-5-0.html
spark/site/releases/spark-release-0-5-1.html
spark/site/releases/spark-release-0-5-2.html
spark/site/releases/spark-release-0-6-0.html
spark/site/releases/spark-release-0-6-1.html
spark/site/releases/spark-release-0-6-2.html
spark/site/releases/spark-release-0-7-0.html
spark/site/releases/spark-release-0-7-2.html
spark/site/releases/spark-release-0-7-3.html
spark/site/releases/spark-release-0-8-0.html
spark/site/releases/spark-release-0-8-1.html
spark/site/releases/spark-release-0-9-0.html
spark/site/releases/spark-release-0-9-1.html
spark/site/releases/spark-release-0-9-2.html
spark/site/releases/spark-release-1-0-0.html
spark/site/releases/spark-release-1-0-1.html
spark/site/releases/spark-release-1-0-2.html
spark/site/releases/spark-release-1-1-0.html
spark/site/releases/spark-release-1-1-1.html
spark/site/releases/spark-release-1-2-0.html
spark/site/releases/spark-release-1-2-1.html
spark/site/releases/spark-release-1-2-2.html
spark/site/releases/spark-release-1-3-0.html
spark/site/releases/spark-release-1-3-1.html
spark/site/releases/spark-release-1-4-0.html
spark/site/research.html
spark/site/screencasts/1-first-steps-with-spark.html
spark/site/screencasts/2-spark-documentation-overview.html
spark/site/screencasts/3-transformations-and-caching.html
spark/site/screencasts/4-a-standalone-job-in-spark.html
spark/site/screencasts/index.html
spark/site/sql/index.html
spark/site/streaming/index.html

Added: spark/news/_posts/2015-06-29-spark-summit-2015-videos-posted.md
URL: 
http://svn.apache.org/viewvc/spark/news/_posts/2015-06-29-spark-summit-2015-videos-posted.md?rev=1688083&view=auto