spark-website git commit: Sync ASF git repo and GitHub

2018-04-17 Thread sameerag
Repository: spark-website
Updated Branches:
  refs/heads/asf-site f050f7e3d -> 0f049fd2e


Sync ASF git repo and GitHub


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/0f049fd2
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/0f049fd2
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/0f049fd2

Branch: refs/heads/asf-site
Commit: 0f049fd2e0d5d5f21f8f0f6a2b9584f1666f5cee
Parents: f050f7e
Author: Sameer Agarwal 
Authored: Tue Apr 17 11:00:08 2018 -0700
Committer: Sameer Agarwal 
Committed: Tue Apr 17 11:00:08 2018 -0700

--

--






svn commit: r25547 - in /dev/spark: 2.3.1-SNAPSHOT-2018_02_28_10_01-a4eb1e4-docs/ 2.3.1-SNAPSHOT-2018_03_01_18_01-2aa66eb-docs/ 2.3.1-SNAPSHOT-2018_03_02_02_01-56cfbd9-docs/ 2.3.1-SNAPSHOT-2018_03_02_

2018-03-06 Thread sameerag
Author: sameerag
Date: Wed Mar  7 06:56:31 2018
New Revision: 25547

Log:
delete doc snapshot

Removed:
dev/spark/2.3.1-SNAPSHOT-2018_02_28_10_01-a4eb1e4-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_01_18_01-2aa66eb-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_02_02_01-56cfbd9-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_02_10_01-8fe20e1-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_02_18_01-f12fa13-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_04_18_01-26a8a67-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_04_22_01-88dd335-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_05_10_01-232b9f8-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_05_14_01-911b83d-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_05_18_01-b9ea2e8-docs/
dev/spark/2.3.1-SNAPSHOT-2018_03_06_10_01-66c1978-docs/
dev/spark/2.4.0-SNAPSHOT-2018_02_28_08_02-fab563b-docs/
dev/spark/2.4.0-SNAPSHOT-2018_02_28_16_01-25c2776-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_01_00_01-22f3d33-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_01_04_01-ff14801-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_01_12_01-cdcccd7-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_01_20_01-34811e0-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_02_00_01-119f6a0-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_02_12_01-3a4d15e-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_02_16_01-487377e-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_02_20_01-486f99e-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_04_16_01-a89cdf5-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_00_01-269cd53-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_05_24-2ce37b5-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_08_01-947b4e6-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_12_01-ba622f4-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_16_01-7706eea-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_05_20_01-8c5b34c-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_06_00_01-ad640a5-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_06_08_01-8bceb89-docs/
dev/spark/2.4.0-SNAPSHOT-2018_03_06_12_01-4c587eb-docs/





[1/2] spark-website git commit: Apache Spark 2.3.0 Release: Update Site Menu

2018-02-28 Thread sameerag
Repository: spark-website
Updated Branches:
  refs/heads/asf-site d1ea0db8e -> fefc3ba29


http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/news/submit-talks-to-spark-summit-eu-2016.html
--
diff --git a/site/news/submit-talks-to-spark-summit-eu-2016.html 
b/site/news/submit-talks-to-spark-summit-eu-2016.html
index 74c8bf0..58ba7a0 100644
--- a/site/news/submit-talks-to-spark-summit-eu-2016.html
+++ b/site/news/submit-talks-to-spark-summit-eu-2016.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/news/two-weeks-to-spark-summit-2014.html
--
diff --git a/site/news/two-weeks-to-spark-summit-2014.html 
b/site/news/two-weeks-to-spark-summit-2014.html
index 4300166..dd83cad 100644
--- a/site/news/two-weeks-to-spark-summit-2014.html
+++ b/site/news/two-weeks-to-spark-summit-2014.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/news/video-from-first-spark-development-meetup.html
--
diff --git a/site/news/video-from-first-spark-development-meetup.html 
b/site/news/video-from-first-spark-development-meetup.html
index 16dae0d..c7e1d3a 100644
--- a/site/news/video-from-first-spark-development-meetup.html
+++ b/site/news/video-from-first-spark-development-meetup.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/powered-by.html
--
diff --git a/site/powered-by.html b/site/powered-by.html
index 49634b3..3ed1c03 100644
--- a/site/powered-by.html
+++ b/site/powered-by.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/release-process.html
--
diff --git a/site/release-process.html b/site/release-process.html
index dc2aff5..e24b6a3 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/releases/spark-release-0-3.html
--
diff --git a/site/releases/spark-release-0-3.html 
b/site/releases/spark-release-0-3.html
index 4e534b6..cc13166 100644
--- a/site/releases/spark-release-0-3.html
+++ b/site/releases/spark-release-0-3.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/releases/spark-release-0-5-0.html
--
diff --git a/site/releases/spark-release-0-5-0.html 
b/site/releases/spark-release-0-5-0.html
index 81236dd..6804373 100644
--- a/site/releases/spark-release-0-5-0.html
+++ b/site/releases/spark-release-0-5-0.html
@@ -106,7 +106,7 @@
   Documentation 
 
 
-  Latest Release (Spark 2.2.1)
+  Latest Release (Spark 2.3.0)
   Older Versions and Other 
Resources
   Frequently Asked Questions
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/fefc3ba2/site/releases/spark-release-0-5-1.html
--
diff --git a/site/releases/spark-release-0-5-1.html 
b/site/releases/spark-release-0-5-1.html
index 6278c31..9c832c2 100644
--- a/site/releases/spark-release-0-5-1.html
+++ b/site/releases/spark-release-0-5-1.html
@@ -106,7 +106,7 @@
   Documentation 
 
   

[2/2] spark-website git commit: Apache Spark 2.3.0 Release: Update Site Menu

2018-02-28 Thread sameerag
Apache Spark 2.3.0 Release: Update Site Menu


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/fefc3ba2
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/fefc3ba2
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/fefc3ba2

Branch: refs/heads/asf-site
Commit: fefc3ba290df48118d7c8494624e5cd27386261d
Parents: d1ea0db
Author: Sameer Agarwal 
Authored: Wed Feb 28 13:46:02 2018 -0800
Committer: Sameer Agarwal 
Committed: Wed Feb 28 13:54:19 2018 -0800

--
 _layouts/global.html| 2 +-
 downloads.md| 6 +++---
 releases/_posts/2018-02-28-spark-release-2-3-0.md   | 2 +-
 site/committers.html| 2 +-
 site/community.html | 2 +-
 site/contributing.html  | 2 +-
 site/developer-tools.html   | 2 +-
 site/documentation.html | 2 +-
 site/downloads.html | 8 
 site/examples.html  | 2 +-
 site/faq.html   | 2 +-
 site/graphx/index.html  | 2 +-
 site/improvement-proposals.html | 2 +-
 site/index.html | 2 +-
 site/mailing-lists.html | 2 +-
 site/mllib/index.html   | 2 +-
 site/news/amp-camp-2013-registration-ope.html   | 2 +-
 site/news/announcing-the-first-spark-summit.html| 2 +-
 site/news/fourth-spark-screencast-published.html| 2 +-
 site/news/index.html| 2 +-
 site/news/nsdi-paper.html   | 2 +-
 site/news/one-month-to-spark-summit-2015.html   | 2 +-
 site/news/proposals-open-for-spark-summit-east.html | 2 +-
 site/news/registration-open-for-spark-summit-east.html  | 2 +-
 site/news/run-spark-and-shark-on-amazon-emr.html| 2 +-
 site/news/spark-0-6-1-and-0-5-2-released.html   | 2 +-
 site/news/spark-0-6-2-released.html | 2 +-
 site/news/spark-0-7-0-released.html | 2 +-
 site/news/spark-0-7-2-released.html | 2 +-
 site/news/spark-0-7-3-released.html | 2 +-
 site/news/spark-0-8-0-released.html | 2 +-
 site/news/spark-0-8-1-released.html | 2 +-
 site/news/spark-0-9-0-released.html | 2 +-
 site/news/spark-0-9-1-released.html | 2 +-
 site/news/spark-0-9-2-released.html | 2 +-
 site/news/spark-1-0-0-released.html | 2 +-
 site/news/spark-1-0-1-released.html | 2 +-
 site/news/spark-1-0-2-released.html | 2 +-
 site/news/spark-1-1-0-released.html | 2 +-
 site/news/spark-1-1-1-released.html | 2 +-
 site/news/spark-1-2-0-released.html | 2 +-
 site/news/spark-1-2-1-released.html | 2 +-
 site/news/spark-1-2-2-released.html | 2 +-
 site/news/spark-1-3-0-released.html | 2 +-
 site/news/spark-1-4-0-released.html | 2 +-
 site/news/spark-1-4-1-released.html | 2 +-
 site/news/spark-1-5-0-released.html | 2 +-
 site/news/spark-1-5-1-released.html | 2 +-
 site/news/spark-1-5-2-released.html | 2 +-
 site/news/spark-1-6-0-released.html | 2 +-
 site/news/spark-1-6-1-released.html | 2 +-
 site/news/spark-1-6-2-released.html | 2 +-
 site/news/spark-1-6-3-released.html | 2 +-
 site/news/spark-2-0-0-released.html | 2 +-
 site/news/spark-2-0-1-released.html | 2 +-
 site/news/spark-2-0-2-released.html | 2 +-
 site/news/spark-2-1-0-released.html | 2 +-
 site/news/spark-2-1-1-released.html | 2 +-
 site/news/spark-2-1-2-released.html | 2 +-
 site/news/spark-2-2-0-released.html | 2 +-
 site/news/spark-2-2-1-released.html | 2 +-
 site/news/spark-2-3-0-released.html | 2 +-
 site/news/spark-2.0.0-preview.html  | 2 +-
 

[1/4] spark-website git commit: Apache Spark 2.3.0 Release: Announcements & Release Notes

2018-02-28 Thread sameerag
Repository: spark-website
Updated Branches:
  refs/heads/asf-site 26c57a24a -> d1ea0db8e


http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/screencasts/index.html
--
diff --git a/site/screencasts/index.html b/site/screencasts/index.html
index e865ea1..60fd80f 100644
--- a/site/screencasts/index.html
+++ b/site/screencasts/index.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/security.html
--
diff --git a/site/security.html b/site/security.html
index cf789dd..b2b7060 100644
--- a/site/security.html
+++ b/site/security.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/sitemap.xml
--
diff --git a/site/sitemap.xml b/site/sitemap.xml
index 00ce7ef..9662385 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -139,6 +139,14 @@
 
 
 
+  https://spark.apache.org/releases/spark-release-2-3-0.html
+  weekly
+
+
+  https://spark.apache.org/news/spark-2-3-0-released.html
+  weekly
+
+
   https://spark.apache.org/releases/spark-release-2-2-1.html
   weekly
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/sql/index.html
--
diff --git a/site/sql/index.html b/site/sql/index.html
index d57efb9..a154d1f 100644
--- a/site/sql/index.html
+++ b/site/sql/index.html
@@ -164,6 +164,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -173,9 +176,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/streaming/index.html
--
diff --git a/site/streaming/index.html b/site/streaming/index.html
index ca6af2e..652c0f3 100644
--- a/site/streaming/index.html
+++ b/site/streaming/index.html
@@ -164,6 +164,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -173,9 +176,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/third-party-projects.html
--
diff --git a/site/third-party-projects.html b/site/third-party-projects.html
index 6661c7a..30d59f1 100644
--- a/site/third-party-projects.html
+++ b/site/third-party-projects.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/trademarks.html
--
diff --git a/site/trademarks.html b/site/trademarks.html
index 8be189b..98c9a30 100644
--- a/site/trademarks.html
+++ b/site/trademarks.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda 

[2/4] spark-website git commit: Apache Spark 2.3.0 Release: Announcements & Release Notes

2018-02-28 Thread sameerag
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-7-2.html
--
diff --git a/site/releases/spark-release-0-7-2.html 
b/site/releases/spark-release-0-7-2.html
index f35fd59..7e2073d 100644
--- a/site/releases/spark-release-0-7-2.html
+++ b/site/releases/spark-release-0-7-2.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-7-3.html
--
diff --git a/site/releases/spark-release-0-7-3.html 
b/site/releases/spark-release-0-7-3.html
index 9668943..81d9192 100644
--- a/site/releases/spark-release-0-7-3.html
+++ b/site/releases/spark-release-0-7-3.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-8-0.html
--
diff --git a/site/releases/spark-release-0-8-0.html 
b/site/releases/spark-release-0-8-0.html
index 4163667..c11bf36 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-8-1.html
--
diff --git a/site/releases/spark-release-0-8-1.html 
b/site/releases/spark-release-0-8-1.html
index 7828f41..ebaa099 100644
--- a/site/releases/spark-release-0-8-1.html
+++ b/site/releases/spark-release-0-8-1.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-9-0.html
--
diff --git a/site/releases/spark-release-0-9-0.html 
b/site/releases/spark-release-0-9-0.html
index d2218ed..153f635 100644
--- a/site/releases/spark-release-0-9-0.html
+++ b/site/releases/spark-release-0-9-0.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/releases/spark-release-0-9-1.html
--
diff --git a/site/releases/spark-release-0-9-1.html 
b/site/releases/spark-release-0-9-1.html
index adcfa08..a927494 100644
--- a/site/releases/spark-release-0-9-1.html
+++ b/site/releases/spark-release-0-9-1.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 


[4/4] spark-website git commit: Apache Spark 2.3.0 Release: Announcements & Release Notes

2018-02-28 Thread sameerag
Apache Spark 2.3.0 Release: Announcements & Release Notes


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/d1ea0db8
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/d1ea0db8
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/d1ea0db8

Branch: refs/heads/asf-site
Commit: d1ea0db8ef6055ddbc24df687b1c1c2a01e002df
Parents: 26c57a2
Author: Sameer Agarwal 
Authored: Wed Feb 28 03:59:14 2018 -0800
Committer: Sameer Agarwal 
Committed: Wed Feb 28 13:19:48 2018 -0800

--
 documentation.md|   1 +
 js/downloads.js |   1 +
 news/_posts/2018-02-28-spark-2-3-0-released.md  |  14 +
 .../_posts/2018-02-28-spark-release-2-3-0.md| 169 +++
 site/committers.html|   6 +-
 site/community.html |   6 +-
 site/contributing.html  |   6 +-
 site/developer-tools.html   |   6 +-
 site/documentation.html |   7 +-
 site/downloads.html |   6 +-
 site/examples.html  |   6 +-
 site/faq.html   |   6 +-
 site/graphx/index.html  |   6 +-
 site/improvement-proposals.html |   6 +-
 site/index.html |   6 +-
 site/js/downloads.js|   1 +
 site/mailing-lists.html |   6 +-
 site/mllib/index.html   |   6 +-
 site/news/amp-camp-2013-registration-ope.html   |   6 +-
 .../news/announcing-the-first-spark-summit.html |   6 +-
 .../news/fourth-spark-screencast-published.html |   6 +-
 site/news/index.html|  15 +-
 site/news/nsdi-paper.html   |   6 +-
 site/news/one-month-to-spark-summit-2015.html   |   6 +-
 .../proposals-open-for-spark-summit-east.html   |   6 +-
 ...registration-open-for-spark-summit-east.html |   6 +-
 .../news/run-spark-and-shark-on-amazon-emr.html |   6 +-
 site/news/spark-0-6-1-and-0-5-2-released.html   |   6 +-
 site/news/spark-0-6-2-released.html |   6 +-
 site/news/spark-0-7-0-released.html |   6 +-
 site/news/spark-0-7-2-released.html |   6 +-
 site/news/spark-0-7-3-released.html |   6 +-
 site/news/spark-0-8-0-released.html |   6 +-
 site/news/spark-0-8-1-released.html |   6 +-
 site/news/spark-0-9-0-released.html |   6 +-
 site/news/spark-0-9-1-released.html |   6 +-
 site/news/spark-0-9-2-released.html |   6 +-
 site/news/spark-1-0-0-released.html |   6 +-
 site/news/spark-1-0-1-released.html |   6 +-
 site/news/spark-1-0-2-released.html |   6 +-
 site/news/spark-1-1-0-released.html |   6 +-
 site/news/spark-1-1-1-released.html |   6 +-
 site/news/spark-1-2-0-released.html |   6 +-
 site/news/spark-1-2-1-released.html |   6 +-
 site/news/spark-1-2-2-released.html |   6 +-
 site/news/spark-1-3-0-released.html |   6 +-
 site/news/spark-1-4-0-released.html |   6 +-
 site/news/spark-1-4-1-released.html |   6 +-
 site/news/spark-1-5-0-released.html |   6 +-
 site/news/spark-1-5-1-released.html |   6 +-
 site/news/spark-1-5-2-released.html |   6 +-
 site/news/spark-1-6-0-released.html |   6 +-
 site/news/spark-1-6-1-released.html |   6 +-
 site/news/spark-1-6-2-released.html |   6 +-
 site/news/spark-1-6-3-released.html |   6 +-
 site/news/spark-2-0-0-released.html |   6 +-
 site/news/spark-2-0-1-released.html |   6 +-
 site/news/spark-2-0-2-released.html |   6 +-
 site/news/spark-2-1-0-released.html |   6 +-
 site/news/spark-2-1-1-released.html |   6 +-
 site/news/spark-2-1-2-released.html |   6 +-
 site/news/spark-2-2-0-released.html |   6 +-
 site/news/spark-2-2-1-released.html |   6 +-
 site/news/spark-2-3-0-released.html | 224 +
 site/news/spark-2.0.0-preview.html  |   6 +-
 .../spark-accepted-into-apache-incubator.html   |   6 +-
 site/news/spark-and-shark-in-the-news.html  |   6 +-
 site/news/spark-becomes-tlp.html|   6 +-
 site/news/spark-featured-in-wired.html  |   6 +-
 .../spark-mailing-lists-moving-to-apache.html   |   6 +-
 site/news/spark-meetups.html|   6 +-
 site/news/spark-screencasts-published.html  |   6 +-
 site/news/spark-summit-2013-is-a-wrap.html  |   6 +-
 site/news/spark-summit-2014-videos-posted.html  |   6 +-
 

[3/4] spark-website git commit: Apache Spark 2.3.0 Release: Announcements & Release Notes

2018-02-28 Thread sameerag
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-2-2-released.html
--
diff --git a/site/news/spark-1-2-2-released.html 
b/site/news/spark-1-2-2-released.html
index ef2cd3c..ca320b2 100644
--- a/site/news/spark-1-2-2-released.html
+++ b/site/news/spark-1-2-2-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-3-0-released.html
--
diff --git a/site/news/spark-1-3-0-released.html 
b/site/news/spark-1-3-0-released.html
index 5fed5f6..9ddf3e4 100644
--- a/site/news/spark-1-3-0-released.html
+++ b/site/news/spark-1-3-0-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-4-0-released.html
--
diff --git a/site/news/spark-1-4-0-released.html 
b/site/news/spark-1-4-0-released.html
index e302a95..316bd42 100644
--- a/site/news/spark-1-4-0-released.html
+++ b/site/news/spark-1-4-0-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-4-1-released.html
--
diff --git a/site/news/spark-1-4-1-released.html 
b/site/news/spark-1-4-1-released.html
index 7872b14..264d5e2 100644
--- a/site/news/spark-1-4-1-released.html
+++ b/site/news/spark-1-4-1-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-5-0-released.html
--
diff --git a/site/news/spark-1-5-0-released.html 
b/site/news/spark-1-5-0-released.html
index fde95bd..81918b1 100644
--- a/site/news/spark-1-5-0-released.html
+++ b/site/news/spark-1-5-0-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-5-1-released.html
--
diff --git a/site/news/spark-1-5-1-released.html 
b/site/news/spark-1-5-1-released.html
index 3c5ff57..f8ebf52 100644
--- a/site/news/spark-1-5-1-released.html
+++ b/site/news/spark-1-5-1-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.3.0 
released
+  (Feb 28, 2018)
+
   Spark 2.2.1 
released
   (Dec 01, 2017)
 
@@ -170,9 +173,6 @@
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
-  Spark 2.2.0 
released
-  (Jul 11, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d1ea0db8/site/news/spark-1-5-2-released.html

[spark] Git Push Summary

2018-02-27 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0-rc5 [deleted] 992447fb3




[spark] Git Push Summary

2018-02-27 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0 [created] 992447fb3




svn commit: r25325 - in /dev/spark: 2.3.0-SNAPSHOT-2017_12_09_08_01-ab1b6ee-docs/ 2.3.0-SNAPSHOT-2017_12_11_00_01-4289ac9-docs/ 2.3.0-SNAPSHOT-2017_12_11_08_01-6cc7021-docs/ 2.3.0-SNAPSHOT-2017_12_11_

2018-02-27 Thread sameerag
Author: sameerag
Date: Wed Feb 28 07:28:47 2018
New Revision: 25325

Log:
Removing extraneous doc snapshot uploads

Removed:
dev/spark/2.3.0-SNAPSHOT-2017_12_09_08_01-ab1b6ee-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_11_00_01-4289ac9-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_11_08_01-6cc7021-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_11_12_01-bf20abb-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_11_16_01-3d82f6e-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_11_20_01-a400265-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_12_00_01-ecc179e-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_12_04_01-bc8933f-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_12_12_01-7a51e71-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_12_16_01-17cdabb-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_12_20_01-c7d0148-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_00_01-682eb4f-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_04_01-7453ab0-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_08_02-58f7c82-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_12_01-1abcbed-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_16_01-ef92999-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_13_20_01-2a29a60-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_14_00_01-c3dd2a2-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_14_04_01-7d8e2ca-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_14_12_01-6d99940-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_14_16_01-0ea2d8c-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_15_00_01-3775dd3-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_15_12_01-4677623-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_15_20_01-0c8fca4-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_17_00_01-c2aeddf-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_17_12_01-7f6d10a-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_18_16_01-0609dcc-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_18_20_01-d4e6959-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_19_08_01-b779c93-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_19_12_01-6129ffa-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_20_00_01-9962390-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_20_12_02-c89b431-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_20_16_01-b176014-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_20_20_01-d3ae3e1-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_00_01-cb9fc8d-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_04_01-59d5263-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_08_01-0abaf31-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_12_01-fe65361-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_16_01-7beb375-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_21_20_01-a36b78b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_22_00_01-8df1da3-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_22_04_01-13190a4-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_22_16_01-d23dc5b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_23_00_01-8941a4a-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_23_12_01-1219d7a-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_24_12_01-0bf1a74-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_25_00_01-fba0313-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_25_04_39-12d20dd-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_25_20_01-be03d3a-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_26_00_01-0e68330-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_26_08_01-ff48b1b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_26_12_01-91d1b30-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_26_20_01-6674acd-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_27_04_01-b8bfce5-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_27_08_01-774715d-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_27_20_01-753793b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_00_01-ded6d27-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_04_01-1eebfbe-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_08_01-5536f31-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_12_01-8f6d573-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_16_01-ffe6fd7-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_28_20_01-c745730-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_00_01-224375c-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_04_01-cc30ef8-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_08_01-11a849b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_12_01-66a7d6b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_16_01-ccda75b-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_29_20_01-8169630-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_30_00_01-14c4a62-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_30_08_01-234d943-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_30_12_01-ea0a5ee-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_31_00_01-cfbe11e-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_31_08_01-028ee40-docs/
dev/spark/2.3.0-SNAPSHOT-2017_12_31_16_01-994065d-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_01_08_01-f5b7714-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_01_20_01-e0c090f-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_02_08_01-a6fc300-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_03_08_01-a66fe36-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_03_12_01-9a2b65a-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_03_13_34-79f7263-docs/
dev/spark/2.3.0-SNAPSHOT-2018_01_03_16_01-b297029-docs/
dev/spark

svn commit: r25227 - /dev/spark/v2.3.0-rc5-bin/spark-parent_2.11.iml

2018-02-22 Thread sameerag
Author: sameerag
Date: Thu Feb 22 21:25:59 2018
New Revision: 25227

Log:
remove iml file

Removed:
dev/spark/v2.3.0-rc5-bin/spark-parent_2.11.iml





svn commit: r25225 - in /dev/spark/v2.3.0-rc5-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2018-02-22 Thread sameerag
Author: sameerag
Date: Thu Feb 22 20:12:29 2018
New Revision: 25225

Log:
Apache Spark v2.3.0-rc5 docs


[This commit notification would consist of 1446 parts, which exceeds the limit of 50, so it was shortened to this summary.]




svn commit: r25224 - /dev/spark/v2.3.0-rc5-bin/

2018-02-22 Thread sameerag
Author: sameerag
Date: Thu Feb 22 19:54:10 2018
New Revision: 25224

Log:
Apache Spark v2.3.0-rc5

Added:
dev/spark/v2.3.0-rc5-bin/
dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.6.tgz.asc
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.6.tgz.md5
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.6.tgz.sha512
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.7.tgz.md5
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-hadoop2.7.tgz.sha512
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-without-hadoop.tgz   (with props)
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-without-hadoop.tgz.asc
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-without-hadoop.tgz.md5
dev/spark/v2.3.0-rc5-bin/spark-2.3.0-bin-without-hadoop.tgz.sha512
dev/spark/v2.3.0-rc5-bin/spark-2.3.0.tgz   (with props)
dev/spark/v2.3.0-rc5-bin/spark-2.3.0.tgz.asc
dev/spark/v2.3.0-rc5-bin/spark-2.3.0.tgz.md5
dev/spark/v2.3.0-rc5-bin/spark-2.3.0.tgz.sha512
dev/spark/v2.3.0-rc5-bin/spark-parent_2.11.iml

Added: dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.asc Thu Feb 22 19:54:10 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqPHhQACgkQ3OS/2AdG
+HpZEhg//UoO4iDZFLKxlxuLQtKg3Vfa4laoY1/8TVZMj7GAOA9TIT1qDYoVHIEFx
+5X6+MrvjskgmWFNJL0cB+KK86n5/ZgmJmM7gV6DKYl4MsDG+EQQI3GOKuXeJbvlh
+7gNtKhM1Gz2nQFyyg/6E6+m4XKDUdlg5MnkEDgHetjgl4zR6PDDAGxrRbJFVaZeJ
+aKhusnXPLMlRdLKZPcRVLN5DN3BLyHbQRyeHUY8OJYhQjIP431gPA+1ULeb9SzKW
+PJ/zX+WcosB1o9fv+rDcaAvYr/1WZkW+r4uUWWWTlivTZPwb0sPuUd1xxzfLtb/M
+MpcraXpNIliIQgAKXKmAm+fAWbRpu7W71saEB5rofO39sXJDY9w6iJ33AqYxcRuh
++IBFcnxViBB5yPOpHMfSPaLXCeeeMoPmxfnYA8+hLYM54yrFK0EQMLWpROSMe4ZT
+V2k3YfI4HwQgWy6rD2Qv9iKEkDb8UXDPbZnElel0qzcYhvjIJ/bfglIhmVUEtRYx
+2ZJ1corXCf6rQ8gP9LQ61WuY3NkNMKRj9N+IhPrO9QxVPve5V0KigAUUb4CvtvkJ
+dJiApsjbvMqc0DbAv4AvXYmlIFCSSTeBBA5aNiPw3zUBcLXofCS52aSgYDhTIJ3c
+iSwCsKEANi8QIeBx4o5uvXclGlPz14STA6D3q7ycl7ACiz5KkCQ=
+=O08n
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.md5 (added)
+++ dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.md5 Thu Feb 22 19:54:10 2018
@@ -0,0 +1 @@
+SparkR_2.3.0.tar.gz: 65 0D A7 D2 99 32 90 A7  BF 6D 7E 05 C6 B9 5E 7D

Added: dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.sha512
==
--- dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.sha512 (added)
+++ dev/spark/v2.3.0-rc5-bin/SparkR_2.3.0.tar.gz.sha512 Thu Feb 22 19:54:10 2018
@@ -0,0 +1,3 @@
+SparkR_2.3.0.tar.gz: BC8B59FF A0A18B29 92B02794 4A9E21B6 A914D4F2 E01D5D4A
+ FB2A6C01 5B2152C5 C11E8240 5E0E3A02 C8719E99 AF3FC722
+ E3D7AD3A E303BDB1 505DFB84 B265CF22
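
The .md5 and .sha512 sidecars above carry whitespace-grouped hex digests of the adjacent artifacts, and the .asc files are detached PGP signatures checked separately with GnuPG against the release manager's key. As a hedged sketch (not part of the release tooling; the file name and digest below are placeholders), a downloaded tarball can be verified against a published SHA-512 by recomputing the digest and comparing after the grouping whitespace is stripped:

import java.io.FileInputStream
import java.security.MessageDigest

// Sketch only: recompute SHA-512 of a local file and compare it with the
// grouped hex digest published in a .sha512 sidecar.
object VerifySha512 {
  def sha512Hex(path: String): String = {
    val md = MessageDigest.getInstance("SHA-512")
    val in = new FileInputStream(path)
    try {
      val buf = new Array[Byte](8192)
      var n = in.read(buf)
      while (n != -1) { md.update(buf, 0, n); n = in.read(buf) }
    } finally in.close()
    md.digest().map("%02x".format(_)).mkString   // lowercase, ungrouped hex
  }

  def main(args: Array[String]): Unit = {
    // Paste the digest from the .sha512 file; spaces/newlines are layout only.
    val published = "<digest copied from the .sha512 file>"
      .replaceAll("\\s", "").toLowerCase
    val actual = sha512Hex("SparkR_2.3.0.tar.gz")
    println(if (actual == published) "checksum OK" else "checksum MISMATCH")
  }
}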

Added: dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc5-bin/pyspark-2.3.0.tar.gz.asc Thu Feb 22 19:54:10 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqPHRIACgkQ3OS/2AdG
+HpZfNRAAkf6SmmtFJ9C5tKIYrOSE47zIfdLe4DTKMaN+mac3iDo+uUM5HQbiE5eE
+vD7tsRWG6fHcObLbPLqQCXAapLwt1m1pHmJXVns7pUhkSoZ+aGcsiqcL0KE7liFW
+Ed+OBGzgurp3ORd01W5nUf/TbRdserxjjUs6rImJIrkYA4Ba8aUuLKgMZVpWKGVO

[2/2] spark git commit: Preparing development version 2.3.1-SNAPSHOT

2018-02-22 Thread sameerag
Preparing development version 2.3.1-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/285b841f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/285b841f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/285b841f

Branch: refs/heads/branch-2.3
Commit: 285b841ffbfb21c0af3f83800f7815fb0bfe3627
Parents: 992447f
Author: Sameer Agarwal 
Authored: Thu Feb 22 09:57:03 2018 -0800
Committer: Sameer Agarwal 
Committed: Thu Feb 22 09:57:03 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/285b841f/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6d46c31..29a8a00 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.0
+Version: 2.3.1
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/285b841f/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 2ca9ab6..5c5a8e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.0
+2.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/285b841f/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 404c744..2a625da 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.0
+2.3.1-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/285b841f/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 3c0b528..adb1890 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.0
+2.3.1-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/285b841f/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index fe3bcfd..4cdcfa2 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 

[spark] Git Push Summary

2018-02-22 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0-rc5 [created] 992447fb3




[1/2] spark git commit: Preparing Spark release v2.3.0-rc5

2018-02-22 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 a0d794989 -> 285b841ff


Preparing Spark release v2.3.0-rc5


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/992447fb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/992447fb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/992447fb

Branch: refs/heads/branch-2.3
Commit: 992447fb30ee9ebb3cf794f2d06f4d63a2d792db
Parents: a0d7949
Author: Sameer Agarwal 
Authored: Thu Feb 22 09:56:57 2018 -0800
Committer: Sameer Agarwal 
Committed: Thu Feb 22 09:56:57 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/992447fb/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 29a8a00..6d46c31 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.1
+Version: 2.3.0
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/992447fb/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 5c5a8e9..2ca9ab6 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.1-SNAPSHOT
+2.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/992447fb/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 2a625da..404c744 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.1-SNAPSHOT
+2.3.0
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/992447fb/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index adb1890..3c0b528 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-2.3.1-SNAPSHOT
+2.3.0
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/992447fb/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 4cdcfa2..fe3bcfd 100644
--- 

spark git commit: [SPARK-23470][UI] Use first attempt of last stage to define job description.

2018-02-20 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 c7a0dea46 -> a1ee6f1fc


[SPARK-23470][UI] Use first attempt of last stage to define job description.

This is much faster than finding out what the last attempt is, and the
data should be the same.

There's room for improvement in this page (like only loading data for
the jobs being shown, instead of loading all available jobs and sorting
them), but this should bring performance on par with the 2.2 version.

Author: Marcelo Vanzin 

Closes #20644 from vanzin/SPARK-23470.

(cherry picked from commit 2ba77ed9e51922303e3c3533e368b95788bd7de5)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a1ee6f1f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a1ee6f1f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a1ee6f1f

Branch: refs/heads/branch-2.3
Commit: a1ee6f1fc543120763f1b373bb31bc6d84004318
Parents: c7a0dea
Author: Marcelo Vanzin 
Authored: Tue Feb 20 17:54:06 2018 -0800
Committer: Sameer Agarwal 
Committed: Tue Feb 20 17:54:17 2018 -0800

--
 core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/a1ee6f1f/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
--
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala 
b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
index a4710f6..08a927a 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
@@ -1040,7 +1040,7 @@ private[ui] object ApiHelper {
   }
 
   def lastStageNameAndDescription(store: AppStatusStore, job: JobData): 
(String, String) = {
-val stage = store.asOption(store.lastStageAttempt(job.stageIds.max))
+val stage = store.asOption(store.stageAttempt(job.stageIds.max, 0))
 (stage.map(_.name).getOrElse(""), 
stage.flatMap(_.description).getOrElse(job.name))
   }
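
The one-line change above replaces a scan for the stage's last attempt with a direct lookup of attempt 0. As a minimal sketch of the trade-off — assuming a hypothetical in-memory store, not Spark's actual KVStore-backed AppStatusStore — finding the last attempt must enumerate every attempt recorded for the stage, while fetching a known (stageId, attemptId) pair is a single keyed read, and for the jobs listing both paths yield the same name and description:

object StageLookupSketch extends App {
  // Hypothetical store for illustration only; names and shapes are assumptions.
  case class StageAttempt(stageId: Int, attemptId: Int,
                          name: String, description: Option[String])

  class DemoStore(attempts: Seq[StageAttempt]) {
    private val byKey = attempts.map(a => (a.stageId, a.attemptId) -> a).toMap

    // Old path: filter every stored attempt, then pick the highest attempt id.
    def lastStageAttempt(stageId: Int): StageAttempt =
      attempts.filter(_.stageId == stageId).maxBy(_.attemptId)

    // New path: constant-time lookup of a known (stageId, attemptId) pair.
    def stageAttempt(stageId: Int, attemptId: Int): StageAttempt =
      byKey((stageId, attemptId))
  }

  val store = new DemoStore(Seq(
    StageAttempt(3, 0, "collect at App.scala:42", Some("nightly ETL")),
    StageAttempt(3, 1, "collect at App.scala:42", Some("nightly ETL"))))

  // Attempt 0 exists for any stage the job ran and carries the same
  // name/description as later attempts, so the cheaper lookup is safe here.
  assert(store.lastStageAttempt(3).description == store.stageAttempt(3, 0).description)
}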
 





spark git commit: [SPARK-23470][UI] Use first attempt of last stage to define job description.

2018-02-20 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 3e48f3b9e -> 2ba77ed9e


[SPARK-23470][UI] Use first attempt of last stage to define job description.

This is much faster than finding out what the last attempt is, and the
data should be the same.

There's room for improvement in this page (like only loading data for
the jobs being shown, instead of loading all available jobs and sorting
them), but this should bring performance on par with the 2.2 version.

Author: Marcelo Vanzin 

Closes #20644 from vanzin/SPARK-23470.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2ba77ed9
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2ba77ed9
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2ba77ed9

Branch: refs/heads/master
Commit: 2ba77ed9e51922303e3c3533e368b95788bd7de5
Parents: 3e48f3b
Author: Marcelo Vanzin 
Authored: Tue Feb 20 17:54:06 2018 -0800
Committer: Sameer Agarwal 
Committed: Tue Feb 20 17:54:06 2018 -0800

--
 core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/2ba77ed9/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
--
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala 
b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
index a9265d4..ac83de1 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
@@ -1048,7 +1048,7 @@ private[ui] object ApiHelper {
   }
 
   def lastStageNameAndDescription(store: AppStatusStore, job: JobData): 
(String, String) = {
-val stage = store.asOption(store.lastStageAttempt(job.stageIds.max))
+val stage = store.asOption(store.stageAttempt(job.stageIds.max, 0))
 (stage.map(_.name).getOrElse(""), 
stage.flatMap(_.description).getOrElse(job.name))
   }
 





svn commit: r25154 - /dev/spark/v2.3.0-rc4-bin/spark-parent_2.11.iml

2018-02-19 Thread sameerag
Author: sameerag
Date: Tue Feb 20 04:47:53 2018
New Revision: 25154

Log:
remove iml file

Removed:
dev/spark/v2.3.0-rc4-bin/spark-parent_2.11.iml





svn commit: r25119 - in /dev/spark/v2.3.0-rc4-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2018-02-17 Thread sameerag
Author: sameerag
Date: Sat Feb 17 21:14:05 2018
New Revision: 25119

Log:
Apache Spark v2.3.0-rc4 docs


[This commit notification would consist of 1446 parts, which exceeds the limit of 50, so it was shortened to this summary.]




svn commit: r25118 - /dev/spark/v2.3.0-rc4-bin/

2018-02-17 Thread sameerag
Author: sameerag
Date: Sat Feb 17 20:56:56 2018
New Revision: 25118

Log:
Apache Spark v2.3.0-rc4

Added:
dev/spark/v2.3.0-rc4-bin/
dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.6.tgz.asc
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.6.tgz.md5
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.6.tgz.sha512
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.7.tgz.md5
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-hadoop2.7.tgz.sha512
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-without-hadoop.tgz   (with props)
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-without-hadoop.tgz.asc
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-without-hadoop.tgz.md5
dev/spark/v2.3.0-rc4-bin/spark-2.3.0-bin-without-hadoop.tgz.sha512
dev/spark/v2.3.0-rc4-bin/spark-2.3.0.tgz   (with props)
dev/spark/v2.3.0-rc4-bin/spark-2.3.0.tgz.asc
dev/spark/v2.3.0-rc4-bin/spark-2.3.0.tgz.md5
dev/spark/v2.3.0-rc4-bin/spark-2.3.0.tgz.sha512
dev/spark/v2.3.0-rc4-bin/spark-parent_2.11.iml

Added: dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.asc Sat Feb 17 20:56:56 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqIlZwACgkQ3OS/2AdG
+Hpa6kw//YxpgAOoqwXnHeEcRoYYjaSPiVdTFYWFRXC3xrDlc9iIzaRQ+gNukuLj9
+qFusWHyAMnjeRiBNA+jR478Y00syHIEiZJ1E36+2IbnozDGXCdLkCPeTowU+6cRU
+B9pPvLWgtWi6pI7IWCVA1iO4mqtikwd9afZPOsOhXiZo2wPGZ/BRSIhXPYNjYA5y
+vvimXQuB163YWDmpu67KT71G8lmNa+mOOTg7vBDhf0qTU520yWtfdOLjJdmyrNWj
+VECN9pl1ear4PsAKnclLg3UOyQIjyCmAeHEEVSZABQtuaw2pVsOwUyFK1zozPRR4
+lGXMGkbG0Q02vhV5yKoaCS8vJzg1nWOXS3/Auf2cFpyhK56biXDBDIy5e/NghCMd
+Tenetz6iBR5MzDEsD7G1sz1KI4H0oYoQBFKynw2AurdB+gfUpUs1vK3oAYy1ihH7
+ASMcKvYuJVJpDAcNJHu/yWVOOmhiESQovocKByDFujTgbVk4EjrrkJ/4r4cQxZOC
+mEPJQ3Z1fOp3vj9gfTNdS7cARGU4UhkUvg81ZyxTgJ3urD+CD07qQB74YWqnL6T+
+FGmLLijGiCdBwhE/IxKPGs4gE3hWWMPAnVMQR8ECiV5PFlEUda+I38zVZzqdpu00
+rHtVmMCutcL/0rmt3FkqB15nTtGvg7BvFb+jKYrwRXWDDDp3Ue8=
+=DtbG
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.md5 (added)
+++ dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.md5 Sat Feb 17 20:56:56 2018
@@ -0,0 +1 @@
+SparkR_2.3.0.tar.gz: 9E 4D A1 B3 8E 56 73 12  95 20 54 CF 87 9F 6C 19

Added: dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.sha512
==
--- dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.sha512 (added)
+++ dev/spark/v2.3.0-rc4-bin/SparkR_2.3.0.tar.gz.sha512 Sat Feb 17 20:56:56 2018
@@ -0,0 +1,3 @@
+SparkR_2.3.0.tar.gz: 7A3842A4 2C3E450B 5B5AB3F4 39B0E2AE 8CFA6E10 7C8A73AA
+ F382B104 F8C0A227 EEA15F40 EF76A9A1 84BF3176 63D68992
+ F5221944 CE58D7BD B8290FF4 450B9F3C

Added: dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc4-bin/pyspark-2.3.0.tar.gz.asc Sat Feb 17 20:56:56 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqIlLoACgkQ3OS/2AdG
+HpaZ9w/6As105Iae5GJ09dfF9dkYGic49pEpahZwCn3Ho97QlJIhvMs5IoJ0gC8V
+M/GoxxEoht1Mm7foqsZKPfeD46hcyo7mcSoliE0r2zLbMMeRheoDebimt7Bvkegq
+GwqmIeFhol8DJzBfbIzGf2Ixoc4YXKH733F/55RC7epNcaZZ813kuqunNJOajIOb
+dqAt/cxhZonHdKjbZUu1rXh6mZ

[1/2] spark git commit: Preparing Spark release v2.3.0-rc4

2018-02-16 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 8360da071 -> c7a0dea46


Preparing Spark release v2.3.0-rc4


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/44095cb6
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/44095cb6
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/44095cb6

Branch: refs/heads/branch-2.3
Commit: 44095cb65500739695b0324c177c19dfa1471472
Parents: 8360da0
Author: Sameer Agarwal 
Authored: Fri Feb 16 17:29:46 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Feb 16 17:29:46 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/44095cb6/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 29a8a00..6d46c31 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.1
+Version: 2.3.0
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/44095cb6/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 5c5a8e9..2ca9ab6 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/44095cb6/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 2a625da..404c744 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/44095cb6/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index adb1890..3c0b528 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/44095cb6/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 4cdcfa2..fe3bcfd 100644
--- 

[spark] Git Push Summary

2018-02-16 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0-rc4 [created] 44095cb65

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[2/2] spark git commit: Preparing development version 2.3.1-SNAPSHOT

2018-02-16 Thread sameerag
Preparing development version 2.3.1-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c7a0dea4
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c7a0dea4
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c7a0dea4

Branch: refs/heads/branch-2.3
Commit: c7a0dea46a251a27b304ac2ec9f07f97aca4b1d0
Parents: 44095cb
Author: Sameer Agarwal 
Authored: Fri Feb 16 17:29:51 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Feb 16 17:29:51 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c7a0dea4/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6d46c31..29a8a00 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.0
+Version: 2.3.1
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/c7a0dea4/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 2ca9ab6..5c5a8e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c7a0dea4/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 404c744..2a625da 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c7a0dea4/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 3c0b528..adb1890 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/c7a0dea4/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index fe3bcfd..4cdcfa2 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 

svn commit: r24996 - in /dev/spark/v2.3.0-rc3-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2018-02-12 Thread sameerag
Author: sameerag
Date: Tue Feb 13 05:31:05 2018
New Revision: 24996

Log:
Apache Spark v2.3.0-rc3 docs


[This commit notification would consist of 1446 parts, 
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r24987 - /dev/spark/v2.3.0-rc3-bin/

2018-02-12 Thread sameerag
Author: sameerag
Date: Tue Feb 13 00:54:47 2018
New Revision: 24987

Log:
Apache Spark v2.3.0-rc3

Added:
dev/spark/v2.3.0-rc3-bin/
dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.6.tgz.asc
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.6.tgz.md5
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.6.tgz.sha512
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz.md5
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-hadoop2.7.tgz.sha512
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-without-hadoop.tgz   (with props)
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-without-hadoop.tgz.asc
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-without-hadoop.tgz.md5
dev/spark/v2.3.0-rc3-bin/spark-2.3.0-bin-without-hadoop.tgz.sha512
dev/spark/v2.3.0-rc3-bin/spark-2.3.0.tgz   (with props)
dev/spark/v2.3.0-rc3-bin/spark-2.3.0.tgz.asc
dev/spark/v2.3.0-rc3-bin/spark-2.3.0.tgz.md5
dev/spark/v2.3.0-rc3-bin/spark-2.3.0.tgz.sha512
dev/spark/v2.3.0-rc3-bin/spark-parent_2.11.iml

Added: dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.asc Tue Feb 13 00:54:47 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqCNqQACgkQ3OS/2AdG
+Hpa5dA/+LUIl5WF/ks8FLYWM+YTtnzYy9NJsxL0Zk01zr/9UrcFciiuvkaiNFYsE
+fPFD0N+UjHydUnrTz7ysna02+AWuRbq/mlBkrJK+sfOFoT0fl0DMNLOZiPLlvq5S
+tvmv1iNjtZNCe5kFUB5XQ1aFI/9zlp9BgJAm/x7oCUe8uwEKpYfUVvQ+o6y01RvE
+XInst4XgS1ObKKRF1jE9QB+TxMysmvk7c0HFIgvfAi1bd9g2ilyGcyi77iFrjmk7
+riXqDFIF39Zm3sZpQnDn2lqMlfmzW2ymrHy4UrV76FWb6f/ExKHNw3kV7a62pudv
+/ao2TQkxLLnodRuptru+gEk4mLJoc4XkSftg5RL94s2wxCroPx3c05iu0wfsp+DL
+pzxGacJa3tKNKSxyTcrhY8pyq1OefpSrrVPhpsXGwUqpR4X2/6Aql0Cojuu29C4J
+1ZZFtzjq7S82uiv88Stb55XOjCJRL91rTlGYok53c8+FsAK7ofcO0opUGbtJaYMy
+gpLnIddrUisiZoxzdpPmf8R4IGM7Okg+VEz/0LowN9XoL/ck65p+ASW593Wzk0W7
+TrvpZcfAO3M5ELg1CTP9PMcKWTkFJ19DjEeBt0CirIJzP5GJuJX/opItAfaD/opz
+CPMsAcjPpq9x332x0JIgUnTpC3G0WPI575EPhH1DVRHl2EfzCRc=
+=QwEF
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.md5 (added)
+++ dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.md5 Tue Feb 13 00:54:47 2018
@@ -0,0 +1 @@
+SparkR_2.3.0.tar.gz: 30 5B 98 63 D3 86 C0 18  A7 32 7C 79 80 FE 19 17

Added: dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.sha512
==
--- dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.sha512 (added)
+++ dev/spark/v2.3.0-rc3-bin/SparkR_2.3.0.tar.gz.sha512 Tue Feb 13 00:54:47 2018
@@ -0,0 +1,3 @@
+SparkR_2.3.0.tar.gz: 69AEFA33 5D355D4C 264D38E2 24F08BE0 7B99CA07 4E2BF424
+ 4F6F0F8A 7BE0ADF1 E279C512 E447C29B E1C697DB 24ADF0BF
+ 92936EF2 8DC1803B 6DC25C0A 1FB3ED71

Added: dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc3-bin/pyspark-2.3.0.tar.gz.asc Tue Feb 13 00:54:47 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlqCNbUACgkQ3OS/2AdG
+Hpb6rhAAvlEn/1aWZBuVqIaunIYTLy+jJqYFw4GrYc/cpJZISuiBC9cCXudUjn4x
+04Rh5/EqlL/hQe7OBjHR0OFFZXVnHYAG+vzRngWO6oi6kzR5Qyo0Ls9mVrj8JDYh
+w4nXJjt6pfYg76hnHViKiwkvCAHQlIHYhgkDByD6AUr+IUuWP/bifJIbXsMKWSBG
+MXm+sZ7EZJiw+b8xDbVSFtX5m

[2/2] spark git commit: Preparing development version 2.3.1-SNAPSHOT

2018-02-12 Thread sameerag
Preparing development version 2.3.1-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/70be6038
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/70be6038
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/70be6038

Branch: refs/heads/branch-2.3
Commit: 70be6038df38d5e80af8565120eedd8242c5a7c5
Parents: 89f6fcb
Author: Sameer Agarwal 
Authored: Mon Feb 12 11:08:34 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Feb 12 11:08:34 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/70be6038/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6d46c31..29a8a00 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.0
+Version: 2.3.1
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/70be6038/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 2ca9ab6..5c5a8e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/70be6038/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 404c744..2a625da 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/70be6038/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 3c0b528..adb1890 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/70be6038/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index fe3bcfd..4cdcfa2 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 

[1/2] spark git commit: Preparing Spark release v2.3.0-rc3

2018-02-12 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 d31c4ae7b -> 70be6038d


Preparing Spark release v2.3.0-rc3


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/89f6fcba
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/89f6fcba
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/89f6fcba

Branch: refs/heads/branch-2.3
Commit: 89f6fcbafcfb0a7aeb897fba6036cb085bd35121
Parents: d31c4ae
Author: Sameer Agarwal 
Authored: Mon Feb 12 11:08:28 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Feb 12 11:08:28 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/89f6fcba/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 29a8a00..6d46c31 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.1
+Version: 2.3.0
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/89f6fcba/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 5c5a8e9..2ca9ab6 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/89f6fcba/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 2a625da..404c744 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/89f6fcba/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index adb1890..3c0b528 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/89f6fcba/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 4cdcfa2..fe3bcfd 100644
--- 

[spark] Git Push Summary

2018-02-12 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0-rc3 [created] 89f6fcbaf

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23390][SQL] Flaky Test Suite: FileBasedDataSourceSuite in Spark 2.3/hadoop 2.7

2018-02-11 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 7e2a2b33c -> 79e8650cc


[SPARK-23390][SQL] Flaky Test Suite: FileBasedDataSourceSuite in Spark 2.3/hadoop 2.7

## What changes were proposed in this pull request?

This test only fails with sbt on Hadoop 2.7, and I can't reproduce it locally, but here is my speculation from looking at the code:
1. FileSystem.delete doesn't delete the directory entirely, so somehow the file can still be opened as a 0-length empty file (just speculation).
2. ORC intentionally allows empty files, and the reader fails during reading without closing the file stream.

This PR improves the test to make sure all files are deleted and can't be opened.
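
As a standalone sketch of that verification pattern (assuming a Hadoop `FileSystem`; the directory path and setup here are illustrative, not taken from the patch):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Sketch: delete every data file individually, then prove each one can
// no longer be opened. The path below is a hypothetical example.
val dir = new Path("/tmp/spark-test/third")
val fs = dir.getFileSystem(new Configuration())
val files = fs.listStatus(dir).filter(_.isFile).map(_.getPath)
files.foreach(f => fs.delete(f, false))   // delete each data file first
assert(fs.delete(dir, true))              // then the directory itself
files.foreach { f =>
  try {
    fs.open(f)
    sys.error(s"$f should no longer be readable")
  } catch {
    case _: java.io.FileNotFoundException => () // expected
  }
}
```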

## How was this patch tested?

N/A

Author: Wenchen Fan 

Closes #20584 from cloud-fan/flaky-test.

(cherry picked from commit 6efd5d117e98074d1b16a5c991fbd38df9aa196e)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/79e8650c
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/79e8650c
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/79e8650c

Branch: refs/heads/branch-2.3
Commit: 79e8650cccb00c7886efba6dd691d9733084cb81
Parents: 7e2a2b3
Author: Wenchen Fan 
Authored: Sun Feb 11 23:46:23 2018 -0800
Committer: Sameer Agarwal 
Committed: Sun Feb 11 23:46:43 2018 -0800

--
 .../apache/spark/sql/FileBasedDataSourceSuite.scala   | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/79e8650c/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
index 640d6b1..2e33236 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
@@ -17,6 +17,8 @@
 
 package org.apache.spark.sql
 
+import java.io.FileNotFoundException
+
 import org.apache.hadoop.fs.Path
 
 import org.apache.spark.SparkException
@@ -102,17 +104,27 @@ class FileBasedDataSourceSuite extends QueryTest with SharedSQLContext {
   def testIgnoreMissingFiles(): Unit = {
     withTempDir { dir =>
       val basePath = dir.getCanonicalPath
+
       Seq("0").toDF("a").write.format(format).save(new Path(basePath, "first").toString)
       Seq("1").toDF("a").write.format(format).save(new Path(basePath, "second").toString)
+
       val thirdPath = new Path(basePath, "third")
+      val fs = thirdPath.getFileSystem(spark.sparkContext.hadoopConfiguration)
       Seq("2").toDF("a").write.format(format).save(thirdPath.toString)
+      val files = fs.listStatus(thirdPath).filter(_.isFile).map(_.getPath)
+
       val df = spark.read.format(format).load(
         new Path(basePath, "first").toString,
         new Path(basePath, "second").toString,
         new Path(basePath, "third").toString)
 
-      val fs = thirdPath.getFileSystem(spark.sparkContext.hadoopConfiguration)
+      // Make sure all data files are deleted and can't be opened.
+      files.foreach(f => fs.delete(f, false))
       assert(fs.delete(thirdPath, true))
+      for (f <- files) {
+        intercept[FileNotFoundException](fs.open(f))
+      }
+
       checkAnswer(df, Seq(Row("0"), Row("1")))
     }
   }


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23390][SQL] Flaky Test Suite: FileBasedDataSourceSuite in Spark 2.3/hadoop 2.7

2018-02-11 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master c0c902aed -> 6efd5d117


[SPARK-23390][SQL] Flaky Test Suite: FileBasedDataSourceSuite in Spark 2.3/hadoop 2.7

## What changes were proposed in this pull request?

This test only fails with sbt on Hadoop 2.7, and I can't reproduce it locally, but here is my speculation from looking at the code:
1. FileSystem.delete doesn't delete the directory entirely, so somehow the file can still be opened as a 0-length empty file (just speculation).
2. ORC intentionally allows empty files, and the reader fails during reading without closing the file stream.

This PR improves the test to make sure all files are deleted and can't be opened.

## How was this patch tested?

N/A

Author: Wenchen Fan 

Closes #20584 from cloud-fan/flaky-test.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6efd5d11
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6efd5d11
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6efd5d11

Branch: refs/heads/master
Commit: 6efd5d117e98074d1b16a5c991fbd38df9aa196e
Parents: c0c902a
Author: Wenchen Fan 
Authored: Sun Feb 11 23:46:23 2018 -0800
Committer: Sameer Agarwal 
Committed: Sun Feb 11 23:46:23 2018 -0800

--
 .../apache/spark/sql/FileBasedDataSourceSuite.scala   | 14 +-
 1 file changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/6efd5d11/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
--
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
index 640d6b1..2e33236 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala
@@ -17,6 +17,8 @@
 
 package org.apache.spark.sql
 
+import java.io.FileNotFoundException
+
 import org.apache.hadoop.fs.Path
 
 import org.apache.spark.SparkException
@@ -102,17 +104,27 @@ class FileBasedDataSourceSuite extends QueryTest with SharedSQLContext {
   def testIgnoreMissingFiles(): Unit = {
     withTempDir { dir =>
       val basePath = dir.getCanonicalPath
+
       Seq("0").toDF("a").write.format(format).save(new Path(basePath, "first").toString)
       Seq("1").toDF("a").write.format(format).save(new Path(basePath, "second").toString)
+
       val thirdPath = new Path(basePath, "third")
+      val fs = thirdPath.getFileSystem(spark.sparkContext.hadoopConfiguration)
       Seq("2").toDF("a").write.format(format).save(thirdPath.toString)
+      val files = fs.listStatus(thirdPath).filter(_.isFile).map(_.getPath)
+
       val df = spark.read.format(format).load(
         new Path(basePath, "first").toString,
         new Path(basePath, "second").toString,
         new Path(basePath, "third").toString)
 
-      val fs = thirdPath.getFileSystem(spark.sparkContext.hadoopConfiguration)
+      // Make sure all data files are deleted and can't be opened.
+      files.foreach(f => fs.delete(f, false))
       assert(fs.delete(thirdPath, true))
+      for (f <- files) {
+        intercept[FileNotFoundException](fs.open(f))
+      }
+
       checkAnswer(df, Seq(Row("0"), Row("1")))
     }
   }


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23310][CORE] Turn off read ahead input stream for unsafe shuffle reader

2018-02-05 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 e688ffee2 -> 173449c2b


[SPARK-23310][CORE] Turn off read ahead input stream for unsafe shuffle reader

To fix a performance regression for TPC-DS queries

Author: Sital Kedia 

Closes #20492 from sitalkedia/turn_off_async_inputstream.
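
For anyone who wants the old behavior back, the flag can still be flipped explicitly. A minimal sketch (only the config key `spark.unsafe.sorter.spill.read.ahead.enabled` comes from the diff below; the session setup is illustrative):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: opt back into the read-ahead spill input stream, whose
// default this commit flips to false. Everything except the config
// key is an assumed, minimal setup.
val spark = SparkSession.builder()
  .appName("readahead-opt-in")
  .config("spark.unsafe.sorter.spill.read.ahead.enabled", "true")
  .getOrCreate()
```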

(cherry picked from commit 03b7e120dd7ff7848c936c7a23644da5bd7219ab)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/173449c2
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/173449c2
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/173449c2

Branch: refs/heads/branch-2.3
Commit: 173449c2bd454a87487f8eebf7d74ee6ed505290
Parents: e688ffe
Author: Sital Kedia 
Authored: Mon Feb 5 10:19:18 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Feb 5 10:20:02 2018 -0800

--
 .../util/collection/unsafe/sort/UnsafeSorterSpillReader.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/173449c2/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
--
diff --git a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
index e2f48e5..71e7c7a 100644
--- a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
+++ b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
@@ -76,8 +76,10 @@ public final class UnsafeSorterSpillReader extends UnsafeSorterIterator implemen
         SparkEnv.get() == null ? 0.5 :
             SparkEnv.get().conf().getDouble("spark.unsafe.sorter.spill.read.ahead.fraction", 0.5);
 
+    // SPARK-23310: Disable read-ahead input stream, because it is causing lock contention and perf regression for
+    // TPC-DS queries.
     final boolean readAheadEnabled = SparkEnv.get() != null &&
-        SparkEnv.get().conf().getBoolean("spark.unsafe.sorter.spill.read.ahead.enabled", true);
+        SparkEnv.get().conf().getBoolean("spark.unsafe.sorter.spill.read.ahead.enabled", false);
 
     final InputStream bs =
         new NioBufferedFileInputStream(file, (int) bufferSizeBytes);


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23310][CORE] Turn off read ahead input stream for unsafe shuffle reader

2018-02-05 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master a6bf3db20 -> 03b7e120d


[SPARK-23310][CORE] Turn off read ahead input stream for unsafe shuffle reader

To fix a performance regression for TPC-DS queries

Author: Sital Kedia 

Closes #20492 from sitalkedia/turn_off_async_inputstream.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/03b7e120
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/03b7e120
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/03b7e120

Branch: refs/heads/master
Commit: 03b7e120dd7ff7848c936c7a23644da5bd7219ab
Parents: a6bf3db
Author: Sital Kedia 
Authored: Mon Feb 5 10:19:18 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Feb 5 10:19:18 2018 -0800

--
 .../util/collection/unsafe/sort/UnsafeSorterSpillReader.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/03b7e120/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
--
diff --git a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
index e2f48e5..71e7c7a 100644
--- a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
+++ b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
@@ -76,8 +76,10 @@ public final class UnsafeSorterSpillReader extends UnsafeSorterIterator implemen
         SparkEnv.get() == null ? 0.5 :
             SparkEnv.get().conf().getDouble("spark.unsafe.sorter.spill.read.ahead.fraction", 0.5);
 
+    // SPARK-23310: Disable read-ahead input stream, because it is causing lock contention and perf regression for
+    // TPC-DS queries.
     final boolean readAheadEnabled = SparkEnv.get() != null &&
-        SparkEnv.get().conf().getBoolean("spark.unsafe.sorter.spill.read.ahead.enabled", true);
+        SparkEnv.get().conf().getBoolean("spark.unsafe.sorter.spill.read.ahead.enabled", false);
 
     final InputStream bs =
         new NioBufferedFileInputStream(file, (int) bufferSizeBytes);


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher in Spark 2.3

2018-01-28 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 588b9694c -> 5dda5db12


[SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher in Spark 2.3


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/5dda5db1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/5dda5db1
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/5dda5db1

Branch: refs/heads/branch-2.3
Commit: 5dda5db1229a20d7e3b0caab144af16da0787d56
Parents: 588b969
Author: Sameer Agarwal 
Authored: Sun Jan 28 23:13:30 2018 -0800
Committer: Sameer Agarwal 
Committed: Sun Jan 28 23:16:53 2018 -0800

--
 .../test/java/org/apache/spark/launcher/SparkLauncherSuite.java  | 4 +++-
 .../org/apache/spark/sql/execution/UnsafeExternalRowSorter.java  | 1 -
 2 files changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/5dda5db1/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index 1543f4f..4c85a8b 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -25,6 +25,7 @@ import java.util.List;
 import java.util.Map;
 import java.util.Properties;
 
+import org.junit.Ignore;
 import org.junit.Test;
 import static org.junit.Assert.*;
 import static org.junit.Assume.*;
@@ -121,7 +122,8 @@ public class SparkLauncherSuite extends BaseSuite {
     assertEquals(0, app.waitFor());
   }
 
-  @Test
+  // TODO: [SPARK-23020] Re-enable this
+  @Ignore
   public void testInProcessLauncher() throws Exception {
     // Because this test runs SparkLauncher in process and in client mode, it pollutes the system
     // properties, and that can cause test failures down the test pipeline. So restore the original

http://git-wip-us.apache.org/repos/asf/spark/blob/5dda5db1/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
--
diff --git a/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java b/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
index 78647b5..1b2f5ee 100644
--- a/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
+++ b/sql/catalyst/src/main/java/org/apache/spark/sql/execution/UnsafeExternalRowSorter.java
@@ -20,7 +20,6 @@ package org.apache.spark.sql.execution;
 import java.io.IOException;
 import java.util.function.Supplier;
 
-import org.apache.spark.sql.catalyst.util.TypeUtils;
 import scala.collection.AbstractIterator;
 import scala.collection.Iterator;
 import scala.math.Ordering;


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23207][SQL] Shuffle+Repartition on a DataFrame could lead to incorrect answers

2018-01-26 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 f5911d489 -> 30d16e116


[SPARK-23207][SQL] Shuffle+Repartition on a DataFrame could lead to incorrect answers

## What changes were proposed in this pull request?

Currently shuffle repartition uses RoundRobinPartitioning; the generated result is nondeterministic because the sequence of input rows is not determined.

The bug can be triggered when a repartition call follows a shuffle (which would lead to non-deterministic row ordering), as the pattern below shows:
upstream stage -> repartition stage -> result stage
(-> indicates a shuffle)
When one of the executor processes goes down, some tasks on the repartition stage will be retried and generate an inconsistent ordering, and some tasks of the result stage will then be retried, generating different data.

The following code returns 931532 instead of 1000000:
```
import scala.sys.process._

import org.apache.spark.TaskContext
val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  x
}.repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
throw new Exception("pkill -f java".!!)
  }
  x
}
res.distinct().count()
```

In this PR, we propose the most straightforward way to fix this problem: performing a local sort before partitioning. Once we make the input row ordering deterministic, the function from rows to partitions is fully deterministic too.

The downside of the approach is that, with the extra local sort inserted, the performance of repartition() will go down, so we add a new config named `spark.sql.execution.sortBeforeRepartition` to control whether this patch is applied. The patch is enabled by default to be safe, but users may choose to turn it off manually to avoid the performance regression.
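
A quick sketch of opting out (only the config key above is from this patch; the `spark` session is assumed to exist, as in the snippets below):

```scala
// Sketch: trade the restored determinism for repartition() performance by
// disabling the local sort that this patch inserts before round-robin
// partitioning.
spark.conf.set("spark.sql.execution.sortBeforeRepartition", "false")
```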

This patch also changes the output row ordering of repartition(), which causes a number of test case failures because they compare results directly.

## How was this patch tested?

Add unit test in ExchangeSuite.

With this patch (and `spark.sql.execution.sortBeforeRepartition` set to true), the following query returns 1000000:
```
import scala.sys.process._

import org.apache.spark.TaskContext

spark.conf.set("spark.sql.execution.sortBeforeRepartition", "true")

val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  x
}.repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
throw new Exception("pkill -f java".!!)
  }
  x
}
res.distinct().count()

res7: Long = 1000000
```

Author: Xingbo Jiang 

Closes #20393 from jiangxb1987/shuffle-repartition.

(cherry picked from commit 94c67a76ec1fda908a671a47a2a1fa63b3ab1b06)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/30d16e11
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/30d16e11
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/30d16e11

Branch: refs/heads/branch-2.3
Commit: 30d16e116b0ff044ca03974de0f1faf17e497903
Parents: f5911d4
Author: Xingbo Jiang 
Authored: Fri Jan 26 15:01:03 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 26 15:01:15 2018 -0800

--
 .../unsafe/sort/RecordComparator.java   |  4 +-
 .../unsafe/sort/UnsafeInMemorySorter.java   |  7 +-
 .../unsafe/sort/UnsafeSorterSpillMerger.java|  4 +-
 .../main/scala/org/apache/spark/rdd/RDD.scala   |  2 +
 .../unsafe/sort/UnsafeExternalSorterSuite.java  |  4 +-
 .../unsafe/sort/UnsafeInMemorySorterSuite.java  |  8 ++-
 .../apache/spark/ml/feature/Word2VecSuite.scala |  3 +-
 .../sql/execution/RecordBinaryComparator.java   | 70 
 .../sql/execution/UnsafeExternalRowSorter.java  | 44 ++--
 .../org/apache/spark/sql/internal/SQLConf.scala | 14 
 .../sql/execution/UnsafeKVExternalSorter.java   |  8 ++-
 .../apache/spark/sql/execution/SortExec.scala   |  2 +-
 .../exchange/ShuffleExchangeExec.scala  | 52 ++-
 .../spark/sql/execution/ExchangeSuite.scala | 26 +++-
 .../datasources/parquet/ParquetIOSuite.scala|  6 +-
 .../datasources/text/WholeTextFileSuite.scala   |  2 +-
 .../execution/streaming/ForeachSinkSuite.scala  |  6 +-
 17 files changed, 233 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/30d16e11/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java
--
diff --git a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java
index 

spark git commit: [SPARK-23207][SQL] Shuffle+Repartition on a DataFrame could lead to incorrect answers

2018-01-26 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master a8a3e9b7c -> 94c67a76e


[SPARK-23207][SQL] Shuffle+Repartition on a DataFrame could lead to incorrect answers

## What changes were proposed in this pull request?

Currently shuffle repartition uses RoundRobinPartitioning; the generated result is nondeterministic because the sequence of input rows is not determined.

The bug can be triggered when a repartition call follows a shuffle (which would lead to non-deterministic row ordering), as the pattern below shows:
upstream stage -> repartition stage -> result stage
(-> indicates a shuffle)
When one of the executor processes goes down, some tasks on the repartition stage will be retried and generate an inconsistent ordering, and some tasks of the result stage will then be retried, generating different data.

The following code returns 931532 instead of 1000000:
```
import scala.sys.process._

import org.apache.spark.TaskContext
val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  x
}.repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
throw new Exception("pkill -f java".!!)
  }
  x
}
res.distinct().count()
```

In this PR, we propose the most straightforward way to fix this problem: performing a local sort before partitioning. Once we make the input row ordering deterministic, the function from rows to partitions is fully deterministic too.

The downside of the approach is that, with the extra local sort inserted, the performance of repartition() will go down, so we add a new config named `spark.sql.execution.sortBeforeRepartition` to control whether this patch is applied. The patch is enabled by default to be safe, but users may choose to turn it off manually to avoid the performance regression.

This patch also changes the output row ordering of repartition(), which causes a number of test case failures because they compare results directly.

## How was this patch tested?

Add unit test in ExchangeSuite.

With this patch (and `spark.sql.execution.sortBeforeRepartition` set to true), the following query returns 1000000:
```
import scala.sys.process._

import org.apache.spark.TaskContext

spark.conf.set("spark.sql.execution.sortBeforeRepartition", "true")

val res = spark.range(0, 1000 * 1000, 1).repartition(200).map { x =>
  x
}.repartition(200).map { x =>
  if (TaskContext.get.attemptNumber == 0 && TaskContext.get.partitionId < 2) {
throw new Exception("pkill -f java".!!)
  }
  x
}
res.distinct().count()

res7: Long = 1000000
```

Author: Xingbo Jiang 

Closes #20393 from jiangxb1987/shuffle-repartition.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/94c67a76
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/94c67a76
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/94c67a76

Branch: refs/heads/master
Commit: 94c67a76ec1fda908a671a47a2a1fa63b3ab1b06
Parents: a8a3e9b
Author: Xingbo Jiang 
Authored: Fri Jan 26 15:01:03 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 26 15:01:03 2018 -0800

--
 .../unsafe/sort/RecordComparator.java   |  4 +-
 .../unsafe/sort/UnsafeInMemorySorter.java   |  7 +-
 .../unsafe/sort/UnsafeSorterSpillMerger.java|  4 +-
 .../main/scala/org/apache/spark/rdd/RDD.scala   |  2 +
 .../unsafe/sort/UnsafeExternalSorterSuite.java  |  4 +-
 .../unsafe/sort/UnsafeInMemorySorterSuite.java  |  8 ++-
 .../apache/spark/ml/feature/Word2VecSuite.scala |  3 +-
 .../sql/execution/RecordBinaryComparator.java   | 70 
 .../sql/execution/UnsafeExternalRowSorter.java  | 44 ++--
 .../org/apache/spark/sql/internal/SQLConf.scala | 14 
 .../sql/execution/UnsafeKVExternalSorter.java   |  8 ++-
 .../apache/spark/sql/execution/SortExec.scala   |  2 +-
 .../exchange/ShuffleExchangeExec.scala  | 52 ++-
 .../spark/sql/execution/ExchangeSuite.scala | 26 +++-
 .../datasources/parquet/ParquetIOSuite.scala|  6 +-
 .../datasources/text/WholeTextFileSuite.scala   |  2 +-
 .../execution/streaming/ForeachSinkSuite.scala  |  6 +-
 17 files changed, 233 insertions(+), 29 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/94c67a76/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java
--
diff --git a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java b/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java
index 09e4258..02b5de8 100644
--- a/core/src/main/java/org/apache/spark/util/collection/unsafe/sort/RecordComparator.java
+++ 

svn commit: r24365 - /dev/spark/KEYS

2018-01-22 Thread sameerag
Author: sameerag
Date: Mon Jan 22 21:25:48 2018
New Revision: 24365

Log:
Update KEYS

Modified:
dev/spark/KEYS

Modified: dev/spark/KEYS
==
--- dev/spark/KEYS (original)
+++ dev/spark/KEYS Mon Jan 22 21:25:48 2018
@@ -403,40 +403,61 @@ dcqbOYBLINwxIMZA6N9qCGrST4DfqbAzGSvZ08oe
 =et2/
 -END PGP PUBLIC KEY BLOCK-
 
-pub   rsa2048/A1CEDBA8AD0C022A 2018-01-11 [SC]
-  FA757B8D64ABBC21FC02BC1CA1CEDBA8AD0C022A
-uid [ultimate] Sameer Agarwal <samee...@apache.org>
-sub   rsa2048/5B0E7FAD797FCBE2 2018-01-11 [E]
+pub   rsa4096 2018-01-17 [SC]
+  F2C64242EC1BEC69EA8FBE35DCE4BFD807461E96
+uid   [ultimate] Sameer Agarwal (CODE SIGNING KEY) <samee...@apache.org>
+sub   rsa4096 2018-01-17 [E]
 
 -BEGIN PGP PUBLIC KEY BLOCK-
 
-mQENBFpX9XgBCADGZb9Jywy7gJuoyzX3+8JA7kPnc6Ah/mTbCemzkq+NkrMQ+eXP
-D6IyHH+ktCp8rG0KEZph3BwQ9m/9YpvGpyUjEAl7miWvnYQCoBfhoMdoM+/9R77G
-yaUgV1z85n0rI7+EUmstitb1Q1qu6FJgO0r/YOBImEqD0VID+vuDVEmjg9DPX2K/
-fADhKHvQDbR5car8Oh9lXEdxn6oRdQif9spkX26P75Oa7oLbK5s1PQm/z2Wn0q6/
-9tsh+HNCKU4oNTboTXiuNEI4S3ypjb5zsSL2PMmxw+eSV859lBuL/THRN1xe3+3h
-EK6Ma3UThtNcHpOHx+YJmiWahic9NHvO58jHABEBAAG0JFNhbWVlciBBZ2Fyd2Fs
-IDxzYW1lZXJhZ0BhcGFjaGUub3JnPokBTgQTAQgAOBYhBPp1e41kq7wh/AK8HKHO
-26itDAIqBQJaV/V4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEKHO26it
-DAIqIZYH/AoMHZ27lfK1XfQqEujmz5KSWsSVImgMh/t7F61D9sIvnoiMkrhP9/RG
-R/LJA8bIEIBR906Lto4fcuDboUhNYlGpOsJGSTQeEnGpuonNzNpOssFXYfxrGSRe
-M062/9GwvOer7MthhLbNYSzah6lYnijHe67a5woL3mLEnJj0a8vc0DH0jxpe0d/8
-f0VVQnWe+oZOiFx/Gp+RLfqtnMQ+FrPlGu7WFDseXd9NtMzEVQpoQoBbJ29nBvAU
-4AXjuBZa0dR7cZr4u8C+QMkJOBPEQcyBHYv0/MOT3ggABuLTSdJcGsj7NdCxkSZ2
-NTjjgi+OzLqsdU4srniy8vVDuaIqBhi5AQ0EWlf1eAEIAMk/n66XAoetLEyBHOO7
-wZJNnnCssuGOFh4+xLelOeB4Tx4fKeU9wWGUPaqHbyQJbYxEmVPH0Rq/VTfRYgGl
-XuJXgi7f0A/Q0bhxc5A3DRMl5ifnT6Ame9yOUq9BFoH/VG7qO/GVQ7yRrp+cmj5h
-kTSMUxYrzvHWzozxj9/P1bE5EGGsDjaHkA9t3RuzzV/mKjwpyCep72IxMbmRMfPM
-vD/KaKfNryvyEBmqQpdvJXXremfs3warmvhkYnSpkIeUrRjt32jMO4MHzzC74w+J
-/Cn4+0A/YuvFfU0YnjySRNMqpgT2EFA802QI+Mwj2D6fat8oKhnVvBAY+wHal1c2
-m/UAEQEAAYkBNgQYAQgAIBYhBPp1e41kq7wh/AK8HKHO26itDAIqBQJaV/V4AhsM
-AAoJEKHO26itDAIqMi4IAJ1dyai2f03R1AgzI+W5enp8989vf5KVxwDPv4tJX87o
-sAOSNYmPRXBbj2Hr2N+A+656vx3KkIIozuwuVSDbVDdDnxS6dUqvmA07qtKRXWEO
-da8taStwiaetbCJQkLOr1kyrL6XgL+t5E1jMcDmZxF2Owu4NSaEVERtkovY89V4m
-Ku0fEiDWr/6SWUcPnyPGpwZKccShDGl8JuwM/uRO5HKLeAJp93poqWeOtnpw1Xpw
-RiLNdJXDBol1/+xtV2O3CzX0i4o6Z/hhderuJc/v57LlP/PnOVkGG4/mZA8G/kSC
-jUFFi/fz1oSCMpcpdSOAhCs4oRFv2POgXTCLkpOJNSU=
-=Oc/a
+mQINBFpftRMBEADEsiDSnSg7EBdFoWdRhVrjePjsYyEq4Sxt61vkkwhrH/pZ8r07
+4kVSZV0hdc+7PLa27X400re6OgULDtQ7c3F1hcrcl72VLNo7iE5FcQITSRvXXsf0
+Lb6eHmkUjCrZW8FF5WLdr/XA/aC2YpuXYszCWH3f7It9864M8OjzKznGfR/Q+9kd
+jq2l2d1gLhdMnBwOjxMlyDvU3N3wr1bGNf/s7QAltv5V3yNTPvH9I+iy9FbTuseE
+vnMo3KnopEivmF0yqz2qlN3joVg7yAcMPWG92lRQzkUAkrQXcPvcsEvu22kipcOQ
+SQQMcMQZFQh8E/dLzp4+DA2bRcshHnM5bWG9NZNMnXKRmcJrHmjJDstEN7LR+zwt
+cRj9d0RwCFtS7M9YUX4eCc9Dqgtgg31GVNUZdUcZ1/OHqv+NJUOSZipoKJmAfcBN
+OyEGhlWOGidd/3xJtK1GUtTd9iLqjcbcxHapeTOS3kNdXbAwuvX1ADkQ+CTYw5cd
+jx2CAEKsBCz1r++/sApRPLIWSRBaGoF2HgGv89/33R66EVSmNhGkS3g6W6ICqrdY
+vwhK92NJpapQFwhzk4U3ZrcRwXXktv7PlMFywuSXNbOT7XwkrGOUYqzzi7esV4uF
+TDllNmwuVG7q3K7cvGDn69mbgYH8vULzEfuZQYhT9zYPaRePKaILqWLf6wARAQAB
+tDdTYW1lZXIgQWdhcndhbCAoQ09ERSBTSUdOSU5HIEtFWSkgPHNhbWVlcmFnQGFw
+YWNoZS5vcmc+iQJOBBMBCAA4FiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlpftRMC
+GwMFCwkIBwIGFQoJCAsCBBYCAwECHgECF4AACgkQ3OS/2AdGHpYqtg/+IrcrH66c
+8A6+LurGr0ZDxQzI3Ka016UOkruLGI4oitqyzgJ/j6quGTxLNEcBToeh8IUqQDN0
+VriV9iPntIUarf9b6Yx6aCxSvBwls9k9PMZqWVu0oIAecWGvvniGooxJlrelpp0M
+PJaEPHswH80d8rBDGjktBOrQIq8bak7jLomsFK1zGH6pPkAL9GYo4XK2Ik5OiRs3
+H8bJA/FS4sx17GR0IBWumBvYXtHvAmvfwIEeGtcE+cPj/S438N+fwuXI82c6EGIH
+ubFM7uqylbZMlmDgdKkG6YmEQMqK0Ka84iLzUOzqFyOj/aTrKj9GKLc8bBVLU1DP
+/PfMmJQDiETJGwwcKhRm9tYYH1DiMhWp5j1jyhOKIEKGUVJ8IxgpAkFURyOQaA4e
+5rnPoC65Pp1JzTKXWqmjDm7MRgcP77WqWis7SDgMq56/tdCbjZ2WzyfBQCUlfKU3
+7Iax5qKtdoczZRYhdZGzT8d2pMvQVu9zGuwhiPU/nwFybY1haneZhWpXTKbJkNpc
+Gzi2gE7pqXasjA+fn40tuMa4WZlrlvNhTONatcfVuNv1hGS/G+UJjhJzOo40AX2w
+2TCmaj4jiwiqByc4QZKM/iGfVCN6GlOI3+1O1KzybqoQG2Tg/ug5unmAvc23ZYw7
+uu+BnBSTsCODqQG8fPRiDlYRdZtDyQQC8M25Ag0EWl+1EwEQAJ82cuI/R4StkgBX
+zn7loZmSRZUx08EgsB8vq0s1h8g/pLdBN1h22sj9dnfcW4tFUxIKiwpLK84/Rlj7
+o2W8ZynpaKzR6pelV6Cb3+SMgtWe6DQnKaBRKJ3hzdcdA7Fp6aIjuzMsakOEOx3V
+wmtHkCn5MgN/xQBAB3T65thTOFryYqcmEoKWkd5FegJwG4sjHCCARPjgv8ucY/Vs
+6lZ0cxOB6qMO0jxH+FSMCZ4xmy7gpvQSs7D0/aj73kJ0Xv1sPZYxacf+P9MnF8jr
+mI7jKODvtKNbffRzIK/c2YCcYHvb0PtkLN8hhsmtXcmm4ezQwqA1QZWJhtI7oiCX
+A7AYrDKqsLPY4sgzeIzVmz35P/Y0baFp6Qt2eiHQ58I3Eu2+PG6x897So5j6obKi
+FEfprFKOewjefPmt+yNxhXITXUAuw57uXR7PeIcIb6bynZjyUcK+Rr8+vfI1JPaS
+ZVFaUn6KNFueK/bxDo4dzHMdj4gF9kGE+hPNRGepO7ba90QeaZSA6Bk3EUhovu8H
+eMmN/ZsdgMwIHOO3JZ9aWV7wkak7df6qbNVGDhp/QycBAm6J/iG2xYfncYp9nyw8
+UAkrht5EMAdG14Qm3Vq9GGihUsthl2ehPeD37d2/pitTMfnf2Ac6TieHbye0JgL0
+wC3WvL7cLXGmvtIRfXzNd4oDmjGtABEBAAGJAjYEGAEIACAWIQTyxkJC7BvsaeqP
+vjXc5L/YB0YelgUCWl+1EwIbDAAKCRDc5L/YB0YelrVgEACjcrAN

svn commit: r24364 - in /dev/spark/v2.3.0-rc2-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2018-01-22 Thread sameerag
Author: sameerag
Date: Mon Jan 22 20:30:45 2018
New Revision: 24364

Log:
Apache Spark v2.3.0-rc2 docs


[This commit notification would consist of 1444 parts, 
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r24362 - /dev/spark/v2.3.0-rc2-bin/

2018-01-22 Thread sameerag
Author: sameerag
Date: Mon Jan 22 19:45:22 2018
New Revision: 24362

Log:
Apache Spark v2.3.0-rc2

Added:
dev/spark/v2.3.0-rc2-bin/
dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.6.tgz.asc
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.6.tgz.md5
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.6.tgz.sha512
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.7.tgz.md5
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-hadoop2.7.tgz.sha512
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-without-hadoop.tgz   (with props)
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-without-hadoop.tgz.asc
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-without-hadoop.tgz.md5
dev/spark/v2.3.0-rc2-bin/spark-2.3.0-bin-without-hadoop.tgz.sha512
dev/spark/v2.3.0-rc2-bin/spark-2.3.0.tgz   (with props)
dev/spark/v2.3.0-rc2-bin/spark-2.3.0.tgz.asc
dev/spark/v2.3.0-rc2-bin/spark-2.3.0.tgz.md5
dev/spark/v2.3.0-rc2-bin/spark-2.3.0.tgz.sha512
dev/spark/v2.3.0-rc2-bin/spark-parent_2.11.iml

Added: dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.asc Mon Jan 22 19:45:22 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlpmPsoACgkQ3OS/2AdG
+Hpb5gg/+P0jEiAZi7FRqfRiVW2O2qBe/Oj24CgwM3wbdxD9OMaywQkWmzAMaFSBJ
+Pqkam/lxL3oy1GE+bQI8gMkfZIwneJK6fJwyCo5zqqLwZO+eDCDc1BWqEYn2sAvR
+xVdOFE5RZ3qahOjH1JPnIsrUQT3aWfVBMMWTJLm+cEUhQ4yTmiABH2nqlqiFdRM4
+Cvw6r7wRo/bvPhnyc9Ly+Cu0UnBZFdV/qHdNqaJD/CoJPpuPEyuEv4Y0QN42MgC4
+RUY3YwaRerBS3wxEbO+zUVgnWZR7KlBQZVy40YjzLRhIjgo4KkiqX6hWIaPL+TlU
+mTRWFvIQEZh/b7gZkCitLoO/t2iHvf2TvJqXFeWpieCDgXghmWdSVdg5UYREcxcY
+gY86E8qfyPxnKquJHlBu/qExESjEzrvfaPgZcY9aQFrLaS9zBzRIr51Evz6dBiD5
+0UcgiQW98cZgDJqgwMqfTNosYB9GEEWlB7llLROy/iWZ9JEpZYNYk52JQieW7gWM
+kUodYkoTOuquBE93TZiFRXEr9Er+ACofESh7kdm+MgPvFlLSYdCeaknf8+JB2Q+M
+aASarUslmgOehCGU5cqRgBXEdvm7PDuLyzNfYOT6onmbMCm6QU/wygCy3DQTR+cp
+75kTNlVqAISMQCC7S/3+8DSZhZffugnqnb6mmxa4uOqSsljczws=
+=Is9J
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.md5 (added)
+++ dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.md5 Mon Jan 22 19:45:22 2018
@@ -0,0 +1 @@
+SparkR_2.3.0.tar.gz: 58 7E C4 A4 7E 60 B1 AC  F1 FB 81 96 F7 7E BD A0

Added: dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.sha512
==
--- dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.sha512 (added)
+++ dev/spark/v2.3.0-rc2-bin/SparkR_2.3.0.tar.gz.sha512 Mon Jan 22 19:45:22 2018
@@ -0,0 +1,3 @@
+SparkR_2.3.0.tar.gz: 86A461C9 84324BB0 DC525774 2D4CCCB8 F0F16495 3C147E25
+ 3040DBE3 D2FFBE31 C1596FEB C1905139 92AAF623 C296E3DD
+ 7599955F DFC55EE1 BCF5691A 6FB02759

Added: dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc2-bin/pyspark-2.3.0.tar.gz.asc Mon Jan 22 19:45:22 2018
@@ -0,0 +1,16 @@
+-BEGIN PGP SIGNATURE-
+
+iQIzBAABCAAdFiEE8sZCQuwb7Gnqj7413OS/2AdGHpYFAlpmPkkACgkQ3OS/2AdG
+HpbGZBAAjfAgbQuI1ye/5BBDT5Zd65kT78FD4/E6l6Idu0r4DRVywrUyjp90Vc+3
++g9/cLDF5faWq23KyWSYpkO9rOL96sx0z65KV+spdaSRwNk7z4NOfyvzHyxzHSoy
+723l9coFwG5zD96PzmI2mTfOSrfrXyKs1nn/j8QBSDhkGxNhCEGMhUKYgYICJ34Q

[1/2] spark git commit: Preparing Spark release v2.3.0-rc2

2018-01-22 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 4e75b0cb4 -> 6facc7fb2


Preparing Spark release v2.3.0-rc2


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/489ecb0e
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/489ecb0e
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/489ecb0e

Branch: refs/heads/branch-2.3
Commit: 489ecb0ef23e5d9b705e5e5bae4fa3d871bdac91
Parents: 4e75b0c
Author: Sameer Agarwal 
Authored: Mon Jan 22 10:49:08 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Jan 22 10:49:08 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/489ecb0e/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 29a8a00..6d46c31 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.1
+Version: 2.3.0
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/489ecb0e/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 5c5a8e9..2ca9ab6 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/489ecb0e/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 2a625da..404c744 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/489ecb0e/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index adb1890..3c0b528 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.1-SNAPSHOT</version>
+    <version>2.3.0</version>
    <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/489ecb0e/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 4cdcfa2..fe3bcfd 100644
--- 

[spark] Git Push Summary

2018-01-22 Thread sameerag
Repository: spark
Updated Tags:  refs/tags/v2.3.0-rc2 [created] 489ecb0ef

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[2/2] spark git commit: Preparing development version 2.3.1-SNAPSHOT

2018-01-22 Thread sameerag
Preparing development version 2.3.1-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6facc7fb
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6facc7fb
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6facc7fb

Branch: refs/heads/branch-2.3
Commit: 6facc7fb2333cc61409149e2f896bf84dd085fa3
Parents: 489ecb0
Author: Sameer Agarwal 
Authored: Mon Jan 22 10:49:29 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Jan 22 10:49:29 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/6facc7fb/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6d46c31..29a8a00 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.0
+Version: 2.3.1
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/6facc7fb/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 2ca9ab6..5c5a8e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6facc7fb/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 404c744..2a625da 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6facc7fb/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 3c0b528..adb1890 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6facc7fb/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index fe3bcfd..4cdcfa2 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 

spark git commit: [SPARK-23135][UI] Fix rendering of accumulators in the stage page.

2018-01-19 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master aa3a1276f -> f6da41b01


[SPARK-23135][UI] Fix rendering of accumulators in the stage page.

This follows the behavior of 2.2: only named accumulators with a
value are rendered.

Screenshot:
![accs](https://user-images.githubusercontent.com/1694083/35065700-df409114-fb82-11e7-87c1-550c3f674371.png)

Author: Marcelo Vanzin 

Closes #20299 from vanzin/SPARK-23135.
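
A minimal standalone sketch of the guard this patch applies (simplified,
hypothetical types; the real logic is in the StagePage.scala diff below):

```scala
// Simplified stand-in for the UI's accumulator info.
case class AccInfo(name: String, value: String)

// Keep only named accumulators that actually carry a value, matching 2.2.
def renderable(accs: Seq[AccInfo]): Seq[AccInfo] =
  accs.filter(a => a.name != null && a.value != null)
```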


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f6da41b0
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f6da41b0
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f6da41b0

Branch: refs/heads/master
Commit: f6da41b0150725fe96ccb2ee3b48840b207f47eb
Parents: aa3a127
Author: Marcelo Vanzin 
Authored: Fri Jan 19 13:14:24 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 19 13:14:24 2018 -0800

--
 .../org/apache/spark/ui/jobs/StagePage.scala| 20 +++-
 1 file changed, 15 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f6da41b0/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
--
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala 
b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
index 25bee33..0eb3190 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
@@ -260,7 +260,11 @@ private[ui] class StagePage(parent: StagesTab, store: 
AppStatusStore) extends We
 
 val accumulableHeaders: Seq[String] = Seq("Accumulable", "Value")
 def accumulableRow(acc: AccumulableInfo): Seq[Node] = {
-      <tr><td>{acc.name}</td><td>{acc.value}</td></tr>
+      if (acc.name != null && acc.value != null) {
+        <tr><td>{acc.name}</td><td>{acc.value}</td></tr>
+      } else {
+        Nil
+      }
 }
 val accumulableTable = UIUtils.listingTable(
   accumulableHeaders,
@@ -864,7 +868,7 @@ private[ui] class TaskPagedTable(
 {formatBytes(task.taskMetrics.map(_.peakExecutionMemory))}
   
   {if (hasAccumulators(stage)) {
-        accumulatorsInfo(task)
+        <td>{accumulatorsInfo(task)}</td>
   }}
   {if (hasInput(stage)) {
 metricInfo(task) { m =>
@@ -920,8 +924,12 @@ private[ui] class TaskPagedTable(
   }
 
   private def accumulatorsInfo(task: TaskData): Seq[Node] = {
-    task.accumulatorUpdates.map { acc =>
-      Unparsed(StringEscapeUtils.escapeHtml4(s"${acc.name}: ${acc.update}"))
+    task.accumulatorUpdates.flatMap { acc =>
+      if (acc.name != null && acc.update.isDefined) {
+        Unparsed(StringEscapeUtils.escapeHtml4(s"${acc.name}: ${acc.update.get}")) ++ <br />
+      } else {
+        Nil
+      }
 }
   }
 
@@ -985,7 +993,9 @@ private object ApiHelper {
 "Shuffle Spill (Disk)" -> TaskIndexNames.DISK_SPILL,
 "Errors" -> TaskIndexNames.ERROR)
 
-  def hasAccumulators(stageData: StageData): Boolean = stageData.accumulatorUpdates.size > 0
+  def hasAccumulators(stageData: StageData): Boolean = {
+    stageData.accumulatorUpdates.exists { acc => acc.name != null && acc.value != null }
+  }
 
   def hasInput(stageData: StageData): Boolean = stageData.inputBytes > 0
 


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23135][UI] Fix rendering of accumulators in the stage page.

2018-01-19 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 d0cb19873 -> f9ad00a5a


[SPARK-23135][UI] Fix rendering of accumulators in the stage page.

This follows the behavior of 2.2: only named accumulators with a
value are rendered.

Screenshot:
![accs](https://user-images.githubusercontent.com/1694083/35065700-df409114-fb82-11e7-87c1-550c3f674371.png)

Author: Marcelo Vanzin 

Closes #20299 from vanzin/SPARK-23135.

(cherry picked from commit f6da41b0150725fe96ccb2ee3b48840b207f47eb)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f9ad00a5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f9ad00a5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f9ad00a5

Branch: refs/heads/branch-2.3
Commit: f9ad00a5aeeecf4b8d261a0dae6c8cb6be8daa67
Parents: d0cb198
Author: Marcelo Vanzin 
Authored: Fri Jan 19 13:14:24 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 19 13:14:38 2018 -0800

--
 .../org/apache/spark/ui/jobs/StagePage.scala| 20 +++-
 1 file changed, 15 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f9ad00a5/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
--
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala 
b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
index af78373..38f7b35 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala
@@ -260,7 +260,11 @@ private[ui] class StagePage(parent: StagesTab, store: 
AppStatusStore) extends We
 
 val accumulableHeaders: Seq[String] = Seq("Accumulable", "Value")
 def accumulableRow(acc: AccumulableInfo): Seq[Node] = {
-      <tr><td>{acc.name}</td><td>{acc.value}</td></tr>
+      if (acc.name != null && acc.value != null) {
+        <tr><td>{acc.name}</td><td>{acc.value}</td></tr>
+      } else {
+        Nil
+      }
 }
 val accumulableTable = UIUtils.listingTable(
   accumulableHeaders,
@@ -856,7 +860,7 @@ private[ui] class TaskPagedTable(
 {formatBytes(task.taskMetrics.map(_.peakExecutionMemory))}
   
   {if (hasAccumulators(stage)) {
-        accumulatorsInfo(task)
+        <td>{accumulatorsInfo(task)}</td>
   }}
   {if (hasInput(stage)) {
 metricInfo(task) { m =>
@@ -912,8 +916,12 @@ private[ui] class TaskPagedTable(
   }
 
   private def accumulatorsInfo(task: TaskData): Seq[Node] = {
-    task.accumulatorUpdates.map { acc =>
-      Unparsed(StringEscapeUtils.escapeHtml4(s"${acc.name}: ${acc.update}"))
+    task.accumulatorUpdates.flatMap { acc =>
+      if (acc.name != null && acc.update.isDefined) {
+        Unparsed(StringEscapeUtils.escapeHtml4(s"${acc.name}: ${acc.update.get}")) ++ <br />
+      } else {
+        Nil
+      }
 }
   }
 
@@ -977,7 +985,9 @@ private object ApiHelper {
 "Shuffle Spill (Disk)" -> TaskIndexNames.DISK_SPILL,
 "Errors" -> TaskIndexNames.ERROR)
 
-  def hasAccumulators(stageData: StageData): Boolean = stageData.accumulatorUpdates.size > 0
+  def hasAccumulators(stageData: StageData): Boolean = {
+    stageData.accumulatorUpdates.exists { acc => acc.name != null && acc.value != null }
+  }
 
   def hasInput(stageData: StageData): Boolean = stageData.inputBytes > 0
 


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [BUILD][MINOR] Fix java style check issues

2018-01-19 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 541dbc00b -> 54c1fae12


[BUILD][MINOR] Fix java style check issues

## What changes were proposed in this pull request?

This patch fixes a few recently introduced java style check errors in master 
and release branch.

As an aside, given that [java linting currently fails](https://github.com/apache/spark/pull/10763)
on machines with a clean maven cache, it'd be great to find another workaround to
[re-enable the java style checks](https://github.com/apache/spark/blob/3a07eff5af601511e97a05e6fea0e3d48f74c4f0/dev/run-tests.py#L577)
as part of Spark PRB.

/cc zsxwing JoshRosen srowen for any suggestions

## How was this patch tested?

Manual Check

Author: Sameer Agarwal 

Closes #20323 from sameeragarwal/java.

(cherry picked from commit 9c4b99861cda3f9ec44ca8c1adc81a293508190c)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/54c1fae1
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/54c1fae1
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/54c1fae1

Branch: refs/heads/branch-2.3
Commit: 54c1fae12df654c7713ac5e7eb4da7bb2f785401
Parents: 541dbc0
Author: Sameer Agarwal 
Authored: Fri Jan 19 01:38:08 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 19 01:38:20 2018 -0800

--
 .../apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java | 6 --
 .../org/apache/spark/sql/vectorized/ArrowColumnVector.java | 5 +++--
 .../org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java | 3 ++-
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/54c1fae1/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
--
diff --git 
a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
 
b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
index 317ac45..f1ef411 100644
--- 
a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
+++ 
b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
@@ -28,8 +28,10 @@ import org.apache.spark.sql.types.StructType;
 /**
  * A data source writer that is returned by
  * {@link WriteSupport#createWriter(String, StructType, SaveMode, 
DataSourceV2Options)}/
- * {@link 
org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(String,
 long, StructType, OutputMode, DataSourceV2Options)}/
- * {@link 
org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(String,
 StructType, OutputMode, DataSourceV2Options)}.
+ * {@link 
org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(
+ * String, long, StructType, OutputMode, DataSourceV2Options)}/
+ * {@link 
org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(
+ * String, StructType, OutputMode, DataSourceV2Options)}.
  * It can mix in various writing optimization interfaces to speed up the data 
saving. The actual
  * writing logic is delegated to {@link DataWriter}.
  *

http://git-wip-us.apache.org/repos/asf/spark/blob/54c1fae1/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
--
diff --git 
a/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java 
b/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
index eb69001..bfd1b4c 100644
--- 
a/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
+++ 
b/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
@@ -556,8 +556,9 @@ public final class ArrowColumnVector extends ColumnVector {
   /**
* Any call to "get" method will throw UnsupportedOperationException.
*
-   * Access struct values in a ArrowColumnVector doesn't use this accessor. 
Instead, it uses getStruct() method defined
-   * in the parent class. Any call to "get" method in this class is a bug in 
the code.
+   * Access struct values in a ArrowColumnVector doesn't use this accessor. 
Instead, it uses
+   * getStruct() method defined in the parent class. Any call to "get" method 
in this class is a
+   * bug in the code.
*
*/
   private static class StructAccessor extends ArrowVectorAccessor {

http://git-wip-us.apache.org/repos/asf/spark/blob/54c1fae1/sql/core/src/test/java/test/org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java
--
diff --git 

spark git commit: [BUILD][MINOR] Fix java style check issues

2018-01-19 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 568055da9 -> 9c4b99861


[BUILD][MINOR] Fix java style check issues

## What changes were proposed in this pull request?

This patch fixes a few recently introduced java style check errors in master 
and release branch.

As an aside, given that [java linting currently fails](https://github.com/apache/spark/pull/10763)
on machines with a clean maven cache, it'd be great to find another workaround to
[re-enable the java style checks](https://github.com/apache/spark/blob/3a07eff5af601511e97a05e6fea0e3d48f74c4f0/dev/run-tests.py#L577)
as part of Spark PRB.

/cc zsxwing JoshRosen srowen for any suggestions

## How was this patch tested?

Manual Check

Author: Sameer Agarwal 

Closes #20323 from sameeragarwal/java.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9c4b9986
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9c4b9986
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9c4b9986

Branch: refs/heads/master
Commit: 9c4b99861cda3f9ec44ca8c1adc81a293508190c
Parents: 568055d
Author: Sameer Agarwal 
Authored: Fri Jan 19 01:38:08 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 19 01:38:08 2018 -0800

--
 .../apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java | 6 --
 .../org/apache/spark/sql/vectorized/ArrowColumnVector.java | 5 +++--
 .../org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java | 3 ++-
 3 files changed, 9 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/9c4b9986/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
--
diff --git 
a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
 
b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
index 317ac45..f1ef411 100644
--- 
a/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
+++ 
b/sql/core/src/main/java/org/apache/spark/sql/sources/v2/writer/DataSourceV2Writer.java
@@ -28,8 +28,10 @@ import org.apache.spark.sql.types.StructType;
 /**
  * A data source writer that is returned by
  * {@link WriteSupport#createWriter(String, StructType, SaveMode, 
DataSourceV2Options)}/
- * {@link 
org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(String,
 long, StructType, OutputMode, DataSourceV2Options)}/
- * {@link 
org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(String,
 StructType, OutputMode, DataSourceV2Options)}.
+ * {@link 
org.apache.spark.sql.sources.v2.streaming.MicroBatchWriteSupport#createMicroBatchWriter(
+ * String, long, StructType, OutputMode, DataSourceV2Options)}/
+ * {@link 
org.apache.spark.sql.sources.v2.streaming.ContinuousWriteSupport#createContinuousWriter(
+ * String, StructType, OutputMode, DataSourceV2Options)}.
  * It can mix in various writing optimization interfaces to speed up the data 
saving. The actual
  * writing logic is delegated to {@link DataWriter}.
  *

http://git-wip-us.apache.org/repos/asf/spark/blob/9c4b9986/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
--
diff --git 
a/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java 
b/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
index eb69001..bfd1b4c 100644
--- 
a/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
+++ 
b/sql/core/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java
@@ -556,8 +556,9 @@ public final class ArrowColumnVector extends ColumnVector {
   /**
* Any call to "get" method will throw UnsupportedOperationException.
*
-   * Access struct values in a ArrowColumnVector doesn't use this accessor. 
Instead, it uses getStruct() method defined
-   * in the parent class. Any call to "get" method in this class is a bug in 
the code.
+   * Access struct values in a ArrowColumnVector doesn't use this accessor. 
Instead, it uses
+   * getStruct() method defined in the parent class. Any call to "get" method 
in this class is a
+   * bug in the code.
*
*/
   private static class StructAccessor extends ArrowVectorAccessor {

http://git-wip-us.apache.org/repos/asf/spark/blob/9c4b9986/sql/core/src/test/java/test/org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java
--
diff --git 
a/sql/core/src/test/java/test/org/apache/spark/sql/sources/v2/JavaBatchDataSourceV2.java
 

spark git commit: [SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher

2018-01-17 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 aae73a21a -> 1a6dfaf25


[SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher

## What changes were proposed in this pull request?

Temporarily ignoring flaky test `SparkLauncherSuite.testInProcessLauncher` to 
de-flake the builds. This should be re-enabled when SPARK-23020 is merged.

## How was this patch tested?

N/A (Test Only Change)
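
For reference, a hedged sketch of the same temporary-disable pattern in a
hypothetical Scala JUnit4 suite (illustrative only, not part of this patch):

```scala
import org.junit.{Ignore, Test}

class ExampleSuite {
  @Test
  def stableTest(): Unit = {
    // runs normally
  }

  // TODO: re-enable once the underlying flakiness is fixed
  @Ignore
  @Test
  def flakyTest(): Unit = {
    // the JUnit runner reports this as skipped instead of executing it
  }
}
```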

Author: Sameer Agarwal 

Closes #20291 from sameeragarwal/disable-test-2.

(cherry picked from commit c132538a164cd8b55dbd7e8ffdc0c0782a0b588c)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1a6dfaf2
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1a6dfaf2
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1a6dfaf2

Branch: refs/heads/branch-2.3
Commit: 1a6dfaf25f507545debdf4cb1d427b9cc78c3cc8
Parents: aae73a2
Author: Sameer Agarwal 
Authored: Wed Jan 17 09:27:49 2018 -0800
Committer: Sameer Agarwal 
Committed: Wed Jan 17 09:28:02 2018 -0800

--
 .../test/java/org/apache/spark/launcher/SparkLauncherSuite.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/1a6dfaf2/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index 9d2f563..dffa609 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -25,6 +25,7 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.concurrent.TimeUnit;
 
+import org.junit.Ignore;
 import org.junit.Test;
 import static org.junit.Assert.*;
 import static org.junit.Assume.*;
@@ -120,7 +121,8 @@ public class SparkLauncherSuite extends BaseSuite {
 assertEquals(0, app.waitFor());
   }
 
-  @Test
+  // TODO: [SPARK-23020] Re-enable this
+  @Ignore
   public void testInProcessLauncher() throws Exception {
 // Because this test runs SparkLauncher in process and in client mode, it 
pollutes the system
 // properties, and that can cause test failures down the test pipeline. So 
restore the original


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: [SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher

2018-01-17 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 8598a982b -> c132538a1


[SPARK-23020] Ignore Flaky Test: SparkLauncherSuite.testInProcessLauncher

## What changes were proposed in this pull request?

Temporarily ignoring flaky test `SparkLauncherSuite.testInProcessLauncher` to 
de-flake the builds. This should be re-enabled when SPARK-23020 is merged.

## How was this patch tested?

N/A (Test Only Change)

Author: Sameer Agarwal 

Closes #20291 from sameeragarwal/disable-test-2.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/c132538a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/c132538a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/c132538a

Branch: refs/heads/master
Commit: c132538a164cd8b55dbd7e8ffdc0c0782a0b588c
Parents: 8598a98
Author: Sameer Agarwal 
Authored: Wed Jan 17 09:27:49 2018 -0800
Committer: Sameer Agarwal 
Committed: Wed Jan 17 09:27:49 2018 -0800

--
 .../test/java/org/apache/spark/launcher/SparkLauncherSuite.java  | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/c132538a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index 9d2f563..dffa609 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -25,6 +25,7 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.concurrent.TimeUnit;
 
+import org.junit.Ignore;
 import org.junit.Test;
 import static org.junit.Assert.*;
 import static org.junit.Assume.*;
@@ -120,7 +121,8 @@ public class SparkLauncherSuite extends BaseSuite {
 assertEquals(0, app.waitFor());
   }
 
-  @Test
+  // TODO: [SPARK-23020] Re-enable this
+  @Ignore
   public void testInProcessLauncher() throws Exception {
 // Because this test runs SparkLauncher in process and in client mode, it 
pollutes the system
 // properties, and that can cause test failures down the test pipeline. So 
restore the original


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



spark git commit: Revert "[SPARK-23020][CORE] Fix races in launcher code, test."

2018-01-16 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 0a441d2ed -> b9339eee1


Revert "[SPARK-23020][CORE] Fix races in launcher code, test."

This reverts commit 20c69816a63071b82b1035d4b48798c358206421.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/b9339eee
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/b9339eee
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/b9339eee

Branch: refs/heads/branch-2.3
Commit: b9339eee1304c0309be4ea74f8cdc3d37a8048d3
Parents: 0a441d2e
Author: Sameer Agarwal 
Authored: Tue Jan 16 22:17:37 2018 -0800
Committer: Sameer Agarwal 
Committed: Tue Jan 16 22:17:37 2018 -0800

--
 .../spark/launcher/SparkLauncherSuite.java  | 49 +++-
 .../spark/launcher/AbstractAppHandle.java   | 22 ++---
 .../spark/launcher/ChildProcAppHandle.java  | 18 ---
 .../spark/launcher/InProcessAppHandle.java  | 17 ---
 .../spark/launcher/LauncherConnection.java  | 14 +++---
 .../apache/spark/launcher/LauncherServer.java   | 46 +++---
 .../org/apache/spark/launcher/BaseSuite.java| 42 +++--
 .../spark/launcher/LauncherServerSuite.java | 20 +---
 8 files changed, 72 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/b9339eee/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index a042375..9d2f563 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -17,7 +17,6 @@
 
 package org.apache.spark.launcher;
 
-import java.time.Duration;
 import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -32,7 +31,6 @@ import static org.junit.Assume.*;
 import static org.mockito.Mockito.*;
 
 import org.apache.spark.SparkContext;
-import org.apache.spark.SparkContext$;
 import org.apache.spark.internal.config.package$;
 import org.apache.spark.util.Utils;
 
@@ -139,9 +137,7 @@ public class SparkLauncherSuite extends BaseSuite {
   // Here DAGScheduler is stopped, while SparkContext.clearActiveContext 
may not be called yet.
   // Wait for a reasonable amount of time to avoid creating two active 
SparkContext in JVM.
   // See SPARK-23019 and SparkContext.stop() for details.
-      eventually(Duration.ofSeconds(5), Duration.ofMillis(10), () -> {
-        assertTrue("SparkContext is still alive.", SparkContext$.MODULE$.getActive().isEmpty());
-      });
+  TimeUnit.MILLISECONDS.sleep(500);
 }
   }
 
@@ -150,35 +146,26 @@ public class SparkLauncherSuite extends BaseSuite {
 SparkAppHandle.Listener listener = mock(SparkAppHandle.Listener.class);
 doAnswer(invocation -> {
   SparkAppHandle h = (SparkAppHandle) invocation.getArguments()[0];
-  synchronized (transitions) {
-transitions.add(h.getState());
-  }
+  transitions.add(h.getState());
   return null;
 }).when(listener).stateChanged(any(SparkAppHandle.class));
 
-SparkAppHandle handle = null;
-try {
-  handle = new InProcessLauncher()
-.setMaster("local")
-.setAppResource(SparkLauncher.NO_RESOURCE)
-.setMainClass(InProcessTestApp.class.getName())
-.addAppArgs("hello")
-.startApplication(listener);
-
-  waitFor(handle);
-  assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
-
-  // Matches the behavior of LocalSchedulerBackend.
-  List expected = Arrays.asList(
-SparkAppHandle.State.CONNECTED,
-SparkAppHandle.State.RUNNING,
-SparkAppHandle.State.FINISHED);
-  assertEquals(expected, transitions);
-} finally {
-  if (handle != null) {
-handle.kill();
-  }
-}
+SparkAppHandle handle = new InProcessLauncher()
+  .setMaster("local")
+  .setAppResource(SparkLauncher.NO_RESOURCE)
+  .setMainClass(InProcessTestApp.class.getName())
+  .addAppArgs("hello")
+  .startApplication(listener);
+
+waitFor(handle);
+assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
+
+// Matches the behavior of LocalSchedulerBackend.
+List expected = Arrays.asList(
+  SparkAppHandle.State.CONNECTED,
+  SparkAppHandle.State.RUNNING,
+  SparkAppHandle.State.FINISHED);
+assertEquals(expected, transitions);
   }
 
   public static class SparkLauncherTestApp {

http://git-wip-us.apache.org/repos/asf/spark/blob/b9339eee/launcher/src/main/java/org/apache/spark/launcher/AbstractAppHandle.java

spark git commit: Revert "[SPARK-23020][CORE] Fix races in launcher code, test."

2018-01-16 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 166705785 -> 50345a2aa


Revert "[SPARK-23020][CORE] Fix races in launcher code, test."

This reverts commit 66217dac4f8952a9923625908ad3dcb030763c81.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/50345a2a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/50345a2a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/50345a2a

Branch: refs/heads/master
Commit: 50345a2aa59741c511d555edbbad2da9611e7d16
Parents: 1667057
Author: Sameer Agarwal 
Authored: Tue Jan 16 22:14:47 2018 -0800
Committer: Sameer Agarwal 
Committed: Tue Jan 16 22:14:47 2018 -0800

--
 .../spark/launcher/SparkLauncherSuite.java  | 49 +++-
 .../spark/launcher/AbstractAppHandle.java   | 22 ++---
 .../spark/launcher/ChildProcAppHandle.java  | 18 ---
 .../spark/launcher/InProcessAppHandle.java  | 17 ---
 .../spark/launcher/LauncherConnection.java  | 14 +++---
 .../apache/spark/launcher/LauncherServer.java   | 46 +++---
 .../org/apache/spark/launcher/BaseSuite.java| 42 +++--
 .../spark/launcher/LauncherServerSuite.java | 20 +---
 8 files changed, 72 insertions(+), 156 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/50345a2a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index a042375..9d2f563 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -17,7 +17,6 @@
 
 package org.apache.spark.launcher;
 
-import java.time.Duration;
 import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -32,7 +31,6 @@ import static org.junit.Assume.*;
 import static org.mockito.Mockito.*;
 
 import org.apache.spark.SparkContext;
-import org.apache.spark.SparkContext$;
 import org.apache.spark.internal.config.package$;
 import org.apache.spark.util.Utils;
 
@@ -139,9 +137,7 @@ public class SparkLauncherSuite extends BaseSuite {
   // Here DAGScheduler is stopped, while SparkContext.clearActiveContext 
may not be called yet.
   // Wait for a reasonable amount of time to avoid creating two active 
SparkContext in JVM.
   // See SPARK-23019 and SparkContext.stop() for details.
-      eventually(Duration.ofSeconds(5), Duration.ofMillis(10), () -> {
-        assertTrue("SparkContext is still alive.", SparkContext$.MODULE$.getActive().isEmpty());
-      });
+  TimeUnit.MILLISECONDS.sleep(500);
 }
   }
 
@@ -150,35 +146,26 @@ public class SparkLauncherSuite extends BaseSuite {
 SparkAppHandle.Listener listener = mock(SparkAppHandle.Listener.class);
 doAnswer(invocation -> {
   SparkAppHandle h = (SparkAppHandle) invocation.getArguments()[0];
-  synchronized (transitions) {
-transitions.add(h.getState());
-  }
+  transitions.add(h.getState());
   return null;
 }).when(listener).stateChanged(any(SparkAppHandle.class));
 
-SparkAppHandle handle = null;
-try {
-  handle = new InProcessLauncher()
-.setMaster("local")
-.setAppResource(SparkLauncher.NO_RESOURCE)
-.setMainClass(InProcessTestApp.class.getName())
-.addAppArgs("hello")
-.startApplication(listener);
-
-  waitFor(handle);
-  assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
-
-  // Matches the behavior of LocalSchedulerBackend.
-  List expected = Arrays.asList(
-SparkAppHandle.State.CONNECTED,
-SparkAppHandle.State.RUNNING,
-SparkAppHandle.State.FINISHED);
-  assertEquals(expected, transitions);
-} finally {
-  if (handle != null) {
-handle.kill();
-  }
-}
+SparkAppHandle handle = new InProcessLauncher()
+  .setMaster("local")
+  .setAppResource(SparkLauncher.NO_RESOURCE)
+  .setMainClass(InProcessTestApp.class.getName())
+  .addAppArgs("hello")
+  .startApplication(listener);
+
+waitFor(handle);
+assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
+
+// Matches the behavior of LocalSchedulerBackend.
+List expected = Arrays.asList(
+  SparkAppHandle.State.CONNECTED,
+  SparkAppHandle.State.RUNNING,
+  SparkAppHandle.State.FINISHED);
+assertEquals(expected, transitions);
   }
 
   public static class SparkLauncherTestApp {

http://git-wip-us.apache.org/repos/asf/spark/blob/50345a2a/launcher/src/main/java/org/apache/spark/launcher/AbstractAppHandle.java

spark git commit: [SPARK-23020][CORE] Fix races in launcher code, test.

2018-01-15 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 e58c4a929 -> 20c69816a


[SPARK-23020][CORE] Fix races in launcher code, test.

The race in the code is because the handle might update
its state to the wrong state if the connection handling
thread is still processing incoming data; so the handle
needs to wait for the connection to finish up before
checking the final state.

The race in the test is because when waiting for a handle
to reach a final state, the waitFor() method needs to wait
until all handle state is updated (which also includes
waiting for the connection thread above to finish).
Otherwise, waitFor() may return too early, which would cause
a bunch of different races (like the listener not being yet
notified of the state change, or being in the middle of
being notified, or the handle not being properly disposed
and causing postChecks() to assert).

On top of that I found, by code inspection, a couple of
potential races that could make a handle end up in the
wrong state when being killed.

Tested by running the existing unit tests a lot (and not
seeing the errors I was seeing before).
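
A rough Scala sketch of the polling idiom the fixed test relies on (the actual
helper added to BaseSuite is Java; names and signatures here are illustrative):

```scala
import java.time.Duration

// Re-run `check` until it stops throwing or the timeout elapses, instead of
// sleeping a fixed interval and hoping all state has settled.
def eventually(timeout: Duration, interval: Duration)(check: => Unit): Unit = {
  val deadline = System.nanoTime() + timeout.toNanos
  var last: Throwable = null
  while (System.nanoTime() < deadline) {
    try {
      check
      return // the condition finally holds
    } catch {
      case t: AssertionError =>
        last = t // remember the most recent failure and retry
        Thread.sleep(interval.toMillis)
    }
  }
  throw new AssertionError("condition not met before timeout", last)
}
```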

Author: Marcelo Vanzin 

Closes #20223 from vanzin/SPARK-23020.

(cherry picked from commit 66217dac4f8952a9923625908ad3dcb030763c81)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/20c69816
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/20c69816
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/20c69816

Branch: refs/heads/branch-2.3
Commit: 20c69816a63071b82b1035d4b48798c358206421
Parents: e58c4a9
Author: Marcelo Vanzin 
Authored: Mon Jan 15 22:40:44 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Jan 15 22:41:26 2018 -0800

--
 .../spark/launcher/SparkLauncherSuite.java  | 49 +---
 .../spark/launcher/AbstractAppHandle.java   | 22 +++--
 .../spark/launcher/ChildProcAppHandle.java  | 18 +++
 .../spark/launcher/InProcessAppHandle.java  | 17 +++
 .../spark/launcher/LauncherConnection.java  | 14 +++---
 .../apache/spark/launcher/LauncherServer.java   | 46 +++---
 .../org/apache/spark/launcher/BaseSuite.java| 42 ++---
 .../spark/launcher/LauncherServerSuite.java | 20 +++-
 8 files changed, 156 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/20c69816/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index 9d2f563..a042375 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -17,6 +17,7 @@
 
 package org.apache.spark.launcher;
 
+import java.time.Duration;
 import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -31,6 +32,7 @@ import static org.junit.Assume.*;
 import static org.mockito.Mockito.*;
 
 import org.apache.spark.SparkContext;
+import org.apache.spark.SparkContext$;
 import org.apache.spark.internal.config.package$;
 import org.apache.spark.util.Utils;
 
@@ -137,7 +139,9 @@ public class SparkLauncherSuite extends BaseSuite {
   // Here DAGScheduler is stopped, while SparkContext.clearActiveContext 
may not be called yet.
   // Wait for a reasonable amount of time to avoid creating two active 
SparkContext in JVM.
   // See SPARK-23019 and SparkContext.stop() for details.
-  TimeUnit.MILLISECONDS.sleep(500);
+      eventually(Duration.ofSeconds(5), Duration.ofMillis(10), () -> {
+        assertTrue("SparkContext is still alive.", SparkContext$.MODULE$.getActive().isEmpty());
+      });
 }
   }
 
@@ -146,26 +150,35 @@ public class SparkLauncherSuite extends BaseSuite {
 SparkAppHandle.Listener listener = mock(SparkAppHandle.Listener.class);
 doAnswer(invocation -> {
   SparkAppHandle h = (SparkAppHandle) invocation.getArguments()[0];
-  transitions.add(h.getState());
+  synchronized (transitions) {
+transitions.add(h.getState());
+  }
   return null;
 }).when(listener).stateChanged(any(SparkAppHandle.class));
 
-SparkAppHandle handle = new InProcessLauncher()
-  .setMaster("local")
-  .setAppResource(SparkLauncher.NO_RESOURCE)
-  .setMainClass(InProcessTestApp.class.getName())
-  .addAppArgs("hello")
-  .startApplication(listener);
-
-waitFor(handle);
-assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
-
-// Matches the behavior of LocalSchedulerBackend.
-

spark git commit: [SPARK-23020][CORE] Fix races in launcher code, test.

2018-01-15 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 07ae39d0e -> 66217dac4


[SPARK-23020][CORE] Fix races in launcher code, test.

The race in the code is because the handle might update
its state to the wrong state if the connection handling
thread is still processing incoming data; so the handle
needs to wait for the connection to finish up before
checking the final state.

The race in the test is because when waiting for a handle
to reach a final state, the waitFor() method needs to wait
until all handle state is updated (which also includes
waiting for the connection thread above to finish).
Otherwise, waitFor() may return too early, which would cause
a bunch of different races (like the listener not being yet
notified of the state change, or being in the middle of
being notified, or the handle not being properly disposed
and causing postChecks() to assert).

On top of that I found, by code inspection, a couple of
potential races that could make a handle end up in the
wrong state when being killed.

Tested by running the existing unit tests a lot (and not
seeing the errors I was seeing before).

Author: Marcelo Vanzin 

Closes #20223 from vanzin/SPARK-23020.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/66217dac
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/66217dac
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/66217dac

Branch: refs/heads/master
Commit: 66217dac4f8952a9923625908ad3dcb030763c81
Parents: 07ae39d
Author: Marcelo Vanzin 
Authored: Mon Jan 15 22:40:44 2018 -0800
Committer: Sameer Agarwal 
Committed: Mon Jan 15 22:40:44 2018 -0800

--
 .../spark/launcher/SparkLauncherSuite.java  | 49 +---
 .../spark/launcher/AbstractAppHandle.java   | 22 +++--
 .../spark/launcher/ChildProcAppHandle.java  | 18 +++
 .../spark/launcher/InProcessAppHandle.java  | 17 +++
 .../spark/launcher/LauncherConnection.java  | 14 +++---
 .../apache/spark/launcher/LauncherServer.java   | 46 +++---
 .../org/apache/spark/launcher/BaseSuite.java| 42 ++---
 .../spark/launcher/LauncherServerSuite.java | 20 +++-
 8 files changed, 156 insertions(+), 72 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/66217dac/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
--
diff --git 
a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java 
b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
index 9d2f563..a042375 100644
--- a/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
+++ b/core/src/test/java/org/apache/spark/launcher/SparkLauncherSuite.java
@@ -17,6 +17,7 @@
 
 package org.apache.spark.launcher;
 
+import java.time.Duration;
 import java.util.Arrays;
 import java.util.ArrayList;
 import java.util.HashMap;
@@ -31,6 +32,7 @@ import static org.junit.Assume.*;
 import static org.mockito.Mockito.*;
 
 import org.apache.spark.SparkContext;
+import org.apache.spark.SparkContext$;
 import org.apache.spark.internal.config.package$;
 import org.apache.spark.util.Utils;
 
@@ -137,7 +139,9 @@ public class SparkLauncherSuite extends BaseSuite {
   // Here DAGScheduler is stopped, while SparkContext.clearActiveContext 
may not be called yet.
   // Wait for a reasonable amount of time to avoid creating two active 
SparkContext in JVM.
   // See SPARK-23019 and SparkContext.stop() for details.
-  TimeUnit.MILLISECONDS.sleep(500);
+      eventually(Duration.ofSeconds(5), Duration.ofMillis(10), () -> {
+        assertTrue("SparkContext is still alive.", SparkContext$.MODULE$.getActive().isEmpty());
+      });
 }
   }
 
@@ -146,26 +150,35 @@ public class SparkLauncherSuite extends BaseSuite {
 SparkAppHandle.Listener listener = mock(SparkAppHandle.Listener.class);
 doAnswer(invocation -> {
   SparkAppHandle h = (SparkAppHandle) invocation.getArguments()[0];
-  transitions.add(h.getState());
+  synchronized (transitions) {
+transitions.add(h.getState());
+  }
   return null;
 }).when(listener).stateChanged(any(SparkAppHandle.class));
 
-SparkAppHandle handle = new InProcessLauncher()
-  .setMaster("local")
-  .setAppResource(SparkLauncher.NO_RESOURCE)
-  .setMainClass(InProcessTestApp.class.getName())
-  .addAppArgs("hello")
-  .startApplication(listener);
-
-waitFor(handle);
-assertEquals(SparkAppHandle.State.FINISHED, handle.getState());
-
-// Matches the behavior of LocalSchedulerBackend.
-List expected = Arrays.asList(
-  SparkAppHandle.State.CONNECTED,
-  SparkAppHandle.State.RUNNING,
-  

svn commit: r24178 [6/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/histogram.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/histogram.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/histogram.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,128 @@
+R: Compute histogram statistics for given column
+
+histogram {SparkR}    R Documentation
+
+Compute histogram statistics for given column
+
+Description
+
+This function computes a histogram for a given SparkR Column.
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame,characterOrColumn'
+histogram(df, col, nbins = 10)
+
+
+
+Arguments
+
+
+df
+
+the SparkDataFrame containing the Column to build the histogram from.
+
+col
+
+the column as Character string or a Column to build the histogram from.
+
+nbins
+
+the number of bins (optional). Default value is 10.
+
+
+
+
+Value
+
+a data.frame with the histogram statistics, i.e., counts and centroids.
+
+
+
+Note
+
+histogram since 2.0.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, insertInto,
+intersect, isLocal,
+isStreaming, join,
+limit, localCheckpoint,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, rollup,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+summary, take,
+toJSON, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D 
+##D # Create a SparkDataFrame from the Iris dataset
+##D irisDF <- createDataFrame(iris)
+##D 
+##D # Compute histogram statistics
+##D histStats <- histogram(irisDF, irisDF$Sepal_Length, nbins = 12)
+##D 
+##D # Once SparkR has computed the histogram statistics, the histogram can be
+##D # rendered using the ggplot2 library:
+##D 
+##D require(ggplot2)
+##D plot <- ggplot(histStats, aes(x = centroids, y = counts)) +
+##D geom_bar(stat = "identity") +
+##D xlab("Sepal_Length") + ylab("Frequency")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/insertInto.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/insertInto.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/insertInto.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,121 @@
+R: insertInto
+
+insertInto {SparkR}    R Documentation
+
+insertInto
+
+Description
+
+Insert the contents of a SparkDataFrame into a table registered in the 
current SparkSession.
+
+
+
+Usage
+
+
+insertInto(x, tableName, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+insertInto(x, tableName,
+  overwrite = FALSE)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+tableName
+
+a character vector containing the name of the table.
+
+...
+
+further arguments to be passed to or from other methods.
+
+overwrite
+
+a logical argument indicating whether or not to overwrite the existing rows
+in the table.
+
+
+
+
+Note
+
+insertInto since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+intersect, isLocal,
+isStreaming, join,
+limit, localCheckpoint,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, rollup,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,

svn commit: r24178 [10/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/spark.glm.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/spark.glm.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/spark.glm.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,234 @@
+R: Generalized Linear Models
+
+spark.glm {SparkR}    R Documentation
+
+Generalized Linear Models
+
+Description
+
+Fits generalized linear model against a SparkDataFrame.
+Users can call summary to print a summary of the fitted model, 
predict to make
+predictions on new data, and write.ml/read.ml to 
save/load fitted models.
+
+
+
+Usage
+
+
+spark.glm(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.glm(data, formula, family = gaussian,
+  tol = 1e-06, maxIter = 25, weightCol = NULL, regParam = 0,
+  var.power = 0, link.power = 1 - var.power,
+  stringIndexerOrderType = c("frequencyDesc", "frequencyAsc", "alphabetDesc",
+  "alphabetAsc"), offsetCol = NULL)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel'
+summary(object)
+
+## S3 method for class 'summary.GeneralizedLinearRegressionModel'
+print(x, ...)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'GeneralizedLinearRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+
+
+Arguments
+
+
+data
+
+a SparkDataFrame for training.
+
+formula
+
+a symbolic description of the model to be fitted. Currently only a few 
formula
+operators are supported, including '~', '.', ':', '+', and '-'.
+
+...
+
+additional arguments passed to the method.
+
+family
+
+a description of the error distribution and link function to be used in the 
model.
+This can be a character string naming a family function, a family function or
+the result of a call to a family function. Refer R family at
+https://stat.ethz.ch/R-manual/R-devel/library/stats/html/family.html.
gaussian,
+Gamma, poisson and tweedie.
+
+Note that there are two ways to specify the tweedie family.
+
+
+
+ 1. Set family = "tweedie" and specify the var.power and link.power;
+
+ 2. When package statmod is loaded, the tweedie family is specified
+    using the family definition therein, i.e., tweedie(var.power, link.power).
+
+
+
+tol
+
+positive convergence tolerance of iterations.
+
+maxIter
+
+integer giving the maximal number of IRLS iterations.
+
+weightCol
+
+the weight column name. If this is not set or NULL, we treat 
all instance
+weights as 1.0.
+
+regParam
+
+regularization parameter for L2 regularization.
+
+var.power
+
+the power in the variance function of the Tweedie distribution which 
provides
+the relationship between the variance and mean of the distribution. Only
+applicable to the Tweedie family.
+
+link.power
+
+the index in the power link function. Only applicable to the Tweedie 
family.
+
+stringIndexerOrderType
+
+how to order categories of a string feature column. This is used to
+decide the base level of a string feature as the last category
+after ordering is dropped when encoding strings. Supported options
+are frequencyDesc, frequencyAsc, 
alphabetDesc, and
+alphabetAsc. The default value is frequencyDesc. When 
the
+ordering is set to alphabetDesc, this drops the same category
+as R when encoding strings.
+
+offsetCol
+
+the offset column name. If this is not set or empty, we treat all instance
+offsets as 0.0. The feature specified as offset has a constant coefficient of
+1.0.
+
+object
+
+a fitted generalized linear model.
+
+x
+
+summary object of fitted generalized linear model returned by 
summary function.
+
+newData
+
+a SparkDataFrame for testing.
+
+path
+
+the directory where the model is saved.
+
+overwrite
+
+overwrites or not if the output path already exists. Default is FALSE
+which means throw exception if the output path exists.
+
+
+
+
+Value
+
+spark.glm returns a fitted generalized linear model.
+
+summary returns summary information of the fitted model, which 
is a list.
+The list of components includes at least the coefficients 
(coefficients matrix,
+which includes coefficients, standard error of coefficients, t value and p 
value),
+null.deviance (null/residual degrees of freedom), 
aic (AIC)
+and iter (number of iterations IRLS takes). If there are 
collinear columns in
+the data, the coefficients matrix only provides coefficients.
+
+predict returns a SparkDataFrame containing predicted labels 
in a column named
+prediction.
+
+
+
+Note
+

svn commit: r24178 [8/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/read.jdbc.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/read.jdbc.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/read.jdbc.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,105 @@
+R: Create a SparkDataFrame representing the database table accessible via JDBC URL
+
+read.jdbc {SparkR}    R Documentation
+
+Create a SparkDataFrame representing the database table accessible via 
JDBC URL
+
+Description
+
+Additional JDBC database connection properties can be set (...)
+
+
+
+Usage
+
+
+read.jdbc(url, tableName, partitionColumn = NULL, lowerBound = NULL,
+  upperBound = NULL, numPartitions = 0L, predicates = list(), ...)
+
+
+
+Arguments
+
+
+url
+
+JDBC database url of the form jdbc:subprotocol:subname
+
+tableName
+
+the name of the table in the external database
+
+partitionColumn
+
+the name of a column of integral type that will be used for partitioning
+
+lowerBound
+
+the minimum value of partitionColumn used to decide partition 
stride
+
+upperBound
+
+the maximum value of partitionColumn used to decide partition 
stride
+
+numPartitions
+
+the number of partitions, This, along with lowerBound 
(inclusive),
+upperBound (exclusive), form partition strides for generated WHERE
+clause expressions used to split the column partitionColumn 
evenly.
+This defaults to SparkContext.defaultParallelism when unset.
+
+predicates
+
+a list of conditions in the where clause; each one defines one partition
+
+...
+
+additional JDBC database connection named properties.
+
+
+
+
+Details
+
+Only one of partitionColumn or predicates should be set. Partitions of the
+table will be retrieved in parallel based on the numPartitions or by the
+predicates.
+
+Don't create too many partitions in parallel on a large cluster; otherwise
+Spark might crash your external database systems.
+
+
+
+Value
+
+SparkDataFrame
+
+
+
+Note
+
+read.jdbc since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D jdbcUrl <- "jdbc:mysql://localhost:3306/databasename"
+##D df <- read.jdbc(jdbcUrl, "table", predicates = list("field<=123"), user = "username")
+##D df2 <- read.jdbc(jdbcUrl, "table2", partitionColumn = "index", lowerBound = 0,
+##D                  upperBound = 10000, user = "username", password = "password")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 Index]
+
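
The partitionColumn/lowerBound/upperBound/numPartitions arguments documented
above translate into one WHERE clause per partition. A rough Scala sketch of
that stride computation (an illustration of the documented behavior under
simplifying assumptions, not Spark's actual implementation):

```scala
// Split [lower, upper) into n ranges over `col`, one predicate per partition.
def partitionWhereClauses(col: String, lower: Long, upper: Long, n: Int): Seq[String] = {
  require(n > 1 && upper > lower)
  val stride = (upper - lower) / n
  (0 until n).map { i =>
    val lo = lower + i * stride
    val hi = lo + stride
    if (i == 0) s"$col < $hi" // first range also catches values below lowerBound
    else if (i == n - 1) s"$col >= $lo" // last range also catches values above upperBound
    else s"$col >= $lo AND $col < $hi"
  }
}
```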

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/read.json.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/read.json.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/read.json.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,77 @@
+R: Create a SparkDataFrame from a JSON file.
+
+read.json {SparkR}    R Documentation
+
+Create a SparkDataFrame from a JSON file.
+
+Description
+
+Loads a JSON file, returning the result as a SparkDataFrame.
+By default, JSON Lines text format (http://jsonlines.org/), or
+newline-delimited JSON, is supported. For JSON (one record per file), set a
+named property multiLine to TRUE.
+It goes through the entire dataset once to determine the schema.
+
+
+
+Usage
+
+
+## Default S3 method:
+read.json(path, ...)
+
+## Default S3 method:
+jsonFile(path)
+
+
+
+Arguments
+
+
+path
+
+Path of file to read. A vector of multiple paths is allowed.
+
+...
+
+additional external data source specific named properties.
+
+
+
+
+Value
+
+SparkDataFrame
+
+
+
+Note
+
+read.json since 1.6.0
+
+jsonFile since 1.4.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D df <- read.json(path, multiLine = TRUE)
+##D df <- jsonFile(path)
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/read.ml.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/read.ml.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/read.ml.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,66 @@
+read.ml {SparkR}  R Documentation
+
+Load a fitted MLlib model from the input path.
+
+
+

svn commit: r24178 [5/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/eq_null_safe.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/eq_null_safe.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/eq_null_safe.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,76 @@
+
+
+
+
+
+%<=>% {SparkR}  R Documentation
+
+%<=>%
+
+Description
+
+Equality test that is safe for null values.
+
+
+
+Usage
+
+
+x %<=>% value
+
+## S4 method for signature 'Column'
+x %<=>% value
+
+
+
+Arguments
+
+
+x
+
+a Column
+
+value
+
+a value to compare
+
+
+
+
+Details
+
+Can be used, unlike standard equality operator, to perform null-safe joins.
+Equivalent to Scala Column.<=> and Column.eqNullSafe.
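
A small truth table sketching how the null-safe test differs from the standard
three-valued == comparison:

#  x    y     x == y    x %<=>% y
#  1    1     TRUE      TRUE
#  1    NA    NA        FALSE
#  NA   NA    NA        TRUE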
+
+
+
+Note
+
+%<=>% since 2.3.0
+
+
+
+Examples
+
+## Not run: 
+##D df1 <- createDataFrame(data.frame(
+##D   x = c(1, NA, 3, NA), y = c(2, 6, 3, NA)
+##D ))
+##D 
+##D head(select(df1, df1$x == df1$y, df1$x %<=>% df1$y))
+##D 
+##D df2 <- createDataFrame(data.frame(y = c(3, NA)))
+##D count(join(df1, df2, df1$y == df2$y))
+##D 
+##D count(join(df1, df2, df1$y %<=>% df2$y))
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/except.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/except.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/except.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,117 @@
+except {SparkR}  R Documentation
+
+except
+
+Description
+
+Return a new SparkDataFrame containing rows in this SparkDataFrame
+but not in another SparkDataFrame. This is equivalent to EXCEPT 
in SQL.
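
A minimal worked sketch with hypothetical in-memory data:

df1 <- createDataFrame(data.frame(id = c(1, 2, 3)))
df2 <- createDataFrame(data.frame(id = c(2, 3, 4)))
collect(except(df1, df2))  # a single row: id = 1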
+
+
+
+Usage
+
+
+except(x, y)
+
+## S4 method for signature 'SparkDataFrame,SparkDataFrame'
+except(x, y)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+y
+
+a SparkDataFrame.
+
+
+
+
+Value
+
+A SparkDataFrame containing the result of the except operation.
+
+
+
+Note
+
+except since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, hint,
+histogram, insertInto,
+intersect, isLocal,
+isStreaming, join,
+limit, localCheckpoint,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, rollup,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+summary, take,
+toJSON, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D df1 <- read.json(path)
+##D df2 <- read.json(path2)
+##D exceptDF <- except(df1, df2)
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/explain.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/explain.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/explain.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,125 @@
+explain {SparkR}  R Documentation
+
+Explain
+
+Description
+
+Print the logical and physical Catalyst plans to the console for debugging.
+
+
+
+Usage
+
+
+explain(x, ...)
+
+## S4 method for signature 'SparkDataFrame'
+explain(x, extended = FALSE)
+
+## S4 method for signature 'StreamingQuery'
+explain(x, extended = FALSE)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame or a StreamingQuery.
+
+...
+
+further arguments to be passed to or from other methods.
+

svn commit: r24178 [3/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/column_datetime_functions.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/column_datetime_functions.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/column_datetime_functions.html Sat 
Jan 13 10:29:47 2018
@@ -0,0 +1,402 @@
+column_datetime_functions {SparkR}  R Documentation
+
+Date time functions for Column operations
+
+Description
+
+Date time functions defined for Column.
+
+
+
+Usage
+
+
+current_date(x = "missing")
+
+current_timestamp(x = "missing")
+
+date_trunc(format, x)
+
+dayofmonth(x)
+
+dayofweek(x)
+
+dayofyear(x)
+
+from_unixtime(x, ...)
+
+hour(x)
+
+last_day(x)
+
+minute(x)
+
+month(x)
+
+quarter(x)
+
+second(x)
+
+to_date(x, format)
+
+to_timestamp(x, format)
+
+unix_timestamp(x, format)
+
+weekofyear(x)
+
+window(x, ...)
+
+year(x)
+
+## S4 method for signature 'Column'
+dayofmonth(x)
+
+## S4 method for signature 'Column'
+dayofweek(x)
+
+## S4 method for signature 'Column'
+dayofyear(x)
+
+## S4 method for signature 'Column'
+hour(x)
+
+## S4 method for signature 'Column'
+last_day(x)
+
+## S4 method for signature 'Column'
+minute(x)
+
+## S4 method for signature 'Column'
+month(x)
+
+## S4 method for signature 'Column'
+quarter(x)
+
+## S4 method for signature 'Column'
+second(x)
+
+## S4 method for signature 'Column,missing'
+to_date(x, format)
+
+## S4 method for signature 'Column,character'
+to_date(x, format)
+
+## S4 method for signature 'Column,missing'
+to_timestamp(x, format)
+
+## S4 method for signature 'Column,character'
+to_timestamp(x, format)
+
+## S4 method for signature 'Column'
+weekofyear(x)
+
+## S4 method for signature 'Column'
+year(x)
+
+## S4 method for signature 'Column'
+from_unixtime(x, format = "yyyy-MM-dd HH:mm:ss")
+
+## S4 method for signature 'Column'
+window(x, windowDuration, slideDuration = NULL,
+  startTime = NULL)
+
+## S4 method for signature 'missing,missing'
+unix_timestamp(x, format)
+
+## S4 method for signature 'Column,missing'
+unix_timestamp(x, format)
+
+## S4 method for signature 'Column,character'
+unix_timestamp(x, format = "yyyy-MM-dd HH:mm:ss")
+
+## S4 method for signature 'Column'
+trunc(x, format)
+
+## S4 method for signature 'character,Column'
+date_trunc(format, x)
+
+## S4 method for signature 'missing'
+current_date()
+
+## S4 method for signature 'missing'
+current_timestamp()
+
+
+
+Arguments
+
+
+x
+
+Column to compute on. In window, it must be a time Column of
+TimestampType. This is not used with current_date and
+current_timestamp
+
+format
+
+The format for the given dates or timestamps in Column x. See 
the
+format used in the following methods:
+
+
+
+ to_date and to_timestamp: it is the string 
to use to parse
+Column x to DateType or TimestampType.
+
+
+ trunc: it is the string to use to specify the truncation 
method.
+For example, "year", "yyyy", "yy" for truncate by year, or "month", "mon",
+"mm" for truncate by month.
+
+
+ date_trunc: it is similar to trunc's but additionally
+supports "day", "dd", "second", "minute", "hour", "week" and "quarter".
+
+
+
+...
+
+additional argument(s).
+
+windowDuration
+
+a string specifying the width of the window, e.g. '1 second',
+'1 day 12 hours', '2 minutes'. Valid interval strings are 'week',
+'day', 'hour', 'minute', 'second', 'millisecond', 'microsecond'. Note that
+the duration is a fixed length of time, and does not vary over time
+according to a calendar. For example, '1 day' always means 86,400,000
+milliseconds, not a calendar day.
+
+slideDuration
+
+a string specifying the sliding interval of the window. Same format as
+windowDuration. A new window will be generated every
+slideDuration. Must be less than or equal to
+the windowDuration. This duration is likewise absolute, and does 
not
+vary according to a calendar.
+
+startTime
+
+the offset with respect to 1970-01-01 00:00:00 UTC with which to start
+window intervals. For example, in order to have hourly tumbling windows
+that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15... provide
+startTime as "15 minutes".
+
+
+
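A hedged sketch of the startTime behavior described above, assuming a
SparkDataFrame df with a TimestampType column ts (both hypothetical):

# Hourly tumbling windows aligned 15 minutes past the hour: 12:15-13:15, 13:15-14:15, ...
counts <- agg(groupBy(df, window(df$ts, "1 hour", startTime = "15 minutes")),
              count = n(df$ts))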
+
+Details
+
+dayofmonth: Extracts the day of the month as an integer from a
+given date/timestamp/string.
+
+dayofweek: Extracts the day of the week as an integer from a
+given date/timestamp/string.
+
+dayofyear: Extracts the day of the year as an integer from a
+given date/timestamp/string.
+
+hour: Extracts the hour as an integer from a given 
date/timestamp/string.
+
+last_day: Given a date column, returns the last day of the 
month which the
+given date 

svn commit: r24178 [4/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/createOrReplaceTempView.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/createOrReplaceTempView.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/createOrReplaceTempView.html Sat Jan 
13 10:29:47 2018
@@ -0,0 +1,111 @@
+createOrReplaceTempView {SparkR}  R Documentation
+
+Creates a temporary view using the given name.
+
+Description
+
+Creates a new temporary view using a SparkDataFrame in the Spark Session. 
If a
+temporary view with the same name already exists, replaces it.
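
A minimal sketch of the round trip through sql(), using hypothetical in-memory data:

df <- createDataFrame(data.frame(x = 1:3))
createOrReplaceTempView(df, "t")
collect(sql("SELECT sum(x) AS s FROM t"))  # s = 6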
+
+
+
+Usage
+
+
+createOrReplaceTempView(x, viewName)
+
+## S4 method for signature 'SparkDataFrame,character'
+createOrReplaceTempView(x, viewName)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame
+
+viewName
+
+A character vector containing the name of the table
+
+
+
+
+Note
+
+createOrReplaceTempView since 2.0.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes, crossJoin,
+cube, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, hint,
+histogram, insertInto,
+intersect, isLocal,
+isStreaming, join,
+limit, localCheckpoint,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, rollup,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+summary, take,
+toJSON, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D createOrReplaceTempView(df, "json_df")
+##D new_df <- sql("SELECT * FROM json_df")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/createTable.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/createTable.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/createTable.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,96 @@
+createTable {SparkR}  R Documentation
+
+Creates a table based on the dataset in a data source
+
+Description
+
+Creates a table based on the dataset in a data source. Returns a 
SparkDataFrame associated with
+the table.
+
+
+
+Usage
+
+
+createTable(tableName, path = NULL, source = NULL, schema = NULL, ...)
+
+
+
+Arguments
+
+
+tableName
+
+the qualified or unqualified name that designates a table. If no database
+identifier is provided, it refers to a table in the current database.
+
+path
+
+(optional) the path of files to load.
+
+source
+
+(optional) the name of the data source.
+
+schema
+
+(optional) the schema of the data required for some data sources.
+
+...
+
+additional named parameters as options for the data source.
+
+
+
+
+Details
+
+The data source is specified by the source and a set of 
options(...).
+If source is not specified, the default data source configured by
+spark.sql.sources.default will be used. When a path 
is specified, an external table is
+created from the data at the given path. Otherwise a managed table is created.
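
A hedged sketch of the two cases; the table names and path are hypothetical, and
schema is assumed to be a structType defined elsewhere:

# With a path: an external table over the existing files
extDF <- createTable("ext_json", path = "path/to/json", source = "json")
# Without a path: a managed table owned by Spark
mgdDF <- createTable("managed_people", source = "parquet", schema = schema)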
+
+
+
+Value
+
+A SparkDataFrame.
+
+
+
+Note
+
+createTable since 2.2.0
+
+
+
+See Also
+
+createExternalTable
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D df <- createTable("myjson", path = "path/to/json", source = "json", schema)
+##D 
+##D createTable("people", source = "json", schema = schema)
+##D insertInto(df, "people")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/crossJoin.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/crossJoin.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/crossJoin.html Sat Jan 13 

svn commit: r24178 [7/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/mutate.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/mutate.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/mutate.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,135 @@
+mutate {SparkR}  R Documentation
+
+Mutate
+
+Description
+
+Return a new SparkDataFrame with the specified columns added or replaced.
+
+
+
+Usage
+
+
+mutate(.data, ...)
+
+transform(`_data`, ...)
+
+## S4 method for signature 'SparkDataFrame'
+mutate(.data, ...)
+
+## S4 method for signature 'SparkDataFrame'
+transform(`_data`, ...)
+
+
+
+Arguments
+
+
+.data
+
+a SparkDataFrame.
+
+...
+
+additional column argument(s) each in the form name = col.
+
+_data
+
+a SparkDataFrame.
+
+
+
+
+Value
+
+A new SparkDataFrame with the new columns added or replaced.
+
+
+
+Note
+
+mutate since 1.4.0
+
+transform since 1.5.0
+
+
+
+See Also
+
+rename withColumn
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+insertInto, intersect,
+isLocal, isStreaming,
+join, limit,
+localCheckpoint, merge,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, rollup,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+summary, take,
+toJSON, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D newDF <- mutate(df, newCol = df$col1 * 5, newCol2 = df$col1 * 2)
+##D names(newDF) # Will contain newCol, newCol2
+##D newDF2 <- transform(df, newCol = df$col1 / 5, newCol2 = df$col1 * 2)
+##D 
+##D df <- createDataFrame(list(list("Andy", 30L), list("Justin", 19L)), c("name", "age"))
+##D # Replace the age column
+##D df1 <- mutate(df, age = df$age + 1L)
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/nafunctions.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/nafunctions.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/nafunctions.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,175 @@
+dropna {SparkR}  R Documentation
+
+A set of SparkDataFrame functions working with NA values
+
+Description
+
+dropna, na.omit - Returns a new SparkDataFrame omitting rows with null 
values.
+
+fillna - Replace null values.
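
A minimal sketch of both behaviors, with hypothetical data:

df <- createDataFrame(data.frame(x = c(1, NA, 3), y = c("a", "b", NA)))
collect(dropna(df))                              # keeps only the complete rows
collect(fillna(df, list(x = 0, y = "missing")))  # per-column replacement values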
+
+
+
+Usage
+
+
+dropna(x, how = c("any", "all"), minNonNulls = NULL, cols = NULL)
+
+na.omit(object, ...)
+
+fillna(x, value, cols = NULL)
+
+## S4 method for signature 'SparkDataFrame'
+dropna(x, how = c("any", "all"),
+  minNonNulls = NULL, cols = NULL)
+
+## S4 method for signature 'SparkDataFrame'
+na.omit(object, how = c("any", "all"),
+  minNonNulls = NULL, cols = NULL)
+
+## S4 method for signature 'SparkDataFrame'
+fillna(x, value, cols = NULL)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+how
+
+"any" or "all".
+if "any", drop a row if it contains any nulls.
+if "all", drop a row only if all its values are null.
+if minNonNulls is specified, how is ignored.
+
+minNonNulls
+
+if specified, drop rows that have less than
+minNonNulls non-null values.
+This overwrites the how parameter.
+
+cols
+
+optional list of column names to consider. In fillna,
+columns specified in cols that do not have matching data
+type are ignored. For example, if value is a character, and
+subset contains a non-character column, then the non-character
+column is simply ignored.
+
+object
+
+a SparkDataFrame.
+
+...
+
+further arguments to be passed to or from other methods.
+
+value
+
+value to replace null values with.
+Should be an integer, 

svn commit: r24178 [9/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/selectExpr.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/selectExpr.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/selectExpr.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,120 @@
+selectExpr {SparkR}  R Documentation
+
+SelectExpr
+
+Description
+
+Select from a SparkDataFrame using a set of SQL expressions.
+
+
+
+Usage
+
+
+selectExpr(x, expr, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+selectExpr(x, expr, ...)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame to be selected from.
+
+expr
+
+A string containing a SQL expression
+
+...
+
+Additional expressions
+
+
+
+
+Value
+
+A SparkDataFrame
+
+
+
+Note
+
+selectExpr since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+insertInto, intersect,
+isLocal, isStreaming,
+join, limit,
+localCheckpoint, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+rollup, sample,
+saveAsTable, schema,
+select, showDF,
+show, storageLevel,
+str, subset,
+summary, take,
+toJSON, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D selectExpr(df, "col1", "(col2 * 5) as newCol")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/setCheckpointDir.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/setCheckpointDir.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/setCheckpointDir.html Sat Jan 13 
10:29:47 2018
@@ -0,0 +1,60 @@
+setCheckpointDir {SparkR}  R Documentation
+
+Set checkpoint directory
+
+Description
+
+Set the directory under which SparkDataFrames are going to be checkpointed. The directory must be
+an HDFS path if running on a cluster.
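
A minimal sketch pairing this with checkpoint(); the directory is hypothetical:

setCheckpointDir("/tmp/spark-checkpoints")
df <- createDataFrame(data.frame(x = 1:10))
df2 <- checkpoint(df)  # materialized under the directory set above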
+
+
+
+Usage
+
+
+setCheckpointDir(directory)
+
+
+
+Arguments
+
+
+directory
+
+Directory path to checkpoint to
+
+
+
+
+Note
+
+setCheckpointDir since 2.2.0
+
+
+
+See Also
+
+checkpoint
+
+
+
+Examples
+
+## Not run: 
+##D setCheckpointDir("/checkpoint")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/setCurrentDatabase.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/setCurrentDatabase.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/setCurrentDatabase.html Sat Jan 13 
10:29:47 2018
@@ -0,0 +1,54 @@
+setCurrentDatabase {SparkR}  R Documentation
+
+Sets the current default database
+
+Description
+
+Sets the current default database.
+
+
+
+Usage
+
+
+setCurrentDatabase(databaseName)
+
+
+
+Arguments
+
+
+databaseName
+
+name of the database
+
+
+
+
+Note
+
+since 2.2.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D setCurrentDatabase("default")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/setJobDescription.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/setJobDescription.html (added)
+++ 

svn commit: r24178 [12/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/toJSON.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/toJSON.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/toJSON.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,117 @@
+toJSON {SparkR}  R Documentation
+
+toJSON
+
+Description
+
+Converts a SparkDataFrame into a SparkDataFrame of JSON string.
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame'
+toJSON(x)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame
+
+
+
+
+Details
+
+Each row is turned into a JSON document with columns as different fields.
+The returned SparkDataFrame has a single character column with the name value.
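
A minimal sketch with hypothetical data:

df <- createDataFrame(data.frame(name = "Andy", age = 30L))
collect(toJSON(df))$value  # '{"name":"Andy","age":30}'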
+
+
+
+Value
+
+a SparkDataFrame
+
+
+
+Note
+
+toJSON since 2.2.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+insertInto, intersect,
+isLocal, isStreaming,
+join, limit,
+localCheckpoint, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+rollup, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, summary,
+take, unionByName,
+union, unpersist,
+withColumn, withWatermark,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.parquet"
+##D df <- read.parquet(path)
+##D df_json <- toJSON(df)
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/uncacheTable.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/uncacheTable.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/uncacheTable.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,65 @@
+uncacheTable {SparkR}  R Documentation
+
+Uncache Table
+
+Description
+
+Removes the specified table from the in-memory cache.
+
+
+
+Usage
+
+
+## Default S3 method:
+uncacheTable(tableName)
+
+
+
+Arguments
+
+
+tableName
+
+the qualified or unqualified name that designates a table. If no database
+identifier is provided, it refers to a table in the current database.
+
+
+
+
+Value
+
+SparkDataFrame
+
+
+
+Note
+
+uncacheTable since 1.4.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D createOrReplaceTempView(df, "table")
+##D uncacheTable("table")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/union.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/union.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/union.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,137 @@
+union {SparkR}  R Documentation
+
+Return a new SparkDataFrame containing the union of rows
+
+Description
+
+Return a new SparkDataFrame containing the union of rows in this 
SparkDataFrame
+and another SparkDataFrame. This is equivalent to UNION ALL in 
SQL.
+Input SparkDataFrames can have different schemas (names and data types).
+
+unionAll is deprecated - use union instead
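
A minimal sketch with hypothetical data; note that columns are matched by position,
so prefer unionByName when the schemas share names but differ in column order:

df1 <- createDataFrame(data.frame(a = 1, b = "x"))
df2 <- createDataFrame(data.frame(a = 2, b = "y"))
head(union(df1, df2))  # two rows; duplicates would be kept (UNION ALL semantics)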
+
+
+
+Usage
+
+
+union(x, y)
+
+unionAll(x, y)
+
+## S4 method for signature 'SparkDataFrame,SparkDataFrame'
+union(x, y)
+
+## S4 method for signature 'SparkDataFrame,SparkDataFrame'

svn commit: r24178 [2/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/awaitTermination.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/awaitTermination.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/awaitTermination.html Sat Jan 13 
10:29:47 2018
@@ -0,0 +1,84 @@
+awaitTermination {SparkR}  R Documentation
+
+awaitTermination
+
+Description
+
+Waits for the termination of the query, either by stopQuery or 
by an error.
+
+
+
+Usage
+
+
+awaitTermination(x, timeout = NULL)
+
+## S4 method for signature 'StreamingQuery'
+awaitTermination(x, timeout = NULL)
+
+
+
+Arguments
+
+
+x
+
+a StreamingQuery.
+
+timeout
+
+time to wait in milliseconds, if omitted, wait indefinitely until 
stopQuery
+is called or an error has occurred.
+
+
+
+
+Details
+
+If the query has terminated, then all subsequent calls to this method will 
return TRUE
+immediately.
+
+
+
+Value
+
+TRUE if query has terminated within the timeout period; nothing if timeout 
is not
+specified.
+
+
+
+Note
+
+awaitTermination(StreamingQuery) since 2.2.0
+
+experimental
+
+
+
+See Also
+
+Other StreamingQuery methods: explain,
+isActive, lastProgress,
+queryName, status,
+stopQuery
+
+
+
+Examples
+
+## Not run:  awaitTermination(sq, 10000) 
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/between.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/between.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/between.html Sat Jan 13 10:29:47 2018
@@ -0,0 +1,55 @@
+between {SparkR}  R Documentation
+
+between
+
+Description
+
+Test if the column is between the lower bound and upper bound, inclusive.
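
A minimal sketch with hypothetical data, showing that both endpoints are included:

df <- createDataFrame(data.frame(v = c(1, 5, 10)))
head(select(df, df$v, between(df$v, c(5, 10))))  # TRUE for 5 and 10, FALSE for 1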
+
+
+
+Usage
+
+
+between(x, bounds)
+
+## S4 method for signature 'Column'
+between(x, bounds)
+
+
+
+Arguments
+
+
+x
+
+a Column
+
+bounds
+
+lower and upper bounds
+
+
+
+
+Note
+
+between since 1.5.0
+
+
+
+See Also
+
+Other colum_func: alias, cast,
+endsWith, otherwise,
+over, startsWith,
+substr
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/broadcast.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/broadcast.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/broadcast.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,118 @@
+broadcast {SparkR}  R Documentation
+
+broadcast
+
+Description
+
+Return a new SparkDataFrame marked as small enough for use in broadcast 
joins.
+
+
+
+Usage
+
+
+broadcast(x)
+
+## S4 method for signature 'SparkDataFrame'
+broadcast(x)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+
+
+
+Details
+
+Equivalent to hint(x, "broadcast").
+
+
+
+Value
+
+a SparkDataFrame.
+
+
+
+Note
+
+broadcast since 2.3.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+cache, checkpoint,
+coalesce, collect,
+colnames, coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+insertInto, intersect,
+isLocal, isStreaming,
+join, limit,
+localCheckpoint, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+rollup, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, summary,
+take, toJSON,
+unionByName, union,
+unpersist, withColumn,
+withWatermark, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.stream,
+write.text
+
+
+
+Examples
+
+## Not run: 
+##D df <- createDataFrame(mtcars)
+##D avg_mpg <- mean(groupBy(createDataFrame(mtcars), "cyl"), "mpg")
+##D 
+##D head(join(df, broadcast(avg_mpg), df$cyl == avg_mpg$cyl))
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/cache.html

svn commit: r24178 [11/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.conf.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.conf.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.conf.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,68 @@
+sparkR.conf {SparkR}  R Documentation
+
+Get Runtime Config from the current active SparkSession
+
+Description
+
+Get Runtime Config from the current active SparkSession.
+To change SparkSession Runtime Config, please see 
sparkR.session().
+
+
+
+Usage
+
+
+sparkR.conf(key, defaultValue)
+
+
+
+Arguments
+
+
+key
+
+(optional) The key of the config to get, if omitted, all config is 
returned
+
+defaultValue
+
+(optional) The default value of the config to return if the config is not
+set, if omitted, the call fails if the config key is not set
+
+
+
+
+Value
+
+a list of config values with keys as their names
+
+
+
+Note
+
+sparkR.conf since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D allConfigs <- sparkR.conf()
+##D masterValue <- unlist(sparkR.conf("spark.master"))
+##D namedConfig <- sparkR.conf("spark.executor.memory", "0g")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.init-deprecated.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.init-deprecated.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.init-deprecated.html Sat Jan 
13 10:29:47 2018
@@ -0,0 +1,92 @@
+sparkR.init {SparkR}  R Documentation
+
+(Deprecated) Initialize a new Spark Context
+
+Description
+
+This function initializes a new SparkContext.
+
+
+
+Usage
+
+
+sparkR.init(master = "", appName = "SparkR",
+  sparkHome = Sys.getenv("SPARK_HOME"), sparkEnvir = list(),
+  sparkExecutorEnv = list(), sparkJars = "", sparkPackages = "")
+
+
+
+Arguments
+
+
+master
+
+The Spark master URL
+
+appName
+
+Application name to register with cluster manager
+
+sparkHome
+
+Spark Home directory
+
+sparkEnvir
+
+Named list of environment variables to set on worker nodes
+
+sparkExecutorEnv
+
+Named list of environment variables to be used when launching executors
+
+sparkJars
+
+Character vector of jar files to pass to the worker nodes
+
+sparkPackages
+
+Character vector of package coordinates
+
+
+
+
+Note
+
+sparkR.init since 1.4.0
+
+
+
+See Also
+
+sparkR.session
+
+
+
+Examples
+
+## Not run: 
+##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
+##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
+##D  list(spark.executor.memory="1g"))
+##D sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
+##D  list(spark.executor.memory="4g"),
+##D  list(LD_LIBRARY_PATH="/directory of JVM libraries (libjvm.so) on workers/"),
+##D  c("one.jar", "two.jar", "three.jar"),
+##D  c("com.databricks:spark-avro_2.11:2.0.1"))
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.newJObject.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.newJObject.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/sparkR.newJObject.html Sat Jan 13 
10:29:47 2018
@@ -0,0 +1,86 @@
+sparkR.newJObject {SparkR}  R Documentation
+
+Create Java Objects
+
+Description
+
+Create a new Java object in the JVM running the Spark driver. The return
+value is automatically converted to an R object for simple objects. Other
+values are returned as a jobj which is a reference to an object on 
JVM.
+
+
+
+Usage
+
+
+sparkR.newJObject(x, ...)
+
+
+
+Arguments
+
+
+x

svn commit: r24178 [13/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/write.parquet.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/write.parquet.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/write.parquet.html Sat Jan 13 
10:29:47 2018
@@ -0,0 +1,129 @@
+write.parquet {SparkR}  R Documentation
+
+Save the contents of SparkDataFrame as a Parquet file, preserving the 
schema.
+
+Description
+
+Save the contents of a SparkDataFrame as a Parquet file, preserving the 
schema. Files written out
+with this method can be read back in as a SparkDataFrame using read.parquet().
+
+
+
+Usage
+
+
+write.parquet(x, path, ...)
+
+saveAsParquetFile(x, path)
+
+## S4 method for signature 'SparkDataFrame,character'
+write.parquet(x, path, mode = "error",
+  ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+saveAsParquetFile(x, path)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame
+
+path
+
+The directory where the file is saved
+
+...
+
+additional argument(s) passed to the method.
+
+mode
+
+one of 'append', 'overwrite', 'error', 'errorifexists', 'ignore'
+save mode (it is 'error' by default)
+
+
+
+
+Note
+
+write.parquet since 1.6.0
+
+saveAsParquetFile since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, alias,
+arrange, as.data.frame,
+attach,SparkDataFrame-method,
+broadcast, cache,
+checkpoint, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, cube,
+dapplyCollect, dapply,
+describe, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+hint, histogram,
+insertInto, intersect,
+isLocal, isStreaming,
+join, limit,
+localCheckpoint, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+rollup, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, summary,
+take, toJSON,
+unionByName, union,
+unpersist, withColumn,
+withWatermark, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.stream, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D write.parquet(df, "/tmp/sparkr-tmp1/")
+##D saveAsParquetFile(df, "/tmp/sparkr-tmp2/")
+## End(Not run)
+
+
+
+[Package SparkR version 2.3.0 
Index]
+

Added: dev/spark/v2.3.0-rc1-docs/_site/api/R/write.stream.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/write.stream.html (added)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/write.stream.html Sat Jan 13 10:29:47 
2018
@@ -0,0 +1,181 @@
+write.stream {SparkR}  R Documentation
+
+Write the streaming SparkDataFrame to a data source.
+
+Description
+
+The data source is specified by the source and a set of 
options (...).
+If source is not specified, the default data source configured by
+spark.sql.sources.default will be used.
+
+
+
+Usage
+
+
+write.stream(df, source = NULL, outputMode = NULL, ...)
+
+## S4 method for signature 'SparkDataFrame'
+write.stream(df, source = NULL,
+  outputMode = NULL, partitionBy = NULL, trigger.processingTime = NULL,
+  trigger.once = NULL, ...)
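
A hedged sketch of a typical call, assuming a streaming SparkDataFrame obtained via
read.stream and a schema defined elsewhere; all paths here are hypothetical:

sdf <- read.stream("json", path = "in/dir", schema = schema)
q <- write.stream(sdf, "parquet", path = "out/dir",
                  checkpointLocation = "chk/dir", outputMode = "append",
                  trigger.processingTime = "10 seconds")
# stopQuery(q) stops the continuous execution when done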
+
+
+
+Arguments
+
+
+df
+
+a streaming SparkDataFrame.
+
+source
+
+a name for external data source.
+
+outputMode
+
+one of 'append', 'complete', 'update'.
+
+...
+
+additional external data source specific named options.
+
+partitionBy
+
+a name or a list of names of columns to partition the output by on the file
+system. If specified, the output is laid out on the file system similar to 
Hive's
+partitioning scheme.
+
+trigger.processingTime
+
+a processing time interval as a string, e.g. '5 seconds',
+'1 minute'. This is a trigger that runs a query periodically based on the 
processing
+time. If value is '0 seconds', the query will run as fast as possible, this is 
the
+default. Only one trigger can be set.
+
+trigger.once
+
+a logical, must be set to TRUE. This is a trigger that 
processes only
+one batch of data in a 

svn commit: r24178 [1/13] - /dev/spark/v2.3.0-rc1-docs/_site/api/R/

2018-01-13 Thread sameerag
Author: sameerag
Date: Sat Jan 13 10:29:47 2018
New Revision: 24178

Log:
Update R docs

Added:
dev/spark/v2.3.0-rc1-docs/_site/api/R/AFTSurvivalRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/ALSModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/BisectingKMeansModel-class.html

dev/spark/v2.3.0-rc1-docs/_site/api/R/DecisionTreeClassificationModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/DecisionTreeRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/FPGrowthModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/GBTClassificationModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/GBTRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/GaussianMixtureModel-class.html

dev/spark/v2.3.0-rc1-docs/_site/api/R/GeneralizedLinearRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/GroupedData.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/IsotonicRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/KMeansModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/KSTest-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/LDAModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/LinearSVCModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/LogisticRegressionModel-class.html

dev/spark/v2.3.0-rc1-docs/_site/api/R/MultilayerPerceptronClassificationModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/NaiveBayesModel-class.html

dev/spark/v2.3.0-rc1-docs/_site/api/R/RandomForestClassificationModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/RandomForestRegressionModel-class.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/SparkDataFrame.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/StreamingQuery.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/WindowSpec.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/alias.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/approxQuantile.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/arrange.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/as.data.frame.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/attach.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/avg.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/awaitTermination.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/between.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/broadcast.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cache.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cacheTable.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cancelJobGroup.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cast.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/checkpoint.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/clearCache.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/clearJobGroup.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/coalesce.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/collect.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/coltypes.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_aggregate_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_collection_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_datetime_diff_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_datetime_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_math_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_misc_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_nonaggregate_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_string_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/column_window_functions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/columnfunctions.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/columns.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/corr.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/count.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cov.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/createDataFrame.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/createExternalTable-deprecated.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/createOrReplaceTempView.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/createTable.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/crossJoin.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/crosstab.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/cube.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/currentDatabase.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dapply.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dapplyCollect.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/describe.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dim.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/distinct.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/drop.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dropDuplicates.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dropTempTable-deprecated.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dropTempView.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/dtypes.html
dev/spark/v2.3.0

svn commit: r24176 [14/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListener.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListener.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListener.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListener (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationEnd.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationEnd.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationEnd.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerApplicationEnd (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationStart.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationStart.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerApplicationStart.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerApplicationStart (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerAdded.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerAdded.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerAdded.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerBlockManagerAdded (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerRemoved.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerRemoved.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockManagerRemoved.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerBlockManagerRemoved (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockUpdated.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockUpdated.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerBlockUpdated.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerBlockUpdated (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEnvironmentUpdate.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEnvironmentUpdate.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEnvironmentUpdate.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerEnvironmentUpdate (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEvent.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEvent.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerEvent.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerEvent (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerExecutorAdded.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerExecutorAdded.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerExecutorAdded.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkListenerExecutorAdded (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/scheduler/SparkListenerExecutorBlacklisted.html
==
--- 

svn commit: r24176 [16/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions.javalang (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions.javalang Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/typed.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/typed.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/javalang/typed.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 typed (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions.scalalang (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions.scalalang (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.expressions.scalalang Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/expressions/scalalang/typed.html
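
For context, the javalang/scalalang typed pages touched in this part document the
type-safe Dataset aggregators. A minimal sketch of the scalalang variant in use
(the Sale case class and local-mode session are illustrative, not from the docs):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.expressions.scalalang.typed

object TypedAggSketch {
  case class Sale(shop: String, amount: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("typed-agg").master("local[*]").getOrCreate()
    import spark.implicits._

    val sales = Seq(Sale("a", 1.0), Sale("a", 3.0), Sale("b", 2.0)).toDS()
    // Type-safe aggregation: typed.avg takes a closure instead of a column name.
    sales.groupByKey(_.shop).agg(typed.avg(_.amount)).show()
    spark.stop()
  }
}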

svn commit: r24176 [3/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: dev/spark/v2.3.0-rc1-docs/_site/api/R/00Index.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/00Index.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/00Index.html Sat Jan 13 09:42:26 2018
@@ -19,5 +19,1847 @@
 Help Pages
 
 
-There are no help pages in this package
+
+A
+B
+C
+D
+E
+F
+G
+H
+I
+J
+K
+L
+M
+N
+O
+P
+Q
+R
+S
+T
+U
+V
+W
+Y
+misc
+
+
+
+-- A --
+
+
+abs
+Math functions for Column operations
+abs-method
+Math functions for Column operations
+acos
+Math functions for Column operations
+acos-method
+Math functions for Column operations
+add_months
+Date time arithmetic functions for Column operations
+add_months-method
+Date time arithmetic functions for Column operations
+AFTSurvivalRegressionModel-class
+S4 class that represents a AFTSurvivalRegressionModel
+agg
+summarize
+agg-method
+summarize
+alias
+alias
+alias-method
+alias
+ALSModel-class
+S4 class that represents an ALSModel
+approxCountDistinct
+Aggregate functions for Column operations
+approxCountDistinct-method
+Aggregate functions for Column operations
+approxQuantile
+Calculates the approximate quantiles of numerical columns of a 
SparkDataFrame
+approxQuantile-method
+Calculates the approximate quantiles of numerical columns of a 
SparkDataFrame
+arrange
+Arrange Rows by Variables
+arrange-method
+Arrange Rows by Variables
+array_contains
+Collection functions for Column operations
+array_contains-method
+Collection functions for Column operations
+as.data.frame
+Download data from a SparkDataFrame into a R data.frame
+as.data.frame-method
+Download data from a SparkDataFrame into a R data.frame
+as.DataFrame
+Create a SparkDataFrame
+as.DataFrame.default
+Create a SparkDataFrame
+asc
+A set of operations working with SparkDataFrame columns
+ascii
+String functions for Column operations
+ascii-method
+String functions for Column operations
+asin
+Math functions for Column operations
+asin-method
+Math functions for Column operations
+associationRules-method
+FP-growth
+atan
+Math functions for Column operations
+atan-method
+Math functions for Column operations
+atan2
+Math functions for Column operations
+atan2-method
+Math functions for Column operations
+attach
+Attach SparkDataFrame to R search path
+attach-method
+Attach SparkDataFrame to R search path
+avg
+avg
+avg-method
+avg
+awaitTermination
+awaitTermination
+awaitTermination-method
+awaitTermination
+
+
+-- B --
+
+
+base64
+String functions for Column operations
+base64-method
+String functions for Column operations
+between
+between
+between-method
+between
+bin
+Math functions for Column operations
+bin-method
+Math functions for Column operations
+BisectingKMeansModel-class
+S4 class that represents a BisectingKMeansModel
+bitwiseNOT
+Non-aggregate functions for Column operations
+bitwiseNOT-method
+Non-aggregate functions for Column operations
+broadcast
+broadcast
+broadcast-method
+broadcast
+bround
+Math functions for Column operations
+bround-method
+Math functions for Column operations
+
+
+-- C --
+
+
+cache
+Cache
+cache-method
+Cache
+cacheTable
+Cache Table
+cacheTable.default
+Cache Table
+cancelJobGroup
+Cancel active jobs for the specified group
+cancelJobGroup.default
+Cancel active jobs for the specified group
+cast
+Casts the column to a different data type.
+cast-method
+Casts the column to a different data type.
+cbrt
+Math functions for Column operations
+cbrt-method
+Math functions for Column operations
+ceil
+Math functions for Column operations
+ceil-method
+Math functions for Column operations
+ceiling
+Math functions for Column operations
+ceiling-method
+Math functions for Column operations
+checkpoint
+checkpoint
+checkpoint-method
+checkpoint
+clearCache
+Clear Cache
+clearCache.default
+Clear Cache
+clearJobGroup
+Clear current job group ID and its description
+clearJobGroup.default
+Clear current job group ID and its description
+coalesce
+Coalesce
+coalesce-method
+Coalesce
+coalesce-method
+Non-aggregate functions for Column operations
+collect
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+collect-method
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+collect_list
+Aggregate functions for Column operations
+collect_list-method
+Aggregate functions for Column operations
+collect_set
+Aggregate functions for Column operations
+collect_set-method
+Aggregate functions for Column operations
+colnames
+Column Names of SparkDataFrame
+colnames-method
+Column Names of SparkDataFrame
+colnames-
+Column Names of SparkDataFrame
+colnames--method
+Column Names of SparkDataFrame
+coltypes
+coltypes
+coltypes-method
+coltypes
+coltypes-
+coltypes
+coltypes--method
+coltypes
+column
+S4 class that represents a SparkDataFrame column
+Column-class
+S4 class that represents a SparkDataFrame column
+column-method
+S4 class that represents a SparkDataFrame column
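
The regenerated R index above groups the SparkR API by function family ("Math
functions for Column operations", "Aggregate functions ...", and so on). The
same families exist in org.apache.spark.sql.functions on the Scala side; a
small sketch (column name "x" and the sample values are illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{abs, avg, ceil}

object ColumnOpsSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("column-ops").master("local[*]").getOrCreate()
    import spark.implicits._

    val df = Seq(-1.5, 2.3, -3.7).toDF("x")
    df.select(abs($"x"), ceil($"x")).show() // math functions for Column operations
    df.agg(avg($"x")).show()                // one of the aggregate functions
    spark.stop()
  }
}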

svn commit: r24176 [7/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/Classifier.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/Classifier.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/Classifier.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Classifier (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassificationModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassificationModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassificationModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DecisionTreeClassificationModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassifier.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassifier.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/DecisionTreeClassifier.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DecisionTreeClassifier (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassificationModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassificationModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassificationModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 GBTClassificationModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassifier.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassifier.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/GBTClassifier.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 GBTClassifier (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LabelConverter.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LabelConverter.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LabelConverter.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LabelConverter (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVC.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVC.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVC.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LinearSVC (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVCModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVCModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LinearSVCModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LinearSVCModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LogisticRegression.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LogisticRegression.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LogisticRegression.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LogisticRegression (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LogisticRegressionModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/classification/LogisticRegressionModel.html
 (original)
+++ 
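
The classification pages in this part (Classifier, DecisionTreeClassifier,
GBTClassifier, LinearSVC, LogisticRegression, ...) all follow the same
Estimator/Model split: fit() on a (label, features) frame returns a model.
A minimal LogisticRegression sketch (the toy rows are illustrative):

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object LogRegSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("logreg").master("local[*]").getOrCreate()
    import spark.implicits._

    // Two-column (label, features) training frame, the shape Classifier expects.
    val train = Seq(
      (0.0, Vectors.dense(0.0, 1.1)),
      (1.0, Vectors.dense(2.0, 1.0)),
      (1.0, Vectors.dense(2.2, 1.5))
    ).toDF("label", "features")

    val model = new LogisticRegression().setMaxIter(10).setRegParam(0.01).fit(train)
    println(s"coefficients: ${model.coefficients}")
    spark.stop()
  }
}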

svn commit: r24176 [20/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/api/java/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/api/java/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/api/java/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.streaming.api.java Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ConstantInputDStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ConstantInputDStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ConstantInputDStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ConstantInputDStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/DStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/DStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/DStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/InputDStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/InputDStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/InputDStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 InputDStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/MapWithStateDStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/MapWithStateDStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/MapWithStateDStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 MapWithStateDStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/PairDStreamFunctions.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/PairDStreamFunctions.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/PairDStreamFunctions.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 PairDStreamFunctions (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ReceiverInputDStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ReceiverInputDStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/ReceiverInputDStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ReceiverInputDStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.streaming.dstream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.streaming.dstream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/streaming/dstream/package-tree.html
 (original)
+++ 
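
The dstream pages in this part (DStream, InputDStream, ReceiverInputDStream,
PairDStreamFunctions, ...) make up the classic streaming API. A minimal
word-count sketch (host and port are placeholders):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("dstream-sketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(1))

    // socketTextStream returns a ReceiverInputDStream (see the listing above);
    // reduceByKey comes in via PairDStreamFunctions.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    ssc.start()
    ssc.awaitTermination()
  }
}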

svn commit: r24176 [2/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/DESCRIPTION
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/DESCRIPTION (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/DESCRIPTION Sat Jan 13 09:42:26 2018
@@ -57,6 +57,6 @@ Collate:
 'types.R'
 'utils.R'
 'window.R'
-RoxygenNote: 5.0.1
+RoxygenNote: 6.0.1
 VignetteBuilder: knitr
 NeedsCompilation: no






svn commit: r24176 [9/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasTol.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasTol.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasTol.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 HasTol (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasVarianceCol.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasVarianceCol.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasVarianceCol.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 HasVarianceCol (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasWeightCol.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasWeightCol.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/HasWeightCol.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 HasWeightCol (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/SharedParamsCodeGen.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/SharedParamsCodeGen.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/SharedParamsCodeGen.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SharedParamsCodeGen (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.param.shared (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.param.shared (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/param/shared/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.param.shared Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrapperUtils.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrapperUtils.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrapperUtils.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RWrapperUtils (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrappers.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrappers.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/RWrappers.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RWrappers (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.r (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/r/package-summary.html
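
The ml.param.shared pages at the top of this part (HasTol, HasVarianceCol,
HasWeightCol, ...) are mix-in traits; they surface as plain setters on the
concrete estimators. A spark-shell sketch (the weight column name "w" is a
placeholder):

import org.apache.spark.ml.classification.LogisticRegression

val lr = new LogisticRegression()
  .setTol(1e-6)      // from HasTol
  .setWeightCol("w") // from HasWeightCol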

svn commit: r24176 [12/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/LogNormalGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/LogNormalGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/LogNormalGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LogNormalGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/PoissonGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/PoissonGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/PoissonGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 PoissonGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomDataGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomDataGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomDataGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RandomDataGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomRDDs.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomRDDs.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/RandomRDDs.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RandomRDDs (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/StandardNormalGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/StandardNormalGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/StandardNormalGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StandardNormalGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/UniformGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/UniformGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/UniformGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 UniformGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/WeibullGenerator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/WeibullGenerator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/WeibullGenerator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 WeibullGenerator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.random (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.random (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/random/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.random Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 


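The mllib.random pages above pair the RandomDataGenerator implementations with
the RandomRDDs factory. A minimal sketch (size, partition count, and seed are
arbitrary):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.random.RandomRDDs

object RandomRDDSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("random-rdds").setMaster("local[*]"))
    // normalRDD draws i.i.d. samples via StandardNormalGenerator.
    val normals = RandomRDDs.normalRDD(sc, 10000L, 4, seed = 42L)
    println(s"sample mean = ${normals.mean()}")
    sc.stop()
  }
}
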
svn commit: r24176 [22/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/internal/Logging.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/internal/Logging.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/internal/Logging.html
 Sat Jan 13 09:42:26 2018
@@ -57,7 +57,7 @@ log level is enabled.
   AnyRef, Any
 
 Known Subclasses
-AFTSurvivalRegression, AFTSurvivalRegressionModel, ALS, ALS, ALS, ALSModel, AssociationRules, AsyncRDDActions, BasicBlockReplicationPolicy, Binarizer, BinaryClassificationMetrics, BisectingKMeans, BisectingKMeans, BisectingKMeansModel, BisectingKMeansModel, BlockMatrix, Broadcast, Bucketizer, Builder, ChiSqSelector, ChiSqSelectorModel, ClassificationModel, Classifier, CoGroupedRDD, Column, ColumnName, ConstantInputDStream, CountVectorizer, CountVectorizerModel, CrossValidator, CrossValidatorModel, CrossValidatorModelWriter, DCT, DStream, DataFrameReader, DataStreamReader, DataValidators, DecisionTree, DecisionTree, DecisionTreeClassificationModel, DecisionTreeClassifier, DecisionTreeModel, DecisionTreeRegressionModel, DecisionTreeRegressor, DefaultTopologyMapper, DistributedLDAModel, DoubleRDDFunctions, EdgeRDD, EdgeRDDImpl, ElementwiseProduct, Estimator, ExecutionListenerManager, FPGrowth, FPGrowth, FPGrowthModel, FeatureHasher, FileBasedTopologyMapper, GBTClassificationModel, GBTClassifier, GBTRegressionModel, GBTRegressor, GaussianMixture, GaussianMixtureModel, GeneralizedLinearAlgorithm, GeneralizedLinearRegression, GeneralizedLinearRegressionModel, GradientBoostedTrees, GradientBoostedTrees, GradientDescent, GradientDescent, GraphGenerators, GraphLoader, HadoopMapRedCommitProtocol, HadoopMapReduceCommitProtocol, HadoopRDD, HashingTF, HiveContext, HiveFileFormat, IDF, IDFModel, Imputer, ImputerModel, IndexToString, InputDStream, InputFormatInfo, Interaction, IsotonicRegression, IsotonicRegressionModel, JdbcRDD, KMeans, KMeans, KMeansModel, KryoSerializer, LBFGS, LBFGS, LDA, LDA, LDAModel, LassoWithSGD, LinearRegression, LinearRegressionModel, LinearRegressionWithSGD, LinearSVC, LinearSVCModel, LocalLDAModel, LogisticRegression, LogisticRegressionModel, LogisticRegressionWithLBFGS, LogisticRegressionWithSGD, MLUtils, MLWriter, MapWithStateDStream, MatrixFactorizationModel, MaxAbsScaler, MaxAbsScalerModel, MinMaxScaler, MinMaxScalerModel, Model, MultilayerPerceptronClassificationModel, MultilayerPerceptronClassifier, NGram, NaiveBayes, NaiveBayes, NaiveBayesModel, NewHadoopRDD, Node, Normalizer, OneHotEncoder, OneHotEncoderEstimator, OneHotEncoderModel, OneVsRest, OneVsRestModel, OnlineLDAOptimizer, OrderedRDDFunctions, PCA, PCAModel, PageRank, PairRDDFunctions, PartitionPruningRDD, Pipeline, PipelineModel, PipelineStage, PolynomialExpansion, PowerIterationClustering, PredictionModel, Predictor, PrefixSpan, PrefixSpan, Pregel, ProbabilisticClassificationModel, ProbabilisticClassifier, QuantileDiscretizer, QuantileDiscretizer, RDD, RFormula, RFormulaModel, RandomBlockReplicationPolicy, RandomForest, RandomForestClassificationModel, RandomForestClassifier, RandomForestRegressionModel, RandomForestRegressor, RankingMetrics, ReceiverInputDStream, RegexTokenizer, RegressionMetrics, RegressionModel, RidgeRegressionWithSGD, RowMatrix, SQLContext, SQLTransformer, SVMWithSGD, SequenceFileRDDFunctions, ShuffledRDD, SizeEstimator, SparkConf, SparkContext, SparkContext, SparkEnv, SparkEnv, SparkHadoopMapRedUtil, SparkSession, StandardScaler, StandardScaler, StandardScalerModel, StatsReportListener, StopWordsRemover, StreamingContext, StreamingContext, StreamingKMeans, StreamingKMeansModel, StreamingLinearAlgorithm, StreamingLinearRegressionWithSGD, StreamingLogisticRegressionWithSGD, StreamingQueryManager, StreamingTest, StringIndexer, StringIndexerModel, Summarizer, Tokenizer, TrainValidationSplit, TrainValidationSplitModel, TrainValidationSplitModelWriter, Transformer, TypedColumn, UDFRegistration, UnaryTransformer, UnionRDD, VectorAssembler, VectorIndexer, VectorIndexerModel, VectorSizeHint, VectorSlicer, VertexRDD, VertexRDDImpl, Word2Vec, Word2Vec, Word2VecModel
+AFTSurvivalRegression, AFTSurvivalRegressionModel, ALS, ALS, ALS, ALSModel, AssociationRules, AsyncRDDActions, BasicBlockReplicationPolicy, Binarizer, BinaryClassificationMetrics, BisectingKMeans, BisectingKMeans, BisectingKMeansModel, BisectingKMeansModel, BlockMatrix, Broadcast, Bucketizer, Builder, ChiSqSelector, ChiSqSelectorModel, ClassificationModel, Classifier, CoGroupedRDD, Column, ColumnName, ConstantInputDStream, CountVectorizer, CountVectorizerModel, CrossValidator, CrossValidatorModel, CrossValidatorModelWriter, DCT, DStream, DataFrameReader, DataStreamReader, DataValidators, DecisionTree, DecisionTree, DecisionTreeClassificationModel,
 
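The Logging page diffed above lives in org.apache.spark.internal, so it is
Spark-internal API even though it appears in the published ScalaDoc; the long
Known Subclasses list is simply every class that mixes it in. A sketch of the
mix-in pattern (MyComponent is illustrative):

import org.apache.spark.internal.Logging

class MyComponent extends Logging {
  def run(): Unit = {
    logInfo("starting")                  // emitted only if INFO is enabled
    logDebug(s"expensive: ${compute()}") // by-name message, built lazily
  }
  private def compute(): Int = 1 + 1
}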

svn commit: r24176 [17/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/SessionConfigSupport.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/SessionConfigSupport.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/SessionConfigSupport.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SessionConfigSupport (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/WriteSupport.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/WriteSupport.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/WriteSupport.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 WriteSupport (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.sources.v2 (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.sources.v2 (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.sources.v2 Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataReader.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataReader.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataReader.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DataReader (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataSourceV2Reader.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataSourceV2Reader.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/DataSourceV2Reader.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DataSourceV2Reader (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/ReadTask.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/ReadTask.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/ReadTask.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ReadTask (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/Statistics.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/Statistics.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/Statistics.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Statistics (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/SupportsPushDownCatalystFilters.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/sources/v2/reader/SupportsPushDownCatalystFilters.html
 (original)
+++ 
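
The sources.v2 pages in this part (DataSourceV2Reader, ReadTask, DataReader,
SessionConfigSupport, WriteSupport, ...) are the new data source API in 2.3.0.
From the user side a v2 source is still loaded through DataFrameReader; a
sketch (the class name and option are placeholders for a source that would
implement ReadSupport and return the reader/task pipeline above):

import org.apache.spark.sql.SparkSession

object V2ReadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("v2-read").master("local[*]").getOrCreate()
    val df = spark.read
      .format("com.example.MyV2Source") // placeholder DataSourceV2 implementation
      .option("path", "/tmp/in")        // placeholder option
      .load()
    df.show()
    spark.stop()
  }
}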

svn commit: r24176 [11/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelector.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelector.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelector.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ChiSqSelector (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.Data.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.Data.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.Data.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ChiSqSelectorModel.SaveLoadV1_0$.Data (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.SaveLoadV1_0$.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ChiSqSelectorModel.SaveLoadV1_0$ (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ChiSqSelectorModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ChiSqSelectorModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ElementwiseProduct.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ElementwiseProduct.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/ElementwiseProduct.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ElementwiseProduct (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/HashingTF.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/HashingTF.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/HashingTF.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 HashingTF (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.DocumentFrequencyAggregator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.DocumentFrequencyAggregator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.DocumentFrequencyAggregator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 IDF.DocumentFrequencyAggregator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDF.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 IDF (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDFModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDFModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/IDFModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 IDFModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/Normalizer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/Normalizer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/feature/Normalizer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-

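The mllib.feature pages here (ChiSqSelector, HashingTF, IDF, Normalizer, ...)
are the RDD-based feature transformers. The classic HashingTF + IDF pairing,
as a sketch (the two toy documents are illustrative):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.feature.{HashingTF, IDF}

object TfIdfSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("tfidf").setMaster("local[*]"))
    val docs = sc.parallelize(Seq("spark is fast", "spark is general")).map(_.split(" ").toSeq)

    val tf = new HashingTF().transform(docs) // term frequencies per document
    tf.cache()                               // IDF makes two passes over tf
    val tfidf = new IDF().fit(tf).transform(tf)
    tfidf.collect().foreach(println)
    sc.stop()
  }
}
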
svn commit: r24176 [19/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations$.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocations$ (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocations.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocations (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus$.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocationsAndStatus$ (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsAndStatus.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocationsAndStatus (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds$.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocationsMultipleBlockIds$ (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetLocationsMultipleBlockIds.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetLocationsMultipleBlockIds (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds$.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetMatchingBlockIds$ (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMatchingBlockIds.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BlockManagerMessages.GetMatchingBlockIds (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMemoryStatus$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/storage/BlockManagerMessages.GetMemoryStatus$.html
 (original)
+++ 

svn commit: r24176 [1/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Author: sameerag
Date: Sat Jan 13 09:42:26 2018
New Revision: 24176

Log:
Apache Spark v2.3.0-rc1 docs

Modified:
dev/spark/v2.3.0-rc1-docs/_site/api/DESCRIPTION
dev/spark/v2.3.0-rc1-docs/_site/api/R/00Index.html
dev/spark/v2.3.0-rc1-docs/_site/api/R/00frame_toc.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-frame.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-noframe.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/constant-values.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/deprecated-list.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/help-doc.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/index-all.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/index.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Accumulable.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulableParam.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Accumulator.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.DoubleAccumulatorParam$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.FloatAccumulatorParam$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.StringAccumulatorParam$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/AccumulatorParam.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Aggregator.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanAccum.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanBroadcast.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanCheckpoint.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanRDD.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanShuffle.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanupTask.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/CleanupTaskWeakReference.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ComplexFutureAction.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Dependency.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ExceptionFailure.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ExecutorLostFailure.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ExecutorRegistered.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ExecutorRemoved.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ExpireDeadHosts.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/FetchFailed.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/FutureAction.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/HashPartitioner.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InternalAccumulator.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InternalAccumulator.input$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InternalAccumulator.output$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InternalAccumulator.shuffleRead$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InternalAccumulator.shuffleWrite$.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/InterruptibleIterator.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/JobExecutionStatus.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/JobSubmitter.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/NarrowDependency.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/OneToOneDependency.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Partition.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Partitioner.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/RangeDependency.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/RangePartitioner.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/Resubmitted.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/SerializableWritable.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ShuffleDependency.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ShuffleStatus.html

dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/SimpleFutureAction.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/SparkConf.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/SparkContext.html
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/SparkEnv.html

dev/spark/v2.3.0-rc1

svn commit: r24176 [15/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/GroupMappingServiceProvider.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/GroupMappingServiceProvider.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/GroupMappingServiceProvider.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 GroupMappingServiceProvider (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.security (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.security (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/security/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.security Class Hierarchy (Spark 2.3.0 JavaDoc)
-
+
 
 
 

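GroupMappingServiceProvider, diffed above, is a single-method contract; an
implementation is wired in through the spark.user.groups.mapping configuration
key. A sketch (the static group table is illustrative):

import org.apache.spark.security.GroupMappingServiceProvider

class StaticGroupMapper extends GroupMappingServiceProvider {
  override def getGroups(userName: String): Set[String] =
    if (userName == "admin") Set("admins", "users") else Set("users")
}
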
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DeserializationStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DeserializationStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DeserializationStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DeserializationStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DummySerializerInstance.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DummySerializerInstance.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/DummySerializerInstance.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 DummySerializerInstance (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaIterableWrapperSerializer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaIterableWrapperSerializer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaIterableWrapperSerializer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaIterableWrapperSerializer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaSerializer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaSerializer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/JavaSerializer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaSerializer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoRegistrator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoRegistrator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoRegistrator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 KryoRegistrator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoSerializer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoSerializer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/serializer/KryoSerializer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 KryoSerializer (Spark 2.3.0 JavaDoc)
-
+
 
 
 


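The serializer pages in this part end with KryoRegistrator and KryoSerializer.
A spark-shell sketch of enabling Kryo with a custom registrator (the
registered class is arbitrary):

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[Array[Double]]) // arbitrary example class
  }
}

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", classOf[MyRegistrator].getName)
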
svn commit: r24176 [18/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ObjectType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ObjectType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ObjectType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ObjectType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/SQLUserDefinedType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/SQLUserDefinedType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/SQLUserDefinedType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SQLUserDefinedType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ShortType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ShortType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/ShortType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ShortType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StringType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StringType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StringType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StringType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructField.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructField.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructField.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StructField (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/StructType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StructType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

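The sql.types pages above (ShortType, StringType, StructField, StructType,
TimestampType, ...) are the building blocks of DataFrame schemas. A
spark-shell sketch:

import org.apache.spark.sql.types.{StringType, StructField, StructType, TimestampType}

val schema = StructType(Seq(
  StructField("name", StringType, nullable = true),
  StructField("ts", TimestampType, nullable = false)
))
println(schema.treeString)
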
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/TimestampType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/TimestampType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/TimestampType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 TimestampType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/UDTRegistration.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/UDTRegistration.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/UDTRegistration.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 UDTRegistration (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/VarcharType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/VarcharType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/VarcharType.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 VarcharType (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.sql.types (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/sql/types/package-summary.html
 (original)
+++ 

svn commit: r24176 [6/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/BytecodeUtils.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/BytecodeUtils.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/BytecodeUtils.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 BytecodeUtils (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/GraphGenerators.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/GraphGenerators.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/GraphGenerators.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 GraphGenerators (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.graphx.util (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.graphx.util (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/graphx/util/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.graphx.util Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/PortableDataStream.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/PortableDataStream.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/PortableDataStream.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 PortableDataStream (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.input (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.input (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/input/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.input Class Hierarchy (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/internal/Logging.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/internal/Logging.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/internal/Logging.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Logging (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/internal/config/ConfigEntryWithDefault.html

svn commit: r24176 [23/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/SparkSession$implicits$.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/SparkSession$implicits$.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/SparkSession$implicits$.html
 Sat Jan 13 09:42:26 2018
@@ -1109,7 +1109,7 @@ When we have a Catalyst array which cont
 def
   
   
-newBooleanSeqEncoder: Encoder[Seq[Boolean]]
+newBooleanSeqEncoder: Encoder[Seq[Boolean]]
   
   
   
@@ -1126,7 +1126,7 @@ When we have a Catalyst array which cont
 def
   
   
-newByteSeqEncoder: Encoder[Seq[Byte]]
+newByteSeqEncoder: Encoder[Seq[Byte]]
   
   
   
@@ -1143,7 +1143,7 @@ When we have a Catalyst array which cont
 def
   
   
-newDoubleSeqEncoder: Encoder[Seq[Double]]
+newDoubleSeqEncoder: Encoder[Seq[Double]]
   
   
   
@@ -1160,7 +1160,7 @@ When we have a Catalyst array which cont
 def
   
   
-newFloatSeqEncoder: Encoder[Seq[Float]]
+newFloatSeqEncoder: Encoder[Seq[Float]]
   
   
   
@@ -1177,7 +1177,7 @@ When we have a Catalyst array which cont
 def
   
   
-newIntSeqEncoder: Encoder[Seq[Int]]
+newIntSeqEncoder: Encoder[Seq[Int]]
   
   
   
@@ -1194,7 +1194,7 @@ When we have a Catalyst array which cont
 def
   
   
-newLongSeqEncoder: Encoder[Seq[Long]]
+newLongSeqEncoder: Encoder[Seq[Long]]
   
   
   
@@ -1211,7 +1211,7 @@ When we have a Catalyst array which cont
 def
   
   
-newProductSeqEncoder[A : Product](implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): Encoder[Seq[A]]
+newProductSeqEncoder[A : Product](implicit arg0: scala.reflect.api.JavaUniverse.TypeTag[A]): Encoder[Seq[A]]
   
   
   
@@ -1228,7 +1228,7 @@ When we have a Catalyst array which cont
 def
   
   
-newShortSeqEncoder: Encoder[Seq[Short]]
+newShortSeqEncoder: Encoder[Seq[Short]]
   
   
   
@@ -1245,7 +1245,7 @@ When we have a Catalyst array which cont
 def
   
   
-newStringSeqEncoder: Encoder[Seq[String]]
+newStringSeqEncoder: Encoder[Seq[String]]
   
   
   
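The `newXxxSeqEncoder` members touched in this hunk are the implicit encoders behind building Datasets from Scala collections. A minimal usage sketch, assuming a local SparkSession (the object name and master URL below are illustrative, not part of the diff):

```
import org.apache.spark.sql.SparkSession

// Hedged sketch: the Seq encoders listed above are what let toDS() work on
// Scala collections once spark.implicits._ is in scope.
object EncoderSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("encoder-sketch").getOrCreate()
    import spark.implicits._ // brings newIntSeqEncoder and friends into scope

    val ds = Seq(Seq(1, 2), Seq(3)).toDS() // resolved via Encoder[Seq[Int]]
    ds.show()
    spark.stop()
  }
}
```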

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/AtomicType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/AtomicType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/AtomicType.html
 Sat Jan 13 09:42:26 2018
@@ -57,7 +57,7 @@
   DataType, AbstractDataType, 
AnyRef, Any
 
 Known Subclasses
-BinaryType, BinaryType, BooleanType, BooleanType, CharType, DateType, DateType, HiveStringType, NumericType, StringType, StringType, TimestampType, TimestampType, VarcharType
+BinaryType, BinaryType, BooleanType, BooleanType, CharType, DateType, DateType, HiveStringType, NumericType, StringType, 
StringType, TimestampType, TimestampType, VarcharType
   
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/DataType.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/DataType.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/scala/org/apache/spark/sql/types/DataType.html
 Sat Jan 13 09:42:26 2018
@@ -60,7 +60,7 @@
   AbstractDataType, 
AnyRef, Any
 
 Known Subclasses
-ArrayType, AtomicType, BinaryType, BinaryType, BooleanType, BooleanType, CalendarIntervalType,
 CalendarIntervalType,
 CharType, DateType, DateType, HiveStringType, MapType, NullType, NullType, NumericType, ObjectType, StringType, StringType, StructType, TimestampType, TimestampType, VarcharType
+ArrayType, AtomicType, BinaryType, BinaryType, BooleanType, BooleanType, CalendarIntervalType,
 CalendarIntervalType,
 CharType, DateType, DateType, HiveStringType, MapType, NullType, NullType, NumericType, ObjectType, StringType, StringType, StructType, TimestampType, TimestampType, VarcharType
   
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/sql/index.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/sql/index.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/sql/index.html Sat Jan 13 09:42:26 2018
@@ -3152,5 +3152,5 @@ a timestamp if the fmt is o
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/sql/sitemap.xml
==

svn commit: r24176 [13/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurities.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurities.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurities.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Impurities (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurity.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurity.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Impurity.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Impurity (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Variance.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Variance.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/Variance.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Variance (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.tree.impurity (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.tree.impurity (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/impurity/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.mllib.tree.impurity Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/AbsoluteError.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/AbsoluteError.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/AbsoluteError.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 AbsoluteError (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/LogLoss.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/LogLoss.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/LogLoss.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 LogLoss (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Loss.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Loss.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Loss.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Loss (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Losses.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Losses.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/mllib/tree/loss/Losses.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Losses (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 

svn commit: r24176 [10/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/impl/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/impl/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/impl/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.tree.impl Class Hierarchy (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-frame.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-frame.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-frame.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.tree (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-summary.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-summary.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-summary.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.tree (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-tree.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-tree.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tree/package-tree.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 org.apache.spark.ml.tree Class Hierarchy (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 CrossValidator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.CrossValidatorModelWriter.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.CrossValidatorModelWriter.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.CrossValidatorModelWriter.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 CrossValidatorModel.CrossValidatorModelWriter (Spark 2.3.0 
JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/CrossValidatorModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 CrossValidatorModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/ParamGridBuilder.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/ParamGridBuilder.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/ParamGridBuilder.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ParamGridBuilder (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/TrainValidationSplit.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/TrainValidationSplit.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/TrainValidationSplit.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 TrainValidationSplit (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/TrainValidationSplitModel.TrainValidationSplitModelWriter.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/tuning/TrainValidationSplitModel.TrainValidationSplitModelWriter.html
 (original)
+++ 

svn commit: r24176 [21/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MemoryParam.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MemoryParam.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MemoryParam.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 MemoryParam (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MethodIdentifier.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MethodIdentifier.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MethodIdentifier.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 MethodIdentifier (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MutablePair.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MutablePair.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/MutablePair.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 MutablePair (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ReturnStatementFinder.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ReturnStatementFinder.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ReturnStatementFinder.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ReturnStatementFinder (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/RpcUtils.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/RpcUtils.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/RpcUtils.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RpcUtils (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ShutdownHookManager.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ShutdownHookManager.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/ShutdownHookManager.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 ShutdownHookManager (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SignalUtils.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SignalUtils.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SignalUtils.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SignalUtils (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SizeEstimator.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SizeEstimator.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SizeEstimator.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SizeEstimator (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkExitCode.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkExitCode.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkExitCode.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkExitCode (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkShutdownHook.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkShutdownHook.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/SparkShutdownHook.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SparkShutdownHook (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/StatCounter.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/StatCounter.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/util/StatCounter.html 
Sat Jan 13 09:42:26 2018

svn commit: r24176 [4/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: dev/spark/v2.3.0-rc1-docs/_site/api/R/00frame_toc.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/R/00frame_toc.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/R/00frame_toc.html Sat Jan 13 09:42:26 
2018
@@ -86,6 +86,236 @@ dt, p code {
 
 SparkR
 
+
+AFTSurvivalRegressionModel-class
+ALSModel-class
+BisectingKMeansModel-class
+DecisionTreeClassificationModel-class
+DecisionTreeRegressionModel-class
+FPGrowthModel-class
+GBTClassificationModel-class
+GBTRegressionModel-class
+GaussianMixtureModel-class
+GeneralizedLinearRegressionModel-class
+GroupedData
+IsotonicRegressionModel-class
+KMeansModel-class
+KSTest-class
+LDAModel-class
+LinearSVCModel-class
+LogisticRegressionModel-class
+MultilayerPerceptronClassificationModel-class
+NaiveBayesModel-class
+RandomForestClassificationModel-class
+RandomForestRegressionModel-class
+SparkDataFrame
+StreamingQuery
+WindowSpec
+alias
+approxQuantile
+arrange
+as.data.frame
+attach
+avg
+awaitTermination
+between
+broadcast
+cache
+cacheTable
+cancelJobGroup
+cast
+checkpoint
+clearCache
+clearJobGroup
+coalesce
+collect
+coltypes
+column
+columnaggregatefunctions
+columncollectionfunctions
+columndatetimediff_functions
+columndatetimefunctions
+columnmathfunctions
+columnmiscfunctions
+columnnonaggregatefunctions
+columnstringfunctions
+columnwindowfunctions
+columnfunctions
+columns
+corr
+count
+cov
+createDataFrame
+createExternalTable-deprecated
+createOrReplaceTempView
+createTable
+crossJoin
+crosstab
+cube
+currentDatabase
+dapply
+dapplyCollect
+describe
+dim
+distinct
+drop
+dropDuplicates
+dropTempTable-deprecated
+dropTempView
+dtypes
+endsWith
+eqnullsafe
+except
+explain
+filter
+first
+fitted
+freqItems
+gapply
+gapplyCollect
+getLocalProperty
+getNumPartitions
+glm
+groupBy
+hashCode
+head
+hint
+histogram
+insertInto
+install.spark
+intersect
+isActive
+isLocal
+isStreaming
+join
+last
+lastProgress
+limit
+listColumns
+listDatabases
+listFunctions
+listTables
+localCheckpoint
+match
+merge
+mutate
+nafunctions
+ncol
+not
+nrow
+orderBy
+otherwise
+over
+partitionBy
+persist
+pivot
+predict
+print.jobj
+print.structField
+print.structType
+printSchema
+queryName
+randomSplit
+rangeBetween
+rbind
+read.df
+read.jdbc
+read.json
+read.ml
+read.orc
+read.parquet
+read.stream
+read.text
+recoverPartitions
+refreshByPath
+refreshTable
+registerTempTable-deprecated
+rename
+repartition
+rollup
+rowsBetween
+sample
+sampleBy
+saveAsTable
+schema
+select
+selectExpr
+setCheckpointDir
+setCurrentDatabase
+setJobDescription
+setJobGroup
+setLocalProperty
+setLogLevel
+show
+showDF
+spark.addFile
+spark.als
+spark.bisectingKmeans
+spark.decisionTree
+spark.fpGrowth
+spark.gaussianMixture
+spark.gbt
+spark.getSparkFiles
+spark.getSparkFilesRootDirectory
+spark.glm
+spark.isoreg
+spark.kmeans
+spark.kstest
+spark.lapply
+spark.lda
+spark.logit
+spark.mlp
+spark.naiveBayes
+spark.randomForest
+spark.survreg
+spark.svmLinear
+sparkR.callJMethod
+sparkR.callJStatic
+sparkR.conf
+sparkR.init-deprecated
+sparkR.newJObject
+sparkR.session
+sparkR.session.stop
+sparkR.uiWebUrl
+sparkR.version
+sparkRHive.init-deprecated
+sparkRSQL.init-deprecated
+sql
+startsWith
+status
+stopQuery
+storageLevel
+str
+structField
+structType
+subset
+substr
+summarize
+summary
+tableNames
+tableToDF
+tables
+take
+toJSON
+uncacheTable
+union
+unionByName
+unpersist
+windowOrderBy
+windowPartitionBy
+with
+withColumn
+withWatermark
+write.df
+write.jdbc
+write.json
+write.ml
+write.orc
+write.parquet
+write.stream
+write.text
+
+
 Generated with knitr 1.18 (https://yihui.name/knitr)
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-frame.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-frame.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-frame.html Sat Jan 13 
09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 All Classes (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-noframe.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-noframe.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/java/allclasses-noframe.html Sat Jan 13 
09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 All Classes (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/java/constant-values.html
==
--- dev/spark/v2.3.0-rc1-docs/_site/api/java/constant-values.html (original)
+++ dev/spark/v2.3.0-rc1-docs/_site/api/java/constant-values.html Sat Jan 13 
09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Constant Field Values (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: dev/spark/v2.3.0-rc1-docs/_site/api/java/deprecated-list.html

svn commit: r24176 [8/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RFormulaParser.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RFormulaParser.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RFormulaParser.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RFormulaParser (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RegexTokenizer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RegexTokenizer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/RegexTokenizer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 RegexTokenizer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/SQLTransformer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/SQLTransformer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/SQLTransformer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 SQLTransformer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScaler.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScaler.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScaler.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StandardScaler (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScalerModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScalerModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StandardScalerModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StandardScalerModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StopWordsRemover.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StopWordsRemover.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StopWordsRemover.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StopWordsRemover (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StringIndexer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexerModel.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexerModel.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/StringIndexerModel.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 StringIndexerModel (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/Tokenizer.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/Tokenizer.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/Tokenizer.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Tokenizer (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/VectorAssembler.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/VectorAssembler.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/VectorAssembler.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 VectorAssembler (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/ml/feature/VectorAttributeRewriter.html

svn commit: r24176 [5/23] - in /dev/spark/v2.3.0-rc1-docs/_site/api: ./ R/ java/ java/org/apache/spark/ java/org/apache/spark/api/java/ java/org/apache/spark/api/java/function/ java/org/apache/spark/a

2018-01-13 Thread sameerag
Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaHadoopRDD (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaNewHadoopRDD (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaPairRDD.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaPairRDD.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaPairRDD.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaPairRDD (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDD.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDD.html 
(original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDD.html 
Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaRDD (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDDLike.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDDLike.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaRDDLike.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaRDDLike (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkContext.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkContext.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkContext.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaSparkContext (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaSparkStatusTracker (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.SerializableMapWrapper.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.SerializableMapWrapper.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.SerializableMapWrapper.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaUtils.SerializableMapWrapper (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/JavaUtils.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 JavaUtils (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/Optional.html
==
--- 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/Optional.html
 (original)
+++ 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/Optional.html
 Sat Jan 13 09:42:26 2018
@@ -2,9 +2,9 @@
 
 
 
-
+
 Optional (Spark 2.3.0 JavaDoc)
-
+
 
 
 

Modified: 
dev/spark/v2.3.0-rc1-docs/_site/api/java/org/apache/spark/api/java/StorageLevels.html
==
--- 

[1/2] spark git commit: Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

2018-01-12 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 02176f4c2 -> 60bcb4685


http://git-wip-us.apache.org/repos/asf/spark/blob/60bcb468/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
index a4a857f..f0bdf84 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
@@ -81,11 +81,9 @@ case class WriteToDataSourceV2Exec(writer: 
DataSourceV2Writer, query: SparkPlan)
 (index, message: WriterCommitMessage) => messages(index) = message
   )
 
-  if (!writer.isInstanceOf[ContinuousWriter]) {
-logInfo(s"Data source writer $writer is committing.")
-writer.commit(messages)
-logInfo(s"Data source writer $writer committed.")
-  }
+  logInfo(s"Data source writer $writer is committing.")
+  writer.commit(messages)
+  logInfo(s"Data source writer $writer committed.")
 } catch {
   case _: InterruptedException if writer.isInstanceOf[ContinuousWriter] =>
 // Interruption is how continuous queries are ended, so accept and 
ignore the exception.
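
For orientation, the guarded commit path removed by this revert can be sketched with simplified stand-in types (the traits below approximate, but are not, the Spark-internal writer interfaces):

```
// Hedged sketch: only non-continuous writers commit here, and an
// InterruptedException from a continuous writer is treated as a normal
// shutdown signal rather than a failure.
trait WriterCommitMessage
trait DataSourceWriter { def commit(messages: Array[WriterCommitMessage]): Unit }
trait ContinuousWriter extends DataSourceWriter

def finishWrite(writer: DataSourceWriter, messages: Array[WriterCommitMessage]): Unit = {
  try {
    if (!writer.isInstanceOf[ContinuousWriter]) {
      writer.commit(messages) // batch path: commit the collected messages once
    }
  } catch {
    case _: InterruptedException if writer.isInstanceOf[ContinuousWriter] =>
      // Interruption is how continuous queries are ended; accept and ignore it.
  }
}
```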

http://git-wip-us.apache.org/repos/asf/spark/blob/60bcb468/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
index cf27e1a..24a8b00 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
@@ -142,8 +142,7 @@ abstract class StreamExecution(
 
   override val id: UUID = UUID.fromString(streamMetadata.id)
 
-  override def runId: UUID = currentRunId
-  protected var currentRunId = UUID.randomUUID
+  override val runId: UUID = UUID.randomUUID
 
   /**
* Pretty identified string of printing in logs. Format is
@@ -419,17 +418,11 @@ abstract class StreamExecution(
* Blocks the current thread until processing for data from the given 
`source` has reached at
* least the given `Offset`. This method is intended for use primarily when 
writing tests.
*/
-  private[sql] def awaitOffset(sourceIndex: Int, newOffset: Offset): Unit = {
+  private[sql] def awaitOffset(source: BaseStreamingSource, newOffset: 
Offset): Unit = {
 assertAwaitThread()
 def notDone = {
   val localCommittedOffsets = committedOffsets
-  if (sources == null) {
-// sources might not be initialized yet
-false
-  } else {
-val source = sources(sourceIndex)
-!localCommittedOffsets.contains(source) || 
localCommittedOffsets(source) != newOffset
-  }
+  !localCommittedOffsets.contains(source) || localCommittedOffsets(source) 
!= newOffset
 }
 
 while (notDone) {
@@ -443,7 +436,7 @@ abstract class StreamExecution(
 awaitProgressLock.unlock()
   }
 }
-logDebug(s"Unblocked at $newOffset for ${sources(sourceIndex)}")
+logDebug(s"Unblocked at $newOffset for $source")
   }
 
   /** A flag to indicate that a batch has completed with no new data 
available. */
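
The `awaitOffset` hunk above is, at heart, a condition-variable poll. A self-contained sketch under assumed simplified types (string source ids and long offsets instead of Spark's `BaseStreamingSource` and `Offset`):

```
import java.util.concurrent.locks.ReentrantLock

// Hedged sketch of the wait-until-committed pattern: callers block until the
// committed-offsets map shows the given source at the given offset.
object AwaitOffsetSketch {
  private val lock = new ReentrantLock()
  private val progress = lock.newCondition()
  private var committed = Map.empty[String, Long]

  def recordCommit(source: String, offset: Long): Unit = {
    lock.lock()
    try { committed += source -> offset; progress.signalAll() }
    finally lock.unlock()
  }

  def awaitOffset(source: String, offset: Long): Unit = {
    lock.lock()
    try {
      while (!committed.get(source).contains(offset)) progress.await()
    } finally lock.unlock()
  }
}
```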

http://git-wip-us.apache.org/repos/asf/spark/blob/60bcb468/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
index e700aa4..d79e4bd 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
@@ -77,6 +77,7 @@ class ContinuousDataSourceRDD(
 dataReaderThread.start()
 
 context.addTaskCompletionListener(_ => {
+  reader.close()
   dataReaderThread.interrupt()
   epochPollExecutor.shutdown()
 })
@@ -200,8 +201,6 @@ class DataReaderThread(
 failedFlag.set(true)
 // Don't rethrow the exception in this thread. It's not needed, and 
the default Spark
 // exception handler will kill the executor.
-} finally {
-  reader.close()
 }
   }
 }
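
The last hunk moves `reader.close()` between a task-completion listener and the reader thread's `finally` block. The lifecycle difference can be sketched as a hypothetical simplification, with `java.io.Closeable` standing in for the continuous data reader:

```
import java.io.Closeable

// Hedged sketch: closing in the completion callback ties the reader's
// lifetime to the task; closing in the thread's finally block (the reverted
// variant) ties it to the reader thread instead.
def runReaderTask(reader: Closeable, onTaskCompletion: (() => Unit) => Unit): Thread = {
  onTaskCompletion(() => reader.close()) // close with the task, not the thread
  val t = new Thread(() => {
    try {
      // ... pull records from the reader until interrupted ...
    } catch {
      case _: InterruptedException => // normal shutdown for a continuous task
    }
  })
  t.start()
  t
}
```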


[2/2] spark git commit: Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

2018-01-12 Thread sameerag
Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

This reverts commit f891ee3249e04576dd579cbab6f8f1632550e6bd.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/60bcb468
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/60bcb468
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/60bcb468

Branch: refs/heads/branch-2.3
Commit: 60bcb4685022c29a6ddcf707b505369687ec7da6
Parents: 02176f4
Author: Sameer Agarwal 
Authored: Fri Jan 12 15:07:14 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 12 15:07:14 2018 -0800

--
 .../sql/kafka010/KafkaContinuousReader.scala| 232 -
 .../sql/kafka010/KafkaContinuousWriter.scala| 119 -
 .../spark/sql/kafka010/KafkaOffsetReader.scala  |  21 +-
 .../apache/spark/sql/kafka010/KafkaSource.scala |  17 +-
 .../spark/sql/kafka010/KafkaSourceOffset.scala  |   7 +-
 .../sql/kafka010/KafkaSourceProvider.scala  | 105 +---
 .../spark/sql/kafka010/KafkaWriteTask.scala |  71 ++-
 .../apache/spark/sql/kafka010/KafkaWriter.scala |   5 +-
 .../sql/kafka010/KafkaContinuousSinkSuite.scala | 474 ---
 .../kafka010/KafkaContinuousSourceSuite.scala   |  96 
 .../sql/kafka010/KafkaContinuousTest.scala  |  64 ---
 .../spark/sql/kafka010/KafkaSourceSuite.scala   | 470 +-
 .../org/apache/spark/sql/DataFrameReader.scala  |  32 +-
 .../org/apache/spark/sql/DataFrameWriter.scala  |  25 +-
 .../datasources/v2/WriteToDataSourceV2.scala|   8 +-
 .../execution/streaming/StreamExecution.scala   |  15 +-
 .../ContinuousDataSourceRDDIter.scala   |   3 +-
 .../continuous/ContinuousExecution.scala|  67 ++-
 .../streaming/continuous/EpochCoordinator.scala |  21 +-
 .../spark/sql/streaming/DataStreamWriter.scala  |  26 +-
 .../apache/spark/sql/streaming/StreamTest.scala |  36 +-
 21 files changed, 383 insertions(+), 1531 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/60bcb468/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
--
diff --git 
a/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
 
b/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
deleted file mode 100644
index 9283795..000
--- 
a/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
+++ /dev/null
@@ -1,232 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.spark.sql.kafka010
-
-import java.{util => ju}
-
-import org.apache.kafka.clients.consumer.ConsumerRecord
-import org.apache.kafka.common.TopicPartition
-
-import org.apache.spark.internal.Logging
-import org.apache.spark.sql.SparkSession
-import org.apache.spark.sql.catalyst.expressions.UnsafeRow
-import org.apache.spark.sql.catalyst.expressions.codegen.{BufferHolder, 
UnsafeRowWriter}
-import org.apache.spark.sql.catalyst.util.DateTimeUtils
-import 
org.apache.spark.sql.kafka010.KafkaSource.{INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_FALSE,
 INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_TRUE}
-import org.apache.spark.sql.sources.v2.reader._
-import org.apache.spark.sql.sources.v2.streaming.reader.{ContinuousDataReader, 
ContinuousReader, Offset, PartitionOffset}
-import org.apache.spark.sql.types.StructType
-import org.apache.spark.unsafe.types.UTF8String
-
-/**
- * A [[ContinuousReader]] for data from kafka.
- *
- * @param offsetReader  a reader used to get kafka offsets. Note that the 
actual data will be
- *  read by per-task consumers generated later.
- * @param kafkaParams   String params for per-task Kafka consumers.
- * @param sourceOptions The 
[[org.apache.spark.sql.sources.v2.DataSourceV2Options]] params which
- *  are not Kafka consumer params.
- * @param metadataPath Path to a directory this reader can use for writing 
metadata.
- * @param 

[2/2] spark git commit: Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

2018-01-12 Thread sameerag
Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

This reverts commit 6f7aaed805070d29dcba32e04ca7a1f581fa54b9.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/55dbfbca
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/55dbfbca
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/55dbfbca

Branch: refs/heads/master
Commit: 55dbfbca37ce4c05f83180777ba3d4fe2d96a02e
Parents: 5427739
Author: Sameer Agarwal 
Authored: Fri Jan 12 15:00:00 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 12 15:00:00 2018 -0800

--
 .../sql/kafka010/KafkaContinuousReader.scala| 232 -
 .../sql/kafka010/KafkaContinuousWriter.scala| 119 -
 .../spark/sql/kafka010/KafkaOffsetReader.scala  |  21 +-
 .../apache/spark/sql/kafka010/KafkaSource.scala |  17 +-
 .../spark/sql/kafka010/KafkaSourceOffset.scala  |   7 +-
 .../sql/kafka010/KafkaSourceProvider.scala  | 105 +---
 .../spark/sql/kafka010/KafkaWriteTask.scala |  71 ++-
 .../apache/spark/sql/kafka010/KafkaWriter.scala |   5 +-
 .../sql/kafka010/KafkaContinuousSinkSuite.scala | 474 ---
 .../kafka010/KafkaContinuousSourceSuite.scala   |  96 
 .../sql/kafka010/KafkaContinuousTest.scala  |  64 ---
 .../spark/sql/kafka010/KafkaSourceSuite.scala   | 470 +-
 .../org/apache/spark/sql/DataFrameReader.scala  |  32 +-
 .../org/apache/spark/sql/DataFrameWriter.scala  |  25 +-
 .../datasources/v2/WriteToDataSourceV2.scala|   8 +-
 .../execution/streaming/StreamExecution.scala   |  15 +-
 .../ContinuousDataSourceRDDIter.scala   |   3 +-
 .../continuous/ContinuousExecution.scala|  67 ++-
 .../streaming/continuous/EpochCoordinator.scala |  21 +-
 .../spark/sql/streaming/DataStreamWriter.scala  |  26 +-
 .../apache/spark/sql/streaming/StreamTest.scala |  36 +-
 21 files changed, 383 insertions(+), 1531 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/55dbfbca/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
--
diff --git 
a/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
 
b/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
deleted file mode 100644
index 9283795..000
--- 
a/external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaContinuousReader.scala
+++ /dev/null
@@ -1,232 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to You under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- *http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package org.apache.spark.sql.kafka010
-
-import java.{util => ju}
-
-import org.apache.kafka.clients.consumer.ConsumerRecord
-import org.apache.kafka.common.TopicPartition
-
-import org.apache.spark.internal.Logging
-import org.apache.spark.sql.SparkSession
-import org.apache.spark.sql.catalyst.expressions.UnsafeRow
-import org.apache.spark.sql.catalyst.expressions.codegen.{BufferHolder, 
UnsafeRowWriter}
-import org.apache.spark.sql.catalyst.util.DateTimeUtils
-import 
org.apache.spark.sql.kafka010.KafkaSource.{INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_FALSE,
 INSTRUCTION_FOR_FAIL_ON_DATA_LOSS_TRUE}
-import org.apache.spark.sql.sources.v2.reader._
-import org.apache.spark.sql.sources.v2.streaming.reader.{ContinuousDataReader, 
ContinuousReader, Offset, PartitionOffset}
-import org.apache.spark.sql.types.StructType
-import org.apache.spark.unsafe.types.UTF8String
-
-/**
- * A [[ContinuousReader]] for data from kafka.
- *
- * @param offsetReader  a reader used to get kafka offsets. Note that the 
actual data will be
- *  read by per-task consumers generated later.
- * @param kafkaParams   String params for per-task Kafka consumers.
- * @param sourceOptions The 
[[org.apache.spark.sql.sources.v2.DataSourceV2Options]] params which
- *  are not Kafka consumer params.
- * @param metadataPath Path to a directory this reader can use for writing 
metadata.
- * @param 

[1/2] spark git commit: Revert "[SPARK-22908] Add kafka source and sink for continuous processing."

2018-01-12 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 54277398a -> 55dbfbca3


http://git-wip-us.apache.org/repos/asf/spark/blob/55dbfbca/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
index a4a857f..f0bdf84 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2.scala
@@ -81,11 +81,9 @@ case class WriteToDataSourceV2Exec(writer: 
DataSourceV2Writer, query: SparkPlan)
 (index, message: WriterCommitMessage) => messages(index) = message
   )
 
-  if (!writer.isInstanceOf[ContinuousWriter]) {
-logInfo(s"Data source writer $writer is committing.")
-writer.commit(messages)
-logInfo(s"Data source writer $writer committed.")
-  }
+  logInfo(s"Data source writer $writer is committing.")
+  writer.commit(messages)
+  logInfo(s"Data source writer $writer committed.")
 } catch {
   case _: InterruptedException if writer.isInstanceOf[ContinuousWriter] =>
 // Interruption is how continuous queries are ended, so accept and 
ignore the exception.

http://git-wip-us.apache.org/repos/asf/spark/blob/55dbfbca/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
index cf27e1a..24a8b00 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
@@ -142,8 +142,7 @@ abstract class StreamExecution(
 
   override val id: UUID = UUID.fromString(streamMetadata.id)
 
-  override def runId: UUID = currentRunId
-  protected var currentRunId = UUID.randomUUID
+  override val runId: UUID = UUID.randomUUID
 
   /**
* Pretty identified string of printing in logs. Format is
@@ -419,17 +418,11 @@ abstract class StreamExecution(
* Blocks the current thread until processing for data from the given 
`source` has reached at
* least the given `Offset`. This method is intended for use primarily when 
writing tests.
*/
-  private[sql] def awaitOffset(sourceIndex: Int, newOffset: Offset): Unit = {
+  private[sql] def awaitOffset(source: BaseStreamingSource, newOffset: 
Offset): Unit = {
 assertAwaitThread()
 def notDone = {
   val localCommittedOffsets = committedOffsets
-  if (sources == null) {
-// sources might not be initialized yet
-false
-  } else {
-val source = sources(sourceIndex)
-!localCommittedOffsets.contains(source) || 
localCommittedOffsets(source) != newOffset
-  }
+  !localCommittedOffsets.contains(source) || localCommittedOffsets(source) 
!= newOffset
 }
 
 while (notDone) {
@@ -443,7 +436,7 @@ abstract class StreamExecution(
 awaitProgressLock.unlock()
   }
 }
-logDebug(s"Unblocked at $newOffset for ${sources(sourceIndex)}")
+logDebug(s"Unblocked at $newOffset for $source")
   }
 
   /** A flag to indicate that a batch has completed with no new data 
available. */

http://git-wip-us.apache.org/repos/asf/spark/blob/55dbfbca/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
index e700aa4..d79e4bd 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/ContinuousDataSourceRDDIter.scala
@@ -77,6 +77,7 @@ class ContinuousDataSourceRDD(
 dataReaderThread.start()
 
 context.addTaskCompletionListener(_ => {
+  reader.close()
   dataReaderThread.interrupt()
   epochPollExecutor.shutdown()
 })
@@ -200,8 +201,6 @@ class DataReaderThread(
 failedFlag.set(true)
 // Don't rethrow the exception in this thread. It's not needed, and 
the default Spark
 // exception handler will kill the executor.
-} finally {
-  reader.close()
 }
   }
 }


svn commit: r24164 - /dev/spark/KEYS

2018-01-12 Thread sameerag
Author: sameerag
Date: Fri Jan 12 20:35:10 2018
New Revision: 24164

Log:
Update Keys

Modified:
dev/spark/KEYS

Modified: dev/spark/KEYS
==
--- dev/spark/KEYS (original)
+++ dev/spark/KEYS Fri Jan 12 20:35:10 2018
@@ -403,3 +403,40 @@ dcqbOYBLINwxIMZA6N9qCGrST4DfqbAzGSvZ08oe
 =et2/
 -END PGP PUBLIC KEY BLOCK-
 
+pub   rsa2048/A1CEDBA8AD0C022A 2018-01-11 [SC]
+  FA757B8D64ABBC21FC02BC1CA1CEDBA8AD0C022A
+uid [ultimate] Sameer Agarwal <samee...@apache.org>
+sub   rsa2048/5B0E7FAD797FCBE2 2018-01-11 [E]
+
+-BEGIN PGP PUBLIC KEY BLOCK-
+
+mQENBFpX9XgBCADGZb9Jywy7gJuoyzX3+8JA7kPnc6Ah/mTbCemzkq+NkrMQ+eXP
+D6IyHH+ktCp8rG0KEZph3BwQ9m/9YpvGpyUjEAl7miWvnYQCoBfhoMdoM+/9R77G
+yaUgV1z85n0rI7+EUmstitb1Q1qu6FJgO0r/YOBImEqD0VID+vuDVEmjg9DPX2K/
+fADhKHvQDbR5car8Oh9lXEdxn6oRdQif9spkX26P75Oa7oLbK5s1PQm/z2Wn0q6/
+9tsh+HNCKU4oNTboTXiuNEI4S3ypjb5zsSL2PMmxw+eSV859lBuL/THRN1xe3+3h
+EK6Ma3UThtNcHpOHx+YJmiWahic9NHvO58jHABEBAAG0JFNhbWVlciBBZ2Fyd2Fs
+IDxzYW1lZXJhZ0BhcGFjaGUub3JnPokBTgQTAQgAOBYhBPp1e41kq7wh/AK8HKHO
+26itDAIqBQJaV/V4AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJEKHO26it
+DAIqIZYH/AoMHZ27lfK1XfQqEujmz5KSWsSVImgMh/t7F61D9sIvnoiMkrhP9/RG
+R/LJA8bIEIBR906Lto4fcuDboUhNYlGpOsJGSTQeEnGpuonNzNpOssFXYfxrGSRe
+M062/9GwvOer7MthhLbNYSzah6lYnijHe67a5woL3mLEnJj0a8vc0DH0jxpe0d/8
+f0VVQnWe+oZOiFx/Gp+RLfqtnMQ+FrPlGu7WFDseXd9NtMzEVQpoQoBbJ29nBvAU
+4AXjuBZa0dR7cZr4u8C+QMkJOBPEQcyBHYv0/MOT3ggABuLTSdJcGsj7NdCxkSZ2
+NTjjgi+OzLqsdU4srniy8vVDuaIqBhi5AQ0EWlf1eAEIAMk/n66XAoetLEyBHOO7
+wZJNnnCssuGOFh4+xLelOeB4Tx4fKeU9wWGUPaqHbyQJbYxEmVPH0Rq/VTfRYgGl
+XuJXgi7f0A/Q0bhxc5A3DRMl5ifnT6Ame9yOUq9BFoH/VG7qO/GVQ7yRrp+cmj5h
+kTSMUxYrzvHWzozxj9/P1bE5EGGsDjaHkA9t3RuzzV/mKjwpyCep72IxMbmRMfPM
+vD/KaKfNryvyEBmqQpdvJXXremfs3warmvhkYnSpkIeUrRjt32jMO4MHzzC74w+J
+/Cn4+0A/YuvFfU0YnjySRNMqpgT2EFA802QI+Mwj2D6fat8oKhnVvBAY+wHal1c2
+m/UAEQEAAYkBNgQYAQgAIBYhBPp1e41kq7wh/AK8HKHO26itDAIqBQJaV/V4AhsM
+AAoJEKHO26itDAIqMi4IAJ1dyai2f03R1AgzI+W5enp8989vf5KVxwDPv4tJX87o
+sAOSNYmPRXBbj2Hr2N+A+656vx3KkIIozuwuVSDbVDdDnxS6dUqvmA07qtKRXWEO
+da8taStwiaetbCJQkLOr1kyrL6XgL+t5E1jMcDmZxF2Owu4NSaEVERtkovY89V4m
+Ku0fEiDWr/6SWUcPnyPGpwZKccShDGl8JuwM/uRO5HKLeAJp93poqWeOtnpw1Xpw
+RiLNdJXDBol1/+xtV2O3CzX0i4o6Z/hhderuJc/v57LlP/PnOVkGG4/mZA8G/kSC
+jUFFi/fz1oSCMpcpdSOAhCs4oRFv2POgXTCLkpOJNSU=
+=Oc/a
+-END PGP PUBLIC KEY BLOCK-
+
+
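
Since the KEYS file exists so that users can check release signatures, a small verification sketch may be useful here (it assumes `gpg` is installed; the artifact file names are placeholders, not taken from this commit):

```
import scala.sys.process._

// Hedged sketch: import the project KEYS file, then verify a downloaded
// release artifact against its detached .asc signature.
object VerifyReleaseSketch {
  def main(args: Array[String]): Unit = {
    "gpg --import KEYS".!
    "gpg --verify spark-2.3.0-bin-hadoop2.7.tgz.asc spark-2.3.0-bin-hadoop2.7.tgz".!
  }
}
```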






spark git commit: [MINOR][BUILD] Fix Java linter errors

2018-01-12 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 6152da389 -> db27a9365


[MINOR][BUILD] Fix Java linter errors

## What changes were proposed in this pull request?

This PR cleans up the java-lint errors (for the v2.3.0-rc1 tag). Hopefully, this
will be the final one.

```
$ dev/lint-java
Using `mvn` from path: /usr/local/bin/mvn
Checkstyle checks failed at following occurrences:
[ERROR] src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java:[85] (sizes) LineLength: Line is longer than 100 characters (found 101).
[ERROR] src/main/java/org/apache/spark/launcher/InProcessAppHandle.java:[20,8] (imports) UnusedImports: Unused import - java.io.IOException.
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java:[41,9] (modifier) ModifierOrder: 'private' modifier out of order with the JLS suggestions.
[ERROR] src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java:[464] (sizes) LineLength: Line is longer than 100 characters (found 102).

## How was this patch tested?

Manual.

```
$ dev/lint-java
Using `mvn` from path: /usr/local/bin/mvn
Checkstyle checks passed.
```
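
Two of the four violations are LineLength errors, and the diff below fixes both the same
way: split the long literal and rejoin the pieces with +. A tiny Scala illustration of the
pattern (hypothetical freedViaTmm flag; the actual HeapMemoryAllocator fix below is Java,
but the idiom is identical):

```scala
object LineLengthFix {
  def main(args: Array[String]): Unit = {
    val freedViaTmm = true
    // Concatenation keeps every source line under the 100-character limit
    // the linter enforces, without changing the runtime string.
    assert(freedViaTmm,
      "TMM-allocated pages must first be freed via TMM.freePage(), " +
        "not directly in allocator free()")
  }
}
```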

Author: Dongjoon Hyun 

Closes #20242 from dongjoon-hyun/fix_lint_java_2.3_rc1.

(cherry picked from commit 7bd14cfd40500a0b6462cda647bdbb686a430328)
Signed-off-by: Sameer Agarwal 


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/db27a936
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/db27a936
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/db27a936

Branch: refs/heads/branch-2.3
Commit: db27a93652780f234f3c5fe750ef07bc5525d177
Parents: 6152da3
Author: Dongjoon Hyun 
Authored: Fri Jan 12 10:18:42 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 12 10:18:59 2018 -0800

--
 .../java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java  | 3 ++-
 .../main/java/org/apache/spark/launcher/InProcessAppHandle.java   | 1 -
 .../spark/sql/execution/datasources/orc/OrcColumnVector.java  | 2 +-
 .../test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java   | 3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/db27a936/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
--
diff --git a/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java b/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
index 3acfe36..a9603c1 100644
--- a/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
+++ b/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
@@ -82,7 +82,8 @@ public class HeapMemoryAllocator implements MemoryAllocator {
   "page has already been freed";
 assert ((memory.pageNumber == MemoryBlock.NO_PAGE_NUMBER)
 || (memory.pageNumber == MemoryBlock.FREED_IN_TMM_PAGE_NUMBER)) :
-  "TMM-allocated pages must first be freed via TMM.freePage(), not 
directly in allocator free()";
+  "TMM-allocated pages must first be freed via TMM.freePage(), not 
directly in allocator " +
+"free()";
 
 final long size = memory.size();
 if (MemoryAllocator.MEMORY_DEBUG_FILL_ENABLED) {

http://git-wip-us.apache.org/repos/asf/spark/blob/db27a936/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
--
diff --git a/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java b/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
index 0d6a73a..acd64c9 100644
--- a/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
+++ b/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
@@ -17,7 +17,6 @@
 
 package org.apache.spark.launcher;
 
-import java.io.IOException;
 import java.lang.reflect.Method;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.logging.Level;

http://git-wip-us.apache.org/repos/asf/spark/blob/db27a936/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
--
diff --git a/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java b/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
index f94c55d..b6e7922 100644
--- a/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
+++ b/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
@@ 

spark git commit: [MINOR][BUILD] Fix Java linter errors

2018-01-12 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/master 651f76153 -> 7bd14cfd4


[MINOR][BUILD] Fix Java linter errors

## What changes were proposed in this pull request?

This PR cleans up the java-lint errors (for v2.3.0-rc1 tag). Hopefully, this 
will be the final one.

```
$ dev/lint-java
Using `mvn` from path: /usr/local/bin/mvn
Checkstyle checks failed at following occurrences:
[ERROR] src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java:[85] (sizes) LineLength: Line is longer than 100 characters (found 101).
[ERROR] src/main/java/org/apache/spark/launcher/InProcessAppHandle.java:[20,8] (imports) UnusedImports: Unused import - java.io.IOException.
[ERROR] src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java:[41,9] (modifier) ModifierOrder: 'private' modifier out of order with the JLS suggestions.
[ERROR] src/test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java:[464] (sizes) LineLength: Line is longer than 100 characters (found 102).
```

## How was this patch tested?

Manual.

```
$ dev/lint-java
Using `mvn` from path: /usr/local/bin/mvn
Checkstyle checks passed.
```

Author: Dongjoon Hyun 

Closes #20242 from dongjoon-hyun/fix_lint_java_2.3_rc1.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7bd14cfd
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7bd14cfd
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7bd14cfd

Branch: refs/heads/master
Commit: 7bd14cfd40500a0b6462cda647bdbb686a430328
Parents: 651f761
Author: Dongjoon Hyun 
Authored: Fri Jan 12 10:18:42 2018 -0800
Committer: Sameer Agarwal 
Committed: Fri Jan 12 10:18:42 2018 -0800

--
 .../java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java  | 3 ++-
 .../main/java/org/apache/spark/launcher/InProcessAppHandle.java   | 1 -
 .../spark/sql/execution/datasources/orc/OrcColumnVector.java  | 2 +-
 .../test/java/test/org/apache/spark/sql/JavaDataFrameSuite.java   | 3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/7bd14cfd/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
--
diff --git a/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java b/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
index 3acfe36..a9603c1 100644
--- a/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
+++ b/common/unsafe/src/main/java/org/apache/spark/unsafe/memory/HeapMemoryAllocator.java
@@ -82,7 +82,8 @@ public class HeapMemoryAllocator implements MemoryAllocator {
   "page has already been freed";
 assert ((memory.pageNumber == MemoryBlock.NO_PAGE_NUMBER)
 || (memory.pageNumber == MemoryBlock.FREED_IN_TMM_PAGE_NUMBER)) :
-  "TMM-allocated pages must first be freed via TMM.freePage(), not 
directly in allocator free()";
+  "TMM-allocated pages must first be freed via TMM.freePage(), not 
directly in allocator " +
+"free()";
 
 final long size = memory.size();
 if (MemoryAllocator.MEMORY_DEBUG_FILL_ENABLED) {

http://git-wip-us.apache.org/repos/asf/spark/blob/7bd14cfd/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
--
diff --git a/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java b/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
index 0d6a73a..acd64c9 100644
--- a/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
+++ b/launcher/src/main/java/org/apache/spark/launcher/InProcessAppHandle.java
@@ -17,7 +17,6 @@
 
 package org.apache.spark.launcher;
 
-import java.io.IOException;
 import java.lang.reflect.Method;
 import java.util.concurrent.atomic.AtomicLong;
 import java.util.logging.Level;

http://git-wip-us.apache.org/repos/asf/spark/blob/7bd14cfd/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
--
diff --git a/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java b/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
index f94c55d..b6e7922 100644
--- a/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
+++ b/sql/core/src/main/java/org/apache/spark/sql/execution/datasources/orc/OrcColumnVector.java
@@ -38,7 +38,7 @@ public class OrcColumnVector extends org.apache.spark.sql.vectorized.ColumnVecto
   private BytesColumnVector 

svn commit: r24148 - in /dev/spark/v2.3.0-rc1-docs: ./ _site/ _site/api/ _site/api/R/ _site/api/java/ _site/api/java/lib/ _site/api/java/org/ _site/api/java/org/apache/ _site/api/java/org/apache/spark

2018-01-11 Thread sameerag
Author: sameerag
Date: Fri Jan 12 07:52:52 2018
New Revision: 24148

Log:
Apache Spark v2.3.0-rc1 docs


[This commit notification would consist of 1430 parts,
which exceeds the limit of 50, so it was shortened to this summary.]

-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



svn commit: r24147 - /dev/spark/v2.3.0-rc1-bin/

2018-01-11 Thread sameerag
Author: sameerag
Date: Fri Jan 12 07:25:00 2018
New Revision: 24147

Log:
Apache Spark v2.3.0-rc1

Added:
dev/spark/v2.3.0-rc1-bin/
dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz   (with props)
dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.asc
dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.md5
dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.sha512
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.6.tgz   (with props)
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.6.tgz.asc
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.6.tgz.md5
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.6.tgz.sha512
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.7.tgz   (with props)
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.7.tgz.asc
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.7.tgz.md5
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-hadoop2.7.tgz.sha512
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-without-hadoop.tgz   (with props)
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-without-hadoop.tgz.asc
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-without-hadoop.tgz.md5
dev/spark/v2.3.0-rc1-bin/spark-2.3.0-bin-without-hadoop.tgz.sha512
dev/spark/v2.3.0-rc1-bin/spark-2.3.0.tgz   (with props)
dev/spark/v2.3.0-rc1-bin/spark-2.3.0.tgz.asc
dev/spark/v2.3.0-rc1-bin/spark-2.3.0.tgz.md5
dev/spark/v2.3.0-rc1-bin/spark-2.3.0.tgz.sha512
dev/spark/v2.3.0-rc1-bin/spark-parent_2.11.iml

Added: dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.asc Fri Jan 12 07:25:00 2018
@@ -0,0 +1,11 @@
+-BEGIN PGP SIGNATURE-
+
+iQEzBAABCAAdFiEE+nV7jWSrvCH8Arwcoc7bqK0MAioFAlpYYbMACgkQoc7bqK0M
+AirvPAf9Gsj1RKJiy01H4T7QJkTReJ0/0Qz3EBzqa2+7xC5AH+MZ1eH3RLBn3Rws
+UgXNQChjCcx0r5dYRYQfa2FHLUHKPxI4Ax6As9mrtW4D0iLuWhZ50Wjn44rHVjQs
+Vud4iclkvtBNe+qWW86ipLDz7U/2AInfmb8F2wwFih//5vuJNvSvc3biTR4dJos/
+2AIjOis/Rx05G+kULHQSrC25mXtJWEBqxBpOITuYii0x8S2e0LbD0zg2voTN8oVM
+PoQ8s6UYN5/QEih180bmLvw9GgdT+e39xqiin3vohCXGS7AboSNCLoKGCmhmKhCa
+M8PvdHlk4ffuJNYhbHV4/bhftAgdaw==
+=d5v6
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.md5 (added)
+++ dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.md5 Fri Jan 12 07:25:00 2018
@@ -0,0 +1 @@
+SparkR_2.3.0.tar.gz: 19 43 B2 0C E9 07 3C 93  7C 92 D9 DD 47 F5 50 1B

Added: dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.sha512
==
--- dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.sha512 (added)
+++ dev/spark/v2.3.0-rc1-bin/SparkR_2.3.0.tar.gz.sha512 Fri Jan 12 07:25:00 2018
@@ -0,0 +1,3 @@
+SparkR_2.3.0.tar.gz: 2F303B2B A379A3AC 445B1D8F CFC76985 4DA7116B F2640E7D
+ 001B78BF F309FE5B 89799209 872F3051 D097F8EE EEF8A77D
+ 753BDB0A 2BA7D95E CAD7D01D 4EA8FF39

Added: dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz
==
Binary file - no diff available.

Propchange: dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz
--
svn:mime-type = application/octet-stream

Added: dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.asc
==
--- dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.asc (added)
+++ dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.asc Fri Jan 12 07:25:00 2018
@@ -0,0 +1,11 @@
+-BEGIN PGP SIGNATURE-
+
+iQEzBAABCAAdFiEE+nV7jWSrvCH8Arwcoc7bqK0MAioFAlpYYRkACgkQoc7bqK0M
+AiqaDwf/ZHZmj9SDqcd9Lh+jqqusa+l9kspNKQSbxxOSzX+6TSz3bqMap2UMrpva
+BG8Mf42HwMVsuRLuHFFTpHdcHkWSWdAvU4/N2Zo/cfsYBhQ/mJPYlKVVuSTaAJ2t
+//86APZxXDMJlPtvgtgwlixChuunNuGN7B5fQ+0ANLIZvD18hs1ppOY2Yth8jA43
+yifmDrj3tZ6IRJGY4XVx4pyPRTB8pHuJn+U/U2XRvUNN+eL7epb02A4tivyS3lH9
+idDAa8d1rjZKpPXuiQ0lFOnUg/sQHaqCoBqHGzjfqV3H2uPUbQkBxP3074fRNjBp
++Fynj4rlA/Zn2+LwOQ82Cmp9okVl4Q==
+=BJkd
+-END PGP SIGNATURE-

Added: dev/spark/v2.3.0-rc1-bin/pyspark-2.3.0.tar.gz.md5
==
--- dev/spark/v2.3.0-rc1-bin/pyspark

[2/2] spark git commit: Preparing development version 2.3.1-SNAPSHOT

2018-01-11 Thread sameerag
Preparing development version 2.3.1-SNAPSHOT


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/6bb22961
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/6bb22961
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/6bb22961

Branch: refs/heads/branch-2.3
Commit: 6bb22961c0c9df1a1f22e9491894895b297f5288
Parents: 964cc2e
Author: Sameer Agarwal 
Authored: Thu Jan 11 15:23:17 2018 -0800
Committer: Sameer Agarwal 
Committed: Thu Jan 11 15:23:17 2018 -0800

--
 R/pkg/DESCRIPTION | 2 +-
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 4 ++--
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 41 files changed, 42 insertions(+), 42 deletions(-)
--
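
Every hunk in this commit is the same one-line version bump in a module POM's parent
block, repeated across 41 files, so it is the kind of change release tooling scripts
rather than something edited by hand. A throwaway Scala sketch of the idea, a blunt
first-match rewrite per pom.xml with hard-coded from/to strings (not Spark's actual
release script):

```scala
import java.nio.file.{Files, Paths}
import java.util.regex.Pattern
import scala.collection.JavaConverters._

object BumpParentVersion {
  def main(args: Array[String]): Unit = {
    val (from, to) = ("2.3.0", "2.3.1-SNAPSHOT")
    val poms = Files.walk(Paths.get("."))
      .iterator().asScala
      .filter(_.getFileName.toString == "pom.xml")
      .toList
    for (pom <- poms) {
      val text = new String(Files.readAllBytes(pom), "UTF-8")
      // Rewrite only the first <version> tag: in a Spark module POM that is
      // the inherited parent version, which is what this commit changes.
      val updated = text.replaceFirst(
        Pattern.quote(s"<version>$from</version>"),
        s"<version>$to</version>")
      if (updated != text) Files.write(pom, updated.getBytes("UTF-8"))
    }
    println(s"checked ${poms.size} pom.xml files")
  }
}
```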


http://git-wip-us.apache.org/repos/asf/spark/blob/6bb22961/R/pkg/DESCRIPTION
--
diff --git a/R/pkg/DESCRIPTION b/R/pkg/DESCRIPTION
index 6d46c31..29a8a00 100644
--- a/R/pkg/DESCRIPTION
+++ b/R/pkg/DESCRIPTION
@@ -1,6 +1,6 @@
 Package: SparkR
 Type: Package
-Version: 2.3.0
+Version: 2.3.1
 Title: R Frontend for Apache Spark
 Description: Provides an R Frontend for Apache Spark.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),

http://git-wip-us.apache.org/repos/asf/spark/blob/6bb22961/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 2ca9ab6..5c5a8e9 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6bb22961/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index 404c744..2a625da 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6bb22961/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 3c0b528..adb1890 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0</version>
+    <version>2.3.1-SNAPSHOT</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/6bb22961/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index fe3bcfd..4cdcfa2 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   
 

[1/2] spark git commit: Preparing Spark release v2.3.0-rc1

2018-01-11 Thread sameerag
Repository: spark
Updated Branches:
  refs/heads/branch-2.3 2ec302658 -> 6bb22961c


Preparing Spark release v2.3.0-rc1


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/964cc2e3
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/964cc2e3
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/964cc2e3

Branch: refs/heads/branch-2.3
Commit: 964cc2e31b2862bca0bd968b3e9e2cbf8d3ba5ea
Parents: 2ec3026
Author: Sameer Agarwal 
Authored: Thu Jan 11 15:23:10 2018 -0800
Committer: Sameer Agarwal 
Committed: Thu Jan 11 15:23:10 2018 -0800

--
 assembly/pom.xml  | 2 +-
 common/kvstore/pom.xml| 2 +-
 common/network-common/pom.xml | 2 +-
 common/network-shuffle/pom.xml| 2 +-
 common/network-yarn/pom.xml   | 2 +-
 common/sketch/pom.xml | 2 +-
 common/tags/pom.xml   | 2 +-
 common/unsafe/pom.xml | 2 +-
 core/pom.xml  | 2 +-
 docs/_config.yml  | 2 +-
 examples/pom.xml  | 2 +-
 external/docker-integration-tests/pom.xml | 2 +-
 external/flume-assembly/pom.xml   | 2 +-
 external/flume-sink/pom.xml   | 2 +-
 external/flume/pom.xml| 2 +-
 external/kafka-0-10-assembly/pom.xml  | 2 +-
 external/kafka-0-10-sql/pom.xml   | 2 +-
 external/kafka-0-10/pom.xml   | 2 +-
 external/kafka-0-8-assembly/pom.xml   | 2 +-
 external/kafka-0-8/pom.xml| 2 +-
 external/kinesis-asl-assembly/pom.xml | 2 +-
 external/kinesis-asl/pom.xml  | 2 +-
 external/spark-ganglia-lgpl/pom.xml   | 2 +-
 graphx/pom.xml| 2 +-
 hadoop-cloud/pom.xml  | 2 +-
 launcher/pom.xml  | 2 +-
 mllib-local/pom.xml   | 2 +-
 mllib/pom.xml | 2 +-
 pom.xml   | 2 +-
 python/pyspark/version.py | 2 +-
 repl/pom.xml  | 2 +-
 resource-managers/kubernetes/core/pom.xml | 2 +-
 resource-managers/mesos/pom.xml   | 2 +-
 resource-managers/yarn/pom.xml| 2 +-
 sql/catalyst/pom.xml  | 2 +-
 sql/core/pom.xml  | 2 +-
 sql/hive-thriftserver/pom.xml | 2 +-
 sql/hive/pom.xml  | 2 +-
 streaming/pom.xml | 2 +-
 tools/pom.xml | 2 +-
 40 files changed, 40 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/964cc2e3/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index b3b4239..2ca9ab6 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -21,7 +21,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/964cc2e3/common/kvstore/pom.xml
--
diff --git a/common/kvstore/pom.xml b/common/kvstore/pom.xml
index cf93d41..404c744 100644
--- a/common/kvstore/pom.xml
+++ b/common/kvstore/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/964cc2e3/common/network-common/pom.xml
--
diff --git a/common/network-common/pom.xml b/common/network-common/pom.xml
index 18cbdad..3c0b528 100644
--- a/common/network-common/pom.xml
+++ b/common/network-common/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/964cc2e3/common/network-shuffle/pom.xml
--
diff --git a/common/network-shuffle/pom.xml b/common/network-shuffle/pom.xml
index 9968480..fe3bcfd 100644
--- a/common/network-shuffle/pom.xml
+++ b/common/network-shuffle/pom.xml
@@ -22,7 +22,7 @@
   <parent>
     <groupId>org.apache.spark</groupId>
     <artifactId>spark-parent_2.11</artifactId>
-    <version>2.3.0-SNAPSHOT</version>
+    <version>2.3.0</version>
     <relativePath>../../pom.xml</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/spark/blob/964cc2e3/common/network-yarn/pom.xml
--
diff --git a/common/network-yarn/pom.xml b/common/network-yarn/pom.xml
index ec2db6e..90ca401 100644
--- a/common/network-yarn/pom.xml
+++ b/common/network-yarn/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.spark
 spark-parent_2.11
-
