[3/3] spark-website git commit: Update the website for Spark 2.1.2 and add links to API documentation and clarify release JIRA process.

2017-10-17 Thread holden
Update the website for Spark 2.1.2 and add links to API documentation and 
clarify release JIRA process.

update for release 2.1.2

doc

add release on jira


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/bdb87e97
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/bdb87e97
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/bdb87e97

Branch: refs/heads/asf-site
Commit: bdb87e97c006e8ca98b2b2d2554a9f07c26d494c
Parents: 5e04ca0
Author: Felix Cheung 
Authored: Sun Oct 15 13:40:12 2017 -0700
Committer: Holden Karau 
Committed: Tue Oct 17 22:57:04 2017 -0700

--
 documentation.md|   2 +
 js/downloads.js |   1 +
 news/_posts/2017-10-09-spark-2-1-2-released.md  |  15 ++
 release-process.md  |  28 ++-
 .../_posts/2017-10-09-spark-release-2-1-2.md|  18 ++
 security.md |  10 +-
 site/committers.html|   6 +-
 site/community.html |   6 +-
 site/contributing.html  |   6 +-
 site/developer-tools.html   |   6 +-
 site/docs/2.0.2/latest  |   1 -
 site/documentation.html |   8 +-
 site/downloads.html |   6 +-
 site/examples.html  |   6 +-
 site/faq.html   |   6 +-
 site/graphx/index.html  |   6 +-
 site/improvement-proposals.html |   6 +-
 site/index.html |   6 +-
 site/js/downloads.js|   1 +
 site/mailing-lists.html |   6 +-
 site/mllib/index.html   |   6 +-
 site/news/amp-camp-2013-registration-ope.html   |   6 +-
 .../news/announcing-the-first-spark-summit.html |   6 +-
 .../news/fourth-spark-screencast-published.html |   6 +-
 site/news/index.html|  15 +-
 site/news/nsdi-paper.html   |   6 +-
 site/news/one-month-to-spark-summit-2015.html   |   6 +-
 .../proposals-open-for-spark-summit-east.html   |   6 +-
 ...registration-open-for-spark-summit-east.html |   6 +-
 .../news/run-spark-and-shark-on-amazon-emr.html |   6 +-
 site/news/spark-0-6-1-and-0-5-2-released.html   |   6 +-
 site/news/spark-0-6-2-released.html |   6 +-
 site/news/spark-0-7-0-released.html |   6 +-
 site/news/spark-0-7-2-released.html |   6 +-
 site/news/spark-0-7-3-released.html |   6 +-
 site/news/spark-0-8-0-released.html |   6 +-
 site/news/spark-0-8-1-released.html |   6 +-
 site/news/spark-0-9-0-released.html |   6 +-
 site/news/spark-0-9-1-released.html |   6 +-
 site/news/spark-0-9-2-released.html |   6 +-
 site/news/spark-1-0-0-released.html |   6 +-
 site/news/spark-1-0-1-released.html |   6 +-
 site/news/spark-1-0-2-released.html |   6 +-
 site/news/spark-1-1-0-released.html |   6 +-
 site/news/spark-1-1-1-released.html |   6 +-
 site/news/spark-1-2-0-released.html |   6 +-
 site/news/spark-1-2-1-released.html |   6 +-
 site/news/spark-1-2-2-released.html |   6 +-
 site/news/spark-1-3-0-released.html |   6 +-
 site/news/spark-1-4-0-released.html |   6 +-
 site/news/spark-1-4-1-released.html |   6 +-
 site/news/spark-1-5-0-released.html |   6 +-
 site/news/spark-1-5-1-released.html |   6 +-
 site/news/spark-1-5-2-released.html |   6 +-
 site/news/spark-1-6-0-released.html |   6 +-
 site/news/spark-1-6-1-released.html |   6 +-
 site/news/spark-1-6-2-released.html |   6 +-
 site/news/spark-1-6-3-released.html |   6 +-
 site/news/spark-2-0-0-released.html |   6 +-
 site/news/spark-2-0-1-released.html |   6 +-
 site/news/spark-2-0-2-released.html |   6 +-
 site/news/spark-2-1-0-released.html |   6 +-
 site/news/spark-2-1-1-released.html |   6 +-
 site/news/spark-2-1-2-released.html | 223 ++
 site/news/spark-2-2-0-released.html |   6 +-
 site/news/spark-2.0.0-preview.html  |   6 +-
 .../spark-accepted-into-apache-incubator.html   |   6 +-
 site/news/spark-and-shark-in-the-news.html  |   6 +-
 site/news/spark-becomes-tlp.html|   6 +-
 site/news/spark-featured-in-wired.html  |   6 +-
 .../spark-mailing-lists-moving-to-apache.html   |   6 +-
 site/news/spark-meetups.html|   6 +-
 

[2/3] spark-website git commit: Update the website for Spark 2.1.2 and add links to API documentation and clarify release JIRA process.

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/news/spark-2-0-2-released.html
--
diff --git a/site/news/spark-2-0-2-released.html 
b/site/news/spark-2-0-2-released.html
index 300bfa0..7c8c812 100644
--- a/site/news/spark-2-0-2-released.html
+++ b/site/news/spark-2-0-2-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/news/spark-2-1-0-released.html
--
diff --git a/site/news/spark-2-1-0-released.html 
b/site/news/spark-2-1-0-released.html
index 86f9844..cc6546c 100644
--- a/site/news/spark-2-1-0-released.html
+++ b/site/news/spark-2-1-0-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/news/spark-2-1-1-released.html
--
diff --git a/site/news/spark-2-1-1-released.html 
b/site/news/spark-2-1-1-released.html
index 0250bf8..3f59546 100644
--- a/site/news/spark-2-1-1-released.html
+++ b/site/news/spark-2-1-1-released.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/news/spark-2-1-2-released.html
--
diff --git a/site/news/spark-2-1-2-released.html 
b/site/news/spark-2-1-2-released.html
new file mode 100644
index 000..947dced
--- /dev/null
+++ b/site/news/spark-2-1-2-released.html
@@ -0,0 +1,223 @@
+
+
+
+  
+  
+  
+
+  
+ Spark 2.1.2 released | Apache Spark
+
+  
+
+  
+
+  
+
+  
+  
+  
+
+  
+  
+
+  
+  
+  var _gaq = _gaq || [];
+  _gaq.push(['_setAccount', 'UA-32518208-2']);
+  _gaq.push(['_trackPageview']);
+  (function() {
+var ga = document.createElement('script'); ga.type = 'text/javascript'; 
ga.async = true;
+ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 
'http://www') + '.google-analytics.com/ga.js';
+var s = document.getElementsByTagName('script')[0]; 
s.parentNode.insertBefore(ga, s);
+  })();
+
+  
+  function trackOutboundLink(link, category, action) {
+try {
+  _gaq.push(['_trackEvent', category , action]);
+} catch(err){}
+
+setTimeout(function() {
+  document.location.href = link.href;
+}, 100);
+  }
+  
+
+  
+  
+
+
+
+
+https://code.jquery.com/jquery.js";>
+https://netdna.bootstrapcdn.com/bootstrap/3.0.3/js/bootstrap.min.js";>
+
+
+
+
+
+
+  
+
+  
+  
+  Lightning-fast cluster computing
+  
+
+  
+
+
+
+  
+  
+
+  Toggle navigation
+  
+  
+  
+
+  
+
+  
+  
+
+  Download
+  
+
+  Libraries 
+
+
+  SQL and DataFrames
+  Spark Streaming
+  MLlib (machine learning)
+  GraphX (graph)
+  
+  Third-Party 
Projects
+
+  
+  
+
+  Documentation 
+
+
+  Latest Release (Spark 2.2.0)
+  Older Versions and Other 
Resources
+  Frequently Asked Questions
+
+  
+  Examples
+  
+
+  Community 
+
+
+  Mailing Lists  Resources
+  Contributing to Spark
+  Improvement Proposals 
(SPIP)
+  https://issues.apache.org/jira/browse/SPARK;>Issue 
Tracker
+

[1/3] spark-website git commit: Update the website for Spark 2.1.2 and add links to API documentation and clarify release JIRA process.

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/asf-site 5e04ca053 -> bdb87e97c


http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-2-0.html
--
diff --git a/site/releases/spark-release-1-2-0.html 
b/site/releases/spark-release-1-2-0.html
index 45a9b17..98ba92a 100644
--- a/site/releases/spark-release-1-2-0.html
+++ b/site/releases/spark-release-1-2-0.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-2-1.html
--
diff --git a/site/releases/spark-release-1-2-1.html 
b/site/releases/spark-release-1-2-1.html
index 6103993..6383c66 100644
--- a/site/releases/spark-release-1-2-1.html
+++ b/site/releases/spark-release-1-2-1.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-2-2.html
--
diff --git a/site/releases/spark-release-1-2-2.html 
b/site/releases/spark-release-1-2-2.html
index bd2ca9f..48a11b1 100644
--- a/site/releases/spark-release-1-2-2.html
+++ b/site/releases/spark-release-1-2-2.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-3-0.html
--
diff --git a/site/releases/spark-release-1-3-0.html 
b/site/releases/spark-release-1-3-0.html
index 008f31b..fb22f53 100644
--- a/site/releases/spark-release-1-3-0.html
+++ b/site/releases/spark-release-1-3-0.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-3-1.html
--
diff --git a/site/releases/spark-release-1-3-1.html 
b/site/releases/spark-release-1-3-1.html
index 8ce1da5..886826f 100644
--- a/site/releases/spark-release-1-3-1.html
+++ b/site/releases/spark-release-1-3-1.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 2017, Dublin, Ireland) agenda posted
   (Aug 28, 2017)
 
@@ -170,9 +173,6 @@
   Spark 2.1.1 
released
   (May 02, 2017)
 
-  Spark 
Summit (June 5-7th, 2017, San Francisco) agenda posted
-  (Mar 31, 2017)
-
   
   Archive
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/bdb87e97/site/releases/spark-release-1-4-0.html
--
diff --git a/site/releases/spark-release-1-4-0.html 
b/site/releases/spark-release-1-4-0.html
index e8c1448..d86635f 100644
--- a/site/releases/spark-release-1-4-0.html
+++ b/site/releases/spark-release-1-4-0.html
@@ -161,6 +161,9 @@
   Latest News
   
 
+  Spark 2.1.2 
released
+  (Oct 09, 2017)
+
   Spark 
Summit Europe (October 24-26th, 

spark-website git commit: Update the release process notes to cover PyPI for the next RM.

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/asf-site 6634f88ab -> 5e04ca053


Update the release process notes to cover PyPI for the next RM.

Update release process notes

Update change to release process html page

Switch around notes

Update release process docs to include after what

Try and improve wording a bit

Include the URLs in the link text to sort of match the style of the rest of the 
page

Eh looks weird

Add a note about how you can use twine as well

Generate release process file.

Update to use twine

Update the release process documentation to twine compile

s/apache/Apache/ in release-process per @srowen's comment

Update release process html page too

Update release process: remove unnecessary "on the"


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/5e04ca05
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/5e04ca05
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/5e04ca05

Branch: refs/heads/asf-site
Commit: 5e04ca05364862c07175f206834ef8360e342632
Parents: 6634f88
Author: Holden Karau 
Authored: Sat May 6 16:37:23 2017 -0700
Committer: Holden Karau 
Committed: Tue Oct 17 22:52:50 2017 -0700

--
 release-process.md| 21 +++--
 site/release-process.html | 17 +++--
 2 files changed, 34 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/5e04ca05/release-process.md
--
diff --git a/release-process.md b/release-process.md
index f86ebaa..cd6010a 100644
--- a/release-process.md
+++ b/release-process.md
@@ -113,7 +113,7 @@ mkdir spark-1.1.1-rc2
 $ sftp -r andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* 
spark-1.1.1-rc2
  
 # NOTE: Remove any binaries you don't want to publish
-# E.g. never push MapR and *without-hive artifacts to apache
+# E.g. never push MapR and *without-hive artifacts to Apache
 $ rm spark-1.1.1-rc2/*mapr*
 $ rm spark-1.1.1-rc2/*without-hive*
 $ svn add spark-1.1.1-rc2
@@ -129,6 +129,23 @@ Verify that the resources are present in https://www.apache.org/dist/sp
 It may take a while for them to be visible. This will be mirrored throughout 
the Apache network. 
 There are a few remaining steps.
 
+Upload to PyPI
+
+Uploading to PyPI is done after the release has been uploaded to Apache. To get started, go to the <a href="https://pypi.python.org">PyPI website</a> and log in with the spark-upload account (see the PMC mailing list for account permissions).
+
+
+Once you have logged in it is time to register the new release on the <a href="https://pypi.python.org/pypi?%3Aaction=submit_form">submitting package information</a> page, by uploading the PKG-INFO file from inside the pyspark packaged artifact.
+
+
+Once the release has been registered you can upload the artifacts
+to the legacy PyPI interface, using <a href="https://pypi.python.org/pypi/twine">twine</a>.
+If you don't have twine set up you will need to create a .pypirc file with the repository pointing to `https://upload.pypi.org/legacy/` and the same username and password for the spark-upload account.
+
+In the release directory run `twine upload -r legacy pyspark-version.tar.gz pyspark-version.tar.gz.asc`.
+If for some reason the twine upload is incorrect (e.g. HTTP failure or other issue), you can rename the artifact to `pyspark-version.post0.tar.gz`, delete the old artifact from PyPI and re-upload.
+
+
+
 Remove Old Releases from Mirror Network
 
 Spark always keeps two releases in the mirror network: the most recent release 
on the current and 
@@ -190,7 +207,7 @@ $ git checkout v1.1.1
 $ cd docs
 $ PRODUCTION=1 jekyll build
  
-# Copy the new documentation to apache
+# Copy the new documentation to Apache
 $ git clone https://github.com/apache/spark-website
 ...
 $ cp -R _site spark-website/site/docs/1.1.1

http://git-wip-us.apache.org/repos/asf/spark-website/blob/5e04ca05/site/release-process.html
--
diff --git a/site/release-process.html b/site/release-process.html
index 6261650..18e871f 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -316,7 +316,7 @@ mkdir spark-1.1.1-rc2
 $ sftp -r andrewo...@people.apache.org:~/public_html/spark-1.1.1-rc2/* 
spark-1.1.1-rc2
  
 # NOTE: Remove any binaries you don't want to publish
-# E.g. never push MapR and *without-hive artifacts to apache
+# E.g. never push MapR and *without-hive artifacts to Apache
 $ rm spark-1.1.1-rc2/*mapr*
 $ rm spark-1.1.1-rc2/*without-hive*
 $ svn add spark-1.1.1-rc2
@@ -332,6 +332,19 @@ $ svn mv 
https://dist.apache.org/repos/dist/dev/spark/spark-1.1.1-rc2 https://di
 It may take a while for them to be visible. This will be 

spark-website git commit: Update the release process documentation on the basis of having a new person run through. In addition to documenting some previously undocumented steps and updating some deprecated parts, this changes the recommended build to be on the RM's machine to allow individual key signing until the Jenkins process is updated.

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/asf-site a6155a89d -> 6634f88ab


Update the release process documentation on the basis of having a new person 
run through.
In addition to documenting some previously undocumented steps and updating some deprecated parts, this changes
the recommended build to be on the RM's machine to allow individual key signing until the Jenkins process is updated.

Update the release docs based on my initial looking at jenkins

Mention how to configure the jobs

Regenerate release process doc

Fix sentence fragment

Regenerate release process doc

The version information is taken care of by the jenkins scripts

Re-build release process doc with change

Update release process to describe rolling a release by hand, also switch from 
scp to sftp since scp is now disabled on people.apache

Update release process description more

Update with CR feedback

Update corresponding HTML


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/6634f88a
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/6634f88a
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/6634f88a

Branch: refs/heads/asf-site
Commit: 6634f88abf9a2485665956229afa420f528dd81c
Parents: a6155a8
Author: Holden Karau 
Authored: Tue Sep 12 14:49:23 2017 -0700
Committer: Holden Karau 
Committed: Tue Oct 17 22:49:06 2017 -0700

--
 release-process.md| 67 +++-
 site/release-process.html | 70 --
 2 files changed, 66 insertions(+), 71 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark-website/blob/6634f88a/release-process.md
--
diff --git a/release-process.md b/release-process.md
index a5a609e..f86ebaa 100644
--- a/release-process.md
+++ b/release-process.md
@@ -33,37 +33,6 @@ standard Git branching mechanism and should be announced to 
the community once t
 created. It is also good to set up Jenkins jobs for the release branch once it 
is cut to 
 ensure tests are passing (consult Josh Rosen and Shane Knapp for help with 
this).
 
-Next, ensure that all Spark versions are correct in the code base on the 
release branch (see 
-https://github.com/apache/spark/commit/01d233e4aede65ffa39b9d2322196d4b64186526;>this
 example commit).
-You should grep through the codebase to find all instances of the version 
string. Some known 
-places to change are:
-
-- **SparkContext**. Search for VERSION (only for branch 1.x)
-- **Maven build**. Ensure that the version in all the `pom.xml` files is 
`-SNAPSHOT` 
-(e.g. `1.1.1-SNAPSHOT`). This will be changed to `` (e.g. 
1.1.1) automatically by 
-Maven when cutting the release. Note that there are a few exceptions that 
should just use 
-``. These modules are not published as artifacts.
-- **Spark REPLs**. Look for the Spark ASCII art in `SparkILoopInit.scala` for 
the Scala shell 
-and in `shell.py` for the Python REPL.
-- **Docs**. Search for VERSION in `docs/_config.yml`
-- **PySpark**. Search for `__version__` in `python/pyspark/version.py`
-- **SparkR**. Search for `Version` in `R/pkg/DESCRIPTION`
-
-Finally, update `CHANGES.txt` with this script in the Spark repository. 
`CHANGES.txt` captures 
-all the patches that have made it into this release candidate since the last 
release.
-
-```
-$ export SPARK_HOME=
-$ cd spark
-# Update release versions
-$ vim dev/create-release/generate-changelist.py
-$ dev/create-release/generate-changelist.py
-```
-
-This produces a `CHANGES.txt.new` that should be a superset of the existing 
`CHANGES.txt`. 
-Replace the old `CHANGES.txt` with the new one (see 
-https://github.com/apache/spark/commit/131c62672a39a6f71f6834e9aad54b587237f13c;>this
 example commit).
-
 Cutting a Release Candidate
 
 If this is not the first RC, then make sure that the JIRA issues that have 
been solved since the 
@@ -75,9 +44,37 @@ For example if you are cutting RC for 1.0.2, mark such 
issues as `FIXED` in 1.0.
 release, and change them to the current release.
 - Verify from `git log` whether they are actually making it in the new RC or 
not.
 
-The process of cutting a release candidate has been automated via the AMPLab 
Jenkins. There are 
-Jenkins jobs that can tag a release candidate and create various packages 
based on that candidate. 
-The recommended process is to ask the previous release manager to walk you 
through the Jenkins jobs.
+The process of cutting a release candidate has been partially automated via 
the AMPLab Jenkins. There are
+Jenkins jobs that can tag a release candidate and create various packages 
based on that candidate.
+
+
+At present the Jenkins jobs *SHOULD NOT BE USED* as they use a legacy 

spark git commit: [SPARK-22278][SS] Expose current event time watermark and current processing time in GroupState

2017-10-17 Thread tdas
Repository: spark
Updated Branches:
  refs/heads/master 1437e344e -> f3137feec


[SPARK-22278][SS] Expose current event time watermark and current processing 
time in GroupState

## What changes were proposed in this pull request?

Complex state-updating and/or timeout-handling logic in mapGroupsWithState functions may require making decisions based on the current event-time watermark and/or processing time. Currently, you can use the SQL function `current_timestamp` to get the current processing time, but it has to be inserted into every row with a select and then passed through the encoder, which isn't efficient. Furthermore, there is no way to get the current watermark.

This PR exposes both of them through the GroupState API.
Additionally, it also cleans up some of the GroupState docs.
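
For context, here is a minimal Scala sketch (not taken from this commit) of how a mapGroupsWithState function might consult the two new values; the accessor names `getCurrentProcessingTimeMs`/`getCurrentWatermarkMs` and the `Event`/`SessionState` case classes are assumptions for illustration, and the watermark accessor is only meaningful when the query defines a watermark.

```scala
// Minimal sketch (assumed API names, illustrative types): a mapGroupsWithState
// update function that reads the current processing time and event-time watermark
// from GroupState instead of carrying current_timestamp through every row.
import org.apache.spark.sql.streaming.GroupState

case class Event(user: String, eventTimeMs: Long)
case class SessionState(lastSeenMs: Long)

def updateSession(
    user: String,
    events: Iterator[Event],
    state: GroupState[SessionState]): String = {
  val nowMs = state.getCurrentProcessingTimeMs()  // processing time of this trigger
  val watermarkMs = state.getCurrentWatermarkMs() // event-time watermark (query must define one)
  if (state.exists && state.get.lastSeenMs < watermarkMs) {
    state.remove()                      // drop state the watermark has already passed
  } else {
    state.update(SessionState(nowMs))   // otherwise refresh the per-user state
  }
  s"$user: processingTime=$nowMs watermark=$watermarkMs"
}
```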

## How was this patch tested?

New unit tests

Author: Tathagata Das 

Closes #19495 from tdas/SPARK-22278.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/f3137fee
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/f3137fee
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/f3137fee

Branch: refs/heads/master
Commit: f3137feecd30c74c47dbddb0e22b4ddf8cf2f912
Parents: 1437e34
Author: Tathagata Das 
Authored: Tue Oct 17 20:09:12 2017 -0700
Committer: Tathagata Das 
Committed: Tue Oct 17 20:09:12 2017 -0700

--
 .../apache/spark/sql/execution/objects.scala|   8 +-
 .../streaming/FlatMapGroupsWithStateExec.scala  |   7 +-
 .../execution/streaming/GroupStateImpl.scala|  50 +++---
 .../apache/spark/sql/streaming/GroupState.scala |  92 +++
 .../streaming/FlatMapGroupsWithStateSuite.scala | 160 ---
 5 files changed, 238 insertions(+), 79 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/f3137fee/sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala
index c68975b..d861109 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/objects.scala
@@ -29,7 +29,7 @@ import org.apache.spark.sql.catalyst.InternalRow
 import org.apache.spark.sql.catalyst.expressions._
 import org.apache.spark.sql.catalyst.expressions.codegen._
 import org.apache.spark.sql.catalyst.expressions.objects.Invoke
-import org.apache.spark.sql.catalyst.plans.logical.{FunctionUtils, 
LogicalGroupState}
+import org.apache.spark.sql.catalyst.plans.logical.{EventTimeWatermark, 
FunctionUtils, LogicalGroupState}
 import org.apache.spark.sql.catalyst.plans.physical._
 import org.apache.spark.sql.execution.streaming.GroupStateImpl
 import org.apache.spark.sql.streaming.GroupStateTimeout
@@ -361,8 +361,12 @@ object MapGroupsExec {
   outputObjAttr: Attribute,
   timeoutConf: GroupStateTimeout,
   child: SparkPlan): MapGroupsExec = {
+val watermarkPresent = child.output.exists {
+  case a: Attribute if a.metadata.contains(EventTimeWatermark.delayKey) => 
true
+  case _ => false
+}
 val f = (key: Any, values: Iterator[Any]) => {
-  func(key, values, GroupStateImpl.createForBatch(timeoutConf))
+  func(key, values, GroupStateImpl.createForBatch(timeoutConf, 
watermarkPresent))
 }
 new MapGroupsExec(f, keyDeserializer, valueDeserializer,
   groupingAttributes, dataAttributes, outputObjAttr, child)

http://git-wip-us.apache.org/repos/asf/spark/blob/f3137fee/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
--
diff --git 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
index c81f1a8..29f38fa 100644
--- 
a/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
+++ 
b/sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FlatMapGroupsWithStateExec.scala
@@ -61,6 +61,10 @@ case class FlatMapGroupsWithStateExec(
 
   private val isTimeoutEnabled = timeoutConf != NoTimeout
   val stateManager = new FlatMapGroupsWithState_StateManager(stateEncoder, 
isTimeoutEnabled)
+  val watermarkPresent = child.output.exists {
+case a: Attribute if a.metadata.contains(EventTimeWatermark.delayKey) => 
true
+case _ => false
+  }
 
   /** Distribute by grouping attributes */
   override def requiredChildDistribution: Seq[Distribution] =

spark git commit: [SPARK-22050][CORE] Allow BlockUpdated events to be optionally logged to the event log

2017-10-17 Thread vanzin
Repository: spark
Updated Branches:
  refs/heads/master 28f9f3f22 -> 1437e344e


[SPARK-22050][CORE] Allow BlockUpdated events to be optionally logged to the 
event log

## What changes were proposed in this pull request?

I see that block updates are not logged to the event log.
This makes sense as a default for performance reasons.
However, when trying to get a better understanding of caching for a job, it is helpful to be able to log these updates.
This PR adds a configuration setting `spark.eventLog.logBlockUpdates.enabled` (defaulting to false) which allows block updates to be recorded in the event log.
This contribution is original work which is licensed to the Apache Spark project.
## How was this patch tested?

Current and additional unit tests.

Author: Michael Mior 

Closes #19263 from michaelmior/log-block-updates.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/1437e344
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/1437e344
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/1437e344

Branch: refs/heads/master
Commit: 1437e344ec0c29a44a19f4513986f5f184c44695
Parents: 28f9f3f
Author: Michael Mior 
Authored: Tue Oct 17 14:30:52 2017 -0700
Committer: Marcelo Vanzin 
Committed: Tue Oct 17 14:30:52 2017 -0700

--
 .../apache/spark/internal/config/package.scala  | 23 +
 .../spark/scheduler/EventLoggingListener.scala  | 18 +++
 .../org/apache/spark/util/JsonProtocol.scala| 34 ++--
 .../scheduler/EventLoggingListenerSuite.scala   |  2 ++
 .../apache/spark/util/JsonProtocolSuite.scala   | 27 
 docs/configuration.md   |  8 +
 6 files changed, 104 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/spark/blob/1437e344/core/src/main/scala/org/apache/spark/internal/config/package.scala
--
diff --git a/core/src/main/scala/org/apache/spark/internal/config/package.scala 
b/core/src/main/scala/org/apache/spark/internal/config/package.scala
index e7b406a..0c36bdc 100644
--- a/core/src/main/scala/org/apache/spark/internal/config/package.scala
+++ b/core/src/main/scala/org/apache/spark/internal/config/package.scala
@@ -41,6 +41,29 @@ package object config {
 .bytesConf(ByteUnit.MiB)
 .createWithDefaultString("1g")
 
+  private[spark] val EVENT_LOG_COMPRESS =
+ConfigBuilder("spark.eventLog.compress")
+  .booleanConf
+  .createWithDefault(false)
+
+  private[spark] val EVENT_LOG_BLOCK_UPDATES =
+ConfigBuilder("spark.eventLog.logBlockUpdates.enabled")
+  .booleanConf
+  .createWithDefault(false)
+
+  private[spark] val EVENT_LOG_TESTING =
+ConfigBuilder("spark.eventLog.testing")
+  .internal()
+  .booleanConf
+  .createWithDefault(false)
+
+  private[spark] val EVENT_LOG_OUTPUT_BUFFER_SIZE = 
ConfigBuilder("spark.eventLog.buffer.kb")
+.bytesConf(ByteUnit.KiB)
+.createWithDefaultString("100k")
+
+  private[spark] val EVENT_LOG_OVERWRITE =
+
ConfigBuilder("spark.eventLog.overwrite").booleanConf.createWithDefault(false)
+
   private[spark] val EXECUTOR_CLASS_PATH =
 
ConfigBuilder(SparkLauncher.EXECUTOR_EXTRA_CLASSPATH).stringConf.createOptional
 

http://git-wip-us.apache.org/repos/asf/spark/blob/1437e344/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala
--
diff --git 
a/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala 
b/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala
index 9dafa0b..a77adc5 100644
--- a/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala
+++ b/core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala
@@ -37,6 +37,7 @@ import org.json4s.jackson.JsonMethods._
 import org.apache.spark.{SPARK_VERSION, SparkConf}
 import org.apache.spark.deploy.SparkHadoopUtil
 import org.apache.spark.internal.Logging
+import org.apache.spark.internal.config._
 import org.apache.spark.io.CompressionCodec
 import org.apache.spark.util.{JsonProtocol, Utils}
 
@@ -45,6 +46,7 @@ import org.apache.spark.util.{JsonProtocol, Utils}
  *
  * Event logging is specified by the following configurable parameters:
  *   spark.eventLog.enabled - Whether event logging is enabled.
+ *   spark.eventLog.logBlockUpdates.enabled - Whether to log block updates
  *   spark.eventLog.compress - Whether to compress logged events
  *   spark.eventLog.overwrite - Whether to overwrite any existing files.
  *   spark.eventLog.dir - Path to the directory in which events are logged.
@@ -64,10 +66,11 @@ private[spark] class EventLoggingListener(
 this(appId, 

[spark-website] Git Push Summary

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/apache-asf-site [deleted] a6d9cbdef




[spark-website] Git Push Summary

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/add-2.1.2-docs [deleted] 0b563c84c




[50/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/README.md
--
diff --git a/site/docs/2.1.2/README.md b/site/docs/2.1.2/README.md
new file mode 100644
index 000..ffd3b57
--- /dev/null
+++ b/site/docs/2.1.2/README.md
@@ -0,0 +1,72 @@
+Welcome to the Spark documentation!
+
+This readme will walk you through navigating and building the Spark 
documentation, which is included
+here with the Spark source code. You can also find documentation specific to 
release versions of
+Spark at http://spark.apache.org/documentation.html.
+
+Read on to learn more about viewing documentation in plain text (i.e., 
markdown) or building the
+documentation yourself. Why build it yourself? So that you have the docs that correspond to
+whichever version of Spark you currently have checked out of revision control.
+
+## Prerequisites
+The Spark documentation build uses a number of tools to build HTML docs and 
API docs in Scala,
+Python and R.
+
+You need to have 
[Ruby](https://www.ruby-lang.org/en/documentation/installation/) and
+[Python](https://docs.python.org/2/using/unix.html#getting-and-installing-the-latest-version-of-python)
+installed. Also install the following libraries:
+```sh
+$ sudo gem install jekyll jekyll-redirect-from pygments.rb
+$ sudo pip install Pygments
+# Following is needed only for generating API docs
+$ sudo pip install sphinx pypandoc
+$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", "testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/")'
+```
+(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to 
replace gem with gem2.0)
+
+## Generating the Documentation HTML
+
+We include the Spark documentation as part of the source (as opposed to using 
a hosted wiki, such as
+the github wiki, as the definitive documentation) to enable the documentation 
to evolve along with
+the source code and be captured by revision control (currently git). This way 
the code automatically
+includes the version of the documentation that is relevant regardless of which 
version or release
+you have checked out or downloaded.
+
+In this directory you will find text files formatted using Markdown, with an ".md" suffix. You can
+read those text files directly if you want. Start with index.md.
+
+Execute `jekyll build` from the `docs/` directory to compile the site. 
Compiling the site with
+Jekyll will create a directory called `_site` containing index.html as well as 
the rest of the
+compiled files.
+
+$ cd docs
+$ jekyll build
+
+You can modify the default Jekyll build as follows:
+```sh
+# Skip generating API docs (which takes a while)
+$ SKIP_API=1 jekyll build
+
+# Serve content locally on port 4000
+$ jekyll serve --watch
+
+# Build the site with extra features used on the live page
+$ PRODUCTION=1 jekyll build
+```
+
+## API Docs (Scaladoc, Sphinx, roxygen2)
+
+You can build just the Spark scaladoc by running `build/sbt unidoc` from the 
SPARK_PROJECT_ROOT directory.
+
+Similarly, you can build just the PySpark docs by running `make html` from the
+SPARK_PROJECT_ROOT/python/docs directory. Documentation is only generated for 
classes that are listed as
+public in `__init__.py`. The SparkR docs can be built by running 
SPARK_PROJECT_ROOT/R/create-docs.sh.
+
+When you run `jekyll` in the `docs` directory, it will also copy over the 
scaladoc for the various
+Spark subprojects into the `docs` directory (and then also into the `_site` 
directory). We use a
+jekyll plugin to run `build/sbt unidoc` before building the site so if you 
haven't run it (recently) it
+may take some time as it generates all of the scaladoc.  The jekyll plugin 
also generates the
+PySpark docs using [Sphinx](http://sphinx-doc.org/).
+
+NOTE: To skip the step of building and copying over the Scala, Python, R API 
docs, run `SKIP_API=1
+jekyll`.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api.html
--
diff --git a/site/docs/2.1.2/api.html b/site/docs/2.1.2/api.html
new file mode 100644
index 000..2496122
--- /dev/null
+++ b/site/docs/2.1.2/api.html
@@ -0,0 +1,178 @@
+
+
+
+
+
+  
+
+
+
+Spark API Documentation - Spark 2.1.2 Documentation
+
+
+
+
+
+
+body {
+padding-top: 60px;
+padding-bottom: 40px;
+}
+
+
+
+
+
+
+
+
+
+
+
+
+  var _gaq = _gaq || [];
+  _gaq.push(['_setAccount', 'UA-32518208-2']);
+  _gaq.push(['_trackPageview']);
+
+  (function() {
+var ga = document.createElement('script'); ga.type = 
'text/javascript'; ga.async = true;
+ga.src = ('https:' == 

[41/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/ncol.html
--
diff --git a/site/docs/2.1.2/api/R/ncol.html b/site/docs/2.1.2/api/R/ncol.html
new file mode 100644
index 000..8677389
--- /dev/null
+++ b/site/docs/2.1.2/api/R/ncol.html
@@ -0,0 +1,97 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Returns the number of 
columns in a SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+ncol 
{SparkR}R Documentation
+
+Returns the number of columns in a SparkDataFrame
+
+Description
+
+Returns the number of columns in a SparkDataFrame
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame'
+ncol(x)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame
+
+
+
+
+Note
+
+ncol since 1.5.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+limit, merge,
+mutate, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path - path/to/file.json
+##D df - read.json(path)
+##D ncol(df)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/negate.html
--
diff --git a/site/docs/2.1.2/api/R/negate.html 
b/site/docs/2.1.2/api/R/negate.html
new file mode 100644
index 000..cc0e06d
--- /dev/null
+++ b/site/docs/2.1.2/api/R/negate.html
@@ -0,0 +1,67 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: negate
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+negate 
{SparkR}R Documentation
+
+negate
+
+Description
+
+Unary minus, i.e. negate the expression.
+
+
+
+Usage
+
+
+negate(x)
+
+## S4 method for signature 'Column'
+negate(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+negate since 1.5.0
+
+
+
+See Also
+
+Other normal_funcs: abs,
+bitwiseNOT, coalesce,
+column, expr,
+greatest, ifelse,
+isnan, least,
+lit, nanvl,
+randn, rand,
+struct, when
+
+
+
+Examples
+
+## Not run: negate(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/next_day.html
--
diff --git a/site/docs/2.1.2/api/R/next_day.html 
b/site/docs/2.1.2/api/R/next_day.html
new file mode 100644
index 000..736d7e4
--- /dev/null
+++ b/site/docs/2.1.2/api/R/next_day.html
@@ -0,0 +1,89 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: next_day
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+next_day 
{SparkR}R Documentation
+
+next_day
+
+Description
+
+Given a date column, returns the first date which is later than the value 
of the date column
+that is on the specified day of the week.
+
+
+
+Usage
+
+
+next_day(y, x)
+
+## S4 method for signature 'Column,character'
+next_day(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on.
+
+x
+
+Day of the week string.
+
+
+
+
+Details
+
+For example, next_day('2015-07-27', "Sunday") returns 
2015-08-02 because that is the first
+Sunday after 2015-07-27.
+
+Day of the week parameter is case insensitive, and accepts first three or 
two characters:
+Mon, Tue, Wed, Thu, 
Fri, Sat, Sun.
+
+
+
+Note
+
+next_day since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+quarter, second,

[12/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
new file mode 100644
index 000..073fb95
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
@@ -0,0 +1,226 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaFutureAction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Interface 
JavaFutureActionT
+
+
+
+
+
+
+All Superinterfaces:
+java.util.concurrent.FutureT
+
+
+
+public interface JavaFutureActionT
+extends java.util.concurrent.FutureT
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.ListInteger
+jobIds()
+Returns the job IDs run by the underlying async 
operation.
+
+
+
+
+
+
+
+Methods inherited from interfacejava.util.concurrent.Future
+cancel, get, get, isCancelled, isDone
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobIds
+java.util.ListIntegerjobIds()
+Returns the job IDs run by the underlying async operation.
+
+ This returns the current snapshot of the job list. Certain operations may run 
multiple
+ jobs, so multiple calls to this method may return different lists.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
new file mode 100644
index 000..60fc727
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
@@ -0,0 +1,325 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaHadoopRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class 
JavaHadoopRDDK,V
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaPairRDDK,V
+
+
+org.apache.spark.api.java.JavaHadoopRDDK,V
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLikescala.Tuple2K,V,JavaPairRDDK,V
+
+
+
+public class JavaHadoopRDDK,V
+extends JavaPairRDDK,V
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaHadoopRDD(HadoopRDDK,Vrdd,
+ scala.reflect.ClassTagKkClassTag,
+ scala.reflect.ClassTagVvClassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.reflect.ClassTagK
+kClassTag()
+
+
+RJavaRDDR
+mapPartitionsWithInputSplit(Function2org.apache.hadoop.mapred.InputSplit,java.util.Iteratorscala.Tuple2K,V,java.util.IteratorRf,
+   booleanpreservesPartitioning)
+Maps over a partition, providing the InputSplit that was 
used as the base of the 

[44/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/filter.html
--
diff --git a/site/docs/2.1.2/api/R/filter.html 
b/site/docs/2.1.2/api/R/filter.html
new file mode 100644
index 000..b245e43
--- /dev/null
+++ b/site/docs/2.1.2/api/R/filter.html
@@ -0,0 +1,121 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Filter
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+filter 
{SparkR}R Documentation
+
+Filter
+
+Description
+
+Filter the rows of a SparkDataFrame according to a given condition.
+
+
+
+Usage
+
+
+filter(x, condition)
+
+where(x, condition)
+
+## S4 method for signature 'SparkDataFrame,characterOrColumn'
+filter(x, condition)
+
+## S4 method for signature 'SparkDataFrame,characterOrColumn'
+where(x, condition)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame to be sorted.
+
+condition
+
+The condition to filter on. This may either be a Column expression
+or a string containing a SQL statement
+
+
+
+
+Value
+
+A SparkDataFrame containing only the rows that meet the condition.
+
+
+
+Note
+
+filter since 1.4.0
+
+where since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+histogram, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+Other subsetting functions: select,
+subset
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path - path/to/file.json
+##D df - read.json(path)
+##D filter(df, col1  0)
+##D filter(df, df$col2 != abcdefg)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/first.html
--
diff --git a/site/docs/2.1.2/api/R/first.html b/site/docs/2.1.2/api/R/first.html
new file mode 100644
index 000..dd6f123
--- /dev/null
+++ b/site/docs/2.1.2/api/R/first.html
@@ -0,0 +1,136 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Return the first row of a 
SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+first 
{SparkR}R Documentation
+
+Return the first row of a SparkDataFrame
+
+Description
+
+Return the first row of a SparkDataFrame
+
+Aggregate function: returns the first value in a group.
+
+
+
+Usage
+
+
+first(x, ...)
+
+## S4 method for signature 'SparkDataFrame'
+first(x)
+
+## S4 method for signature 'characterOrColumn'
+first(x, na.rm = FALSE)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame or a column used in aggregation function.
+
+...
+
+further arguments to be passed to or from other methods.
+
+na.rm
+
+a logical value indicating whether NA values should be stripped
+before the computation proceeds.
+
+
+
+
+Details
+
+The function by default returns the first values it sees. It will return 
the first non-missing
+value it sees when na.rm is set to true. If all values are missing, then NA is 
returned.
+
+
+
+Note
+
+first(SparkDataFrame) since 1.4.0
+
+first(characterOrColumn) since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+histogram, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+Other agg_funcs: agg, avg,

[08/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
new file mode 100644
index 000..c249acb
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
@@ -0,0 +1,1786 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaRDDLike (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Interface 
JavaRDDLikeT,This extends JavaRDDLikeT,This
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+All Known Implementing Classes:
+JavaDoubleRDD, JavaHadoopRDD, JavaNewHadoopRDD, JavaPairRDD, JavaRDD
+
+
+
+public interface JavaRDDLikeT,This extends 
JavaRDDLikeT,This
+extends scala.Serializable
+Defines operations common to several Java RDD 
implementations.
+ 
+Note:
+  This trait is not intended to be implemented by user code.
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+Aggregate the elements of each partition, and then the 
results for all the partitions, using
+ given combine functions and a neutral "zero value".
+
+
+
+UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+Return the Cartesian product of this RDD and another one, 
that is, the RDD of all pairs of
+ elements (a, b) where a is in this and b is in 
other.
+
+
+
+void
+checkpoint()
+Mark this RDD for checkpointing.
+
+
+
+scala.reflect.ClassTagT
+classTag()
+
+
+java.util.ListT
+collect()
+Return an array that contains all of the elements in this 
RDD.
+
+
+
+JavaFutureActionjava.util.ListT
+collectAsync()
+The asynchronous version of collect, which 
returns a future for
+ retrieving an array containing all of the elements in this RDD.
+
+
+
+java.util.ListT[]
+collectPartitions(int[]partitionIds)
+Return an array that contains all of the elements in a 
specific partition of this RDD.
+
+
+
+SparkContext
+context()
+The SparkContext that this RDD was created 
on.
+
+
+
+long
+count()
+Return the number of elements in the RDD.
+
+
+
+PartialResultBoundedDouble
+countApprox(longtimeout)
+Approximate version of count() that returns a potentially 
incomplete result
+ within a timeout, even if not all tasks have finished.
+
+
+
+PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+Approximate version of count() that returns a potentially 
incomplete result
+ within a timeout, even if not all tasks have finished.
+
+
+
+long
+countApproxDistinct(doublerelativeSD)
+Return approximate number of distinct elements in the 
RDD.
+
+
+
+JavaFutureActionLong
+countAsync()
+The asynchronous version of count, which 
returns a
+ future for counting the number of elements in this RDD.
+
+
+
+java.util.MapT,Long
+countByValue()
+Return the count of each unique value in this RDD as a map 
of (value, count) pairs.
+
+
+
+PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout)
+Approximate version of countByValue().
+
+
+
+PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence)
+Approximate version of countByValue().
+
+
+
+T
+first()
+Return the first element in this RDD.
+
+
+
+UJavaRDDU
+flatMap(FlatMapFunctionT,Uf)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+JavaDoubleRDD
+flatMapToDouble(DoubleFlatMapFunctionTf)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+K2,V2JavaPairRDDK2,V2
+flatMapToPair(PairFlatMapFunctionT,K2,V2f)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+T
+fold(TzeroValue,
+Function2T,T,Tf)
+Aggregate the elements of each partition, and then the 
results for all the partitions, using a
+ given associative function and a neutral "zero value".
+
+
+
+void
+foreach(VoidFunctionTf)
+Applies a function f to all elements of this RDD.
+
+
+

[29/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/lib/jquery.js
--
diff --git a/site/docs/2.1.2/api/java/lib/jquery.js 
b/site/docs/2.1.2/api/java/lib/jquery.js
new file mode 100644
index 000..bc3fbc8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/lib/jquery.js
@@ -0,0 +1,2 @@
+/*! jQuery v1.8.2 jquery.com | jquery.org/license */
+(function(a,b){function G(a){var b=F[a]={};return 
p.each(a.split(s),function(a,c){b[c]=!0}),b}function 
J(a,c,d){if(d===b&===1){var 
e="data-"+c.replace(I,"-$1").toLowerCase();d=a.getAttribute(e);if(typeof 
d=="string"){try{d=d==="true"?!0:d==="false"?!1:d==="null"?null:+d+""===d?+d:H.test(d)?p.parseJSON(d):d}catch(f){}p.data(a,c,d)}else
 d=b}return d}function K(a){var b;for(b in 
a){if(b==="data"&(a[b]))continue;if(b!=="toJSON")return!1}return!0}function
 ba(){return!1}function bb(){return!0}function 
bh(a){return!a||!a.parentNode||a.parentNode.nodeType===11}function bi(a,b){do 
a=a[b];while(a&!==1);return a}function 
bj(a,b,c){b=b||0;if(p.isFunction(b))return p.grep(a,function(a,d){var 
e=!!b.call(a,d,a);return e===c});if(b.nodeType)return 
p.grep(a,function(a,d){return a===b===c});if(typeof b=="string"){var 
d=p.grep(a,function(a){return a.nodeType===1});if(be.test(b))return 
p.filter(b,d,!c);b=p.filter(b,d)}return p.grep(a,function(a,d){return p.inArray(
 a,b)>=0===c})}function bk(a){var 
b=bl.split("|"),c=a.createDocumentFragment();if(c.createElement)while(b.length)c.createElement(b.pop());return
 c}function bC(a,b){return 
a.getElementsByTagName(b)[0]||a.appendChild(a.ownerDocument.createElement(b))}function
 bD(a,b){if(b.nodeType!==1||!p.hasData(a))return;var 
c,d,e,f=p._data(a),g=p._data(b,f),h=f.events;if(h){delete 
g.handle,g.events={};for(c in 
h)for(d=0,e=h[c].length;d").appendTo(e.body),c=b.css("display");b.remove();if(c==="none"||c===""){bI=e.body.appendChild(bI||p.extend(e.createElement("iframe"),{frameBorder:0,width:0,height:0}));if(!bJ||!bI.
 
createElement)bJ=(bI.contentWindow||bI.contentDocument).document,bJ.write(""),bJ.close();b=bJ.body.appendChild(bJ.createElement(a)),c=bH(b,"display"),e.body.removeChild(bI)}return
 bS[a]=c,c}function ci(a,b,c,d){var 
e;if(p.isArray(b))p.each(b,function(b,e){c||ce.test(a)?d(a,e):ci(a+"["+(typeof 
e=="object"?b:"")+"]",e,c,d)});else if(!c&(b)==="object")for(e in 
b)ci(a+"["+e+"]",b[e],c,d);else d(a,b)}function cz(a){return 
function(b,c){typeof b!="string"&&(c=b,b="*");var 

[21/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html 
b/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
new file mode 100644
index 000..75e4293
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
@@ -0,0 +1,394 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+RangePartitioner (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
RangePartitionerK,V
+
+
+
+Object
+
+
+org.apache.spark.Partitioner
+
+
+org.apache.spark.RangePartitionerK,V
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+
+public class RangePartitioner<K,V>
+extends Partitioner
+A Partitioner that partitions sortable records by range into roughly
+ equal ranges. The ranges are determined by sampling the content of the RDD passed in.
+
+See Also: Serialized Form
+Note: The actual number of partitions created by the RangePartitioner might not be the same
+ as the partitions parameter, in the case where the number of sampled records is less than
+ the value of partitions.
+
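In practice a RangePartitioner is usually created for you: sortByKey on a pair RDD samples the keys and range-partitions by them. A minimal Java sketch, with invented key/value pairs:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    import scala.Tuple2;

    public class RangePartitionSketch {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("range-partition-sketch").setMaster("local[2]"));

        JavaPairRDD<Integer, String> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>(10, "j"), new Tuple2<>(3, "c"),
            new Tuple2<>(7, "g"), new Tuple2<>(1, "a")));

        // sortByKey samples the keys, builds a RangePartitioner over them, and
        // repartitions the data so each partition holds a contiguous key range.
        JavaPairRDD<Integer, String> sorted = pairs.sortByKey(true, 2);

        System.out.println(sorted.collect());  // [(1,a), (3,c), (7,g), (10,j)]
        sc.stop();
      }
    }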
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+RangePartitioner(intpartitions,
+RDD? extends scala.Product2K,Vrdd,
+booleanascending,
+scala.math.OrderingKevidence$1,
+scala.reflect.ClassTagKevidence$2)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static KObject
+determineBounds(scala.collection.mutable.ArrayBufferscala.Tuple2K,Objectcandidates,
+   intpartitions,
+   scala.math.OrderingKevidence$4,
+   scala.reflect.ClassTagKevidence$5)
+Determines the bounds for range partitioning from 
candidates with weights indicating how many
+ items each represents.
+
+
+
+boolean
+equals(Objectother)
+
+
+int
+getPartition(Objectkey)
+
+
+int
+hashCode()
+
+
+int
+numPartitions()
+
+
+static 
Kscala.Tuple2Object,scala.Tuple3Object,Object,Object[]
+sketch(RDDKrdd,
+  intsampleSizePerPartition,
+  scala.reflect.ClassTagKevidence$3)
+Sketches the input RDD via reservoir sampling on each 
partition.
+
+
+
+
+
+
+
+Methods inherited from classorg.apache.spark.Partitioner
+defaultPartitioner
+
+
+
+
+
+Methods inherited from classObject
+getClass, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+RangePartitioner
+publicRangePartitioner(intpartitions,
+RDD? extends scala.Product2K,Vrdd,
+booleanascending,
+scala.math.OrderingKevidence$1,
+scala.reflect.ClassTagKevidence$2)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+sketch
+public 
staticKscala.Tuple2Object,scala.Tuple3Object,Object,Object[]sketch(RDDKrdd,
+   
intsampleSizePerPartition,
+   
scala.reflect.ClassTagKevidence$3)
+Sketches the input RDD via reservoir sampling on each 
partition.
+ 
+Parameters: rdd - the input RDD to sketch; sampleSizePerPartition - max sample size per partition; evidence$3 - (undocumented)
+Returns: (total number of items, an array of (partitionId, number of items, sample))
+
+
+
+
+
+
+
+determineBounds
+public 
staticKObjectdetermineBounds(scala.collection.mutable.ArrayBufferscala.Tuple2K,Objectcandidates,
+ intpartitions,
+ scala.math.OrderingKevidence$4,
+ scala.reflect.ClassTagKevidence$5)
+Determines the bounds for range partitioning from 
candidates with weights indicating how many
+ items each represents. Usually this is 1 over the probability used to sample 
this candidate.
+ 
+Parameters: candidates - unordered candidates with weights; partitions - number of partitions; evidence$4 - (undocumented); evidence$5 - (undocumented)
+Returns: selected bounds
+
+
+
+
+
+
+
+numPartitions
+publicintnumPartitions()
+
+Specified by:
+numPartitionsin
 classPartitioner
+
+
+
+
+
+
+
+

[20/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
new file mode 100644
index 000..4c33803
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
@@ -0,0 +1,1147 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+SparkConf (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark
+Class SparkConf
+
+
+
+Object
+
+
+org.apache.spark.SparkConf
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, Cloneable
+
+
+
+public class SparkConf
+extends Object
+implements scala.Cloneable, scala.Serializable
+Configuration for a Spark application. Used to set various 
Spark parameters as key-value pairs.
+ 
+ Most of the time, you would create a SparkConf object with new 
SparkConf(), which will load
+ values from any spark.* Java system properties set in your 
application as well. In this case,
+ parameters you set directly on the SparkConf object take 
priority over system properties.
+ 
+ For unit tests, you can also call new SparkConf(false) to skip 
loading external settings and
+ get the same configuration no matter what the system properties are.
+ 
+ All setter methods in this class support chaining. For example, you can write
+ new SparkConf().setMaster("local").setAppName("My app").
+ 
+ param:  loadDefaults whether to also load values from Java system properties
+ 
+See Also: Serialized Form
+Note: Once a SparkConf object is passed to Spark, it is cloned and can no longer be modified
+ by the user. Spark does not support modifying the configuration at runtime.
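A short Java sketch of the behaviour described above; the property values are made up, and getSizeAsBytes is one of the getters listed in the method summary below:

    import org.apache.spark.SparkConf;

    public class SparkConfSketch {
      public static void main(String[] args) {
        // new SparkConf() also picks up any spark.* Java system properties;
        // values set directly on the object take priority over them.
        SparkConf conf = new SparkConf()
            .setMaster("local[2]")
            .setAppName("My app")
            .set("spark.executor.memory", "2g");

        System.out.println(conf.get("spark.master"));                     // local[2]
        System.out.println(conf.getSizeAsBytes("spark.executor.memory")); // 2147483648
        System.out.println(conf.contains("spark.shuffle.compress"));      // false unless set elsewhere

        // new SparkConf(false) skips the system properties, which is handy in unit tests.
        SparkConf testConf = new SparkConf(false).setAppName("test");
        System.out.println(testConf.get("spark.app.name"));               // test
      }
    }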
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkConf()
+Create a SparkConf that loads defaults from system 
properties and the classpath
+
+
+
+SparkConf(booleanloadDefaults)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+SparkConf
+clone()
+Copy this object
+
+
+
+boolean
+contains(Stringkey)
+Does the configuration contain a given parameter?
+
+
+
+String
+get(Stringkey)
+Get a parameter; throws a NoSuchElementException if it's 
not set
+
+
+
+String
+get(Stringkey,
+   StringdefaultValue)
+Get a parameter, falling back to a default if not set
+
+
+
+scala.Tuple2String,String[]
+getAll()
+Get all parameters as a list of pairs
+
+
+
+scala.Tuple2String,String[]
+getAllWithPrefix(Stringprefix)
+Get all parameters that start with prefix
+
+
+
+String
+getAppId()
+Returns the Spark application id, valid in the Driver after 
TaskScheduler registration and
+ from the start in the Executor.
+
+
+
+scala.collection.immutable.MapObject,String
+getAvroSchema()
+Gets all the avro schemas in the configuration used in the 
generic Avro record serializer
+
+
+
+boolean
+getBoolean(Stringkey,
+  booleandefaultValue)
+Get a parameter as a boolean, falling back to a default if 
not set
+
+
+
+static scala.OptionString
+getDeprecatedConfig(Stringkey,
+   SparkConfconf)
+Looks for available deprecated keys for the given config 
option, and return the first
+ value available.
+
+
+
+double
+getDouble(Stringkey,
+ doubledefaultValue)
+Get a parameter as a double, falling back to a default if 
not set
+
+
+
+scala.collection.Seqscala.Tuple2String,String
+getExecutorEnv()
+Get all executor environment variables set on this 
SparkConf
+
+
+
+int
+getInt(Stringkey,
+  intdefaultValue)
+Get a parameter as an integer, falling back to a default if 
not set
+
+
+
+long
+getLong(Stringkey,
+   longdefaultValue)
+Get a parameter as a long, falling back to a default if not 
set
+
+
+
+scala.OptionString
+getOption(Stringkey)
+Get a parameter as an Option
+
+
+
+long
+getSizeAsBytes(Stringkey)
+Get a size parameter as bytes; throws a 
NoSuchElementException if it's not set.
+
+
+
+long
+getSizeAsBytes(Stringkey,
+  longdefaultValue)
+Get a size parameter as bytes, falling back to a default if 
not set.
+
+
+
+long
+getSizeAsBytes(Stringkey,
+  StringdefaultValue)
+Get a size parameter as bytes, falling back to a 

[45/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/date_format.html
--
diff --git a/site/docs/2.1.2/api/R/date_format.html 
b/site/docs/2.1.2/api/R/date_format.html
new file mode 100644
index 000..7f9ad3c
--- /dev/null
+++ b/site/docs/2.1.2/api/R/date_format.html
@@ -0,0 +1,87 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: date_format
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+date_format {SparkR}R 
Documentation
+
+date_format
+
+Description
+
+Converts a date/timestamp/string to a value of string in the format 
specified by the date
+format given by the second argument.
+
+
+
+Usage
+
+
+date_format(y, x)
+
+## S4 method for signature 'Column,character'
+date_format(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on.
+
+x
+
+date format specification.
+
+
+
+
+Details
+
+A pattern could be for instance
+dd.MM.yyyy and could return a string like '18.03.1993'. All
+pattern letters of java.text.SimpleDateFormat can be used.
+
+Note: Use specialized functions like year whenever possible; they benefit from a
+specialized implementation.
+
+
+
+Note
+
+date_format since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_sub,
+datediff, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: date_format(df$t, MM/dd/yyy)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+
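The same date_format function is exposed through the Java and Scala DataFrame APIs; a minimal Java sketch of the dd.MM.yyyy pattern described above (the session name and column are invented):

    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.date_format;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class DateFormatSketch {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .master("local[2]").appName("date-format-sketch").getOrCreate();

        // One row with a single date column, just to exercise the pattern.
        Dataset<Row> df = spark.sql("SELECT to_date('1993-03-18') AS d");

        // dd.MM.yyyy renders 1993-03-18 as 18.03.1993 (SimpleDateFormat letters).
        df.select(date_format(col("d"), "dd.MM.yyyy")).show();

        spark.stop();
      }
    }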

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/date_sub.html
--
diff --git a/site/docs/2.1.2/api/R/date_sub.html 
b/site/docs/2.1.2/api/R/date_sub.html
new file mode 100644
index 000..89d6661
--- /dev/null
+++ b/site/docs/2.1.2/api/R/date_sub.html
@@ -0,0 +1,75 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: date_sub
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+date_sub 
{SparkR}R Documentation
+
+date_sub
+
+Description
+
+Returns the date that is x days before
+
+
+
+Usage
+
+
+date_sub(y, x)
+
+## S4 method for signature 'Column,numeric'
+date_sub(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on
+
+x
+
+Number of days to subtract
+
+
+
+
+Note
+
+date_sub since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+datediff, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: date_sub(df$d, 1)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/datediff.html
--
diff --git a/site/docs/2.1.2/api/R/datediff.html 
b/site/docs/2.1.2/api/R/datediff.html
new file mode 100644
index 000..78ff84c
--- /dev/null
+++ b/site/docs/2.1.2/api/R/datediff.html
@@ -0,0 +1,75 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: datediff
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+datediff 
{SparkR}R Documentation
+
+datediff
+
+Description
+
+Returns the number of days from start to end.
+
+
+
+Usage
+
+
+datediff(y, x)
+
+## S4 method for signature 'Column'
+datediff(y, x)
+
+
+
+Arguments
+
+
+y
+
+end Column to use.
+
+x
+
+start Column to use.
+
+
+
+
+Note
+
+datediff since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: datediff(df$c, x)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/dayofmonth.html

[23/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html 
b/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
new file mode 100644
index 000..b236e74
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
@@ -0,0 +1,499 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+InternalAccumulator (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
InternalAccumulator
+
+
+
+Object
+
+
+org.apache.spark.InternalAccumulator
+
+
+
+
+
+
+
+
+public class InternalAccumulator
+extends Object
+A collection of fields and methods concerned with internal 
accumulators that represent
+ task level metrics.
+
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+Nested Classes
+
+Modifier and Type
+Class and Description
+
+
+static class
+InternalAccumulator.input$
+
+
+static class
+InternalAccumulator.output$
+
+
+static class
+InternalAccumulator.shuffleRead$
+
+
+static class
+InternalAccumulator.shuffleWrite$
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+InternalAccumulator()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static String
+DISK_BYTES_SPILLED()
+
+
+static String
+EXECUTOR_CPU_TIME()
+
+
+static String
+EXECUTOR_DESERIALIZE_CPU_TIME()
+
+
+static String
+EXECUTOR_DESERIALIZE_TIME()
+
+
+static String
+EXECUTOR_RUN_TIME()
+
+
+static String
+INPUT_METRICS_PREFIX()
+
+
+static String
+JVM_GC_TIME()
+
+
+static String
+MEMORY_BYTES_SPILLED()
+
+
+static String
+METRICS_PREFIX()
+
+
+static String
+OUTPUT_METRICS_PREFIX()
+
+
+static String
+PEAK_EXECUTION_MEMORY()
+
+
+static String
+RESULT_SERIALIZATION_TIME()
+
+
+static String
+RESULT_SIZE()
+
+
+static String
+SHUFFLE_READ_METRICS_PREFIX()
+
+
+static String
+SHUFFLE_WRITE_METRICS_PREFIX()
+
+
+static String
+TEST_ACCUM()
+
+
+static String
+UPDATED_BLOCK_STATUSES()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+InternalAccumulator
+publicInternalAccumulator()
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+METRICS_PREFIX
+public staticStringMETRICS_PREFIX()
+
+
+
+
+
+
+
+SHUFFLE_READ_METRICS_PREFIX
+public staticStringSHUFFLE_READ_METRICS_PREFIX()
+
+
+
+
+
+
+
+SHUFFLE_WRITE_METRICS_PREFIX
+public staticStringSHUFFLE_WRITE_METRICS_PREFIX()
+
+
+
+
+
+
+
+OUTPUT_METRICS_PREFIX
+public staticStringOUTPUT_METRICS_PREFIX()
+
+
+
+
+
+
+
+INPUT_METRICS_PREFIX
+public staticStringINPUT_METRICS_PREFIX()
+
+
+
+
+
+
+
+EXECUTOR_DESERIALIZE_TIME
+public staticStringEXECUTOR_DESERIALIZE_TIME()
+
+
+
+
+
+
+
+EXECUTOR_DESERIALIZE_CPU_TIME
+public staticStringEXECUTOR_DESERIALIZE_CPU_TIME()
+
+
+
+
+
+
+
+EXECUTOR_RUN_TIME
+public staticStringEXECUTOR_RUN_TIME()
+
+
+
+
+
+
+
+EXECUTOR_CPU_TIME
+public staticStringEXECUTOR_CPU_TIME()
+
+
+
+
+
+
+
+RESULT_SIZE
+public staticStringRESULT_SIZE()
+
+
+
+
+
+
+
+JVM_GC_TIME
+public staticStringJVM_GC_TIME()
+
+
+
+
+
+
+
+RESULT_SERIALIZATION_TIME
+public staticStringRESULT_SERIALIZATION_TIME()
+
+
+
+
+
+
+
+MEMORY_BYTES_SPILLED
+public staticStringMEMORY_BYTES_SPILLED()
+
+
+
+
+
+
+
+DISK_BYTES_SPILLED
+public staticStringDISK_BYTES_SPILLED()
+
+
+
+
+
+
+
+PEAK_EXECUTION_MEMORY
+public staticStringPEAK_EXECUTION_MEMORY()
+
+
+
+
+
+
+
+UPDATED_BLOCK_STATUSES
+public staticStringUPDATED_BLOCK_STATUSES()
+
+
+
+
+
+
+
+TEST_ACCUM
+public staticStringTEST_ACCUM()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+


[24/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/ExecutorRemoved.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/ExecutorRemoved.html 
b/site/docs/2.1.2/api/java/org/apache/spark/ExecutorRemoved.html
new file mode 100644
index 000..60dcc93
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/ExecutorRemoved.html
@@ -0,0 +1,356 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+ExecutorRemoved (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark
+Class ExecutorRemoved
+
+
+
+Object
+
+
+org.apache.spark.ExecutorRemoved
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, scala.Equals, scala.Product
+
+
+
+public class ExecutorRemoved
+extends Object
+implements scala.Product, scala.Serializable
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+ExecutorRemoved(StringexecutorId)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Objectthat)
+
+
+abstract static boolean
+equals(Objectthat)
+
+
+String
+executorId()
+
+
+abstract static int
+productArity()
+
+
+abstract static Object
+productElement(intn)
+
+
+static 
scala.collection.IteratorObject
+productIterator()
+
+
+static String
+productPrefix()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfacescala.Product
+productArity, productElement, productIterator, productPrefix
+
+
+
+
+
+Methods inherited from interfacescala.Equals
+canEqual, equals
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+ExecutorRemoved
+publicExecutorRemoved(StringexecutorId)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+canEqual
+public abstract staticbooleancanEqual(Objectthat)
+
+
+
+
+
+
+
+equals
+public abstract staticbooleanequals(Objectthat)
+
+
+
+
+
+
+
+productElement
+public abstract staticObjectproductElement(intn)
+
+
+
+
+
+
+
+productArity
+public abstract staticintproductArity()
+
+
+
+
+
+
+
+productIterator
+public 
staticscala.collection.IteratorObjectproductIterator()
+
+
+
+
+
+
+
+productPrefix
+public staticStringproductPrefix()
+
+
+
+
+
+
+
+executorId
+publicStringexecutorId()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html 
b/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
new file mode 100644
index 000..4edc799
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
@@ -0,0 +1,323 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+ExpireDeadHosts (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class ExpireDeadHosts
+
+
+
+Object
+
+

[01/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/asf-site 0490125a8 -> a6155a89d


http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
new file mode 100644
index 000..19fea40
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
@@ -0,0 +1,2158 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+RRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.r
+Class RRDDT
+
+
+
+Object
+
+
+org.apache.spark.rdd.RDDU
+
+
+org.apache.spark.api.r.BaseRRDDT,byte[]
+
+
+org.apache.spark.api.r.RRDDT
+
+
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+
+public class RRDDT
+extends BaseRRDDT,byte[]
+An RDD that stores serialized R objects as 
Array[Byte].
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+RRDD(RDDTparent,
+byte[]func,
+Stringdeserializer,
+Stringserializer,
+byte[]packageNames,
+Object[]broadcastVars,
+scala.reflect.ClassTagTevidence$4)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static RDDT
+$plus$plus(RDDTother)
+
+
+static UU
+aggregate(UzeroValue,
+ scala.Function2U,T,UseqOp,
+ scala.Function2U,U,UcombOp,
+ scala.reflect.ClassTagUevidence$30)
+
+
+JavaRDDbyte[]
+asJavaRDD()
+
+
+static RDDT
+cache()
+
+
+static URDDscala.Tuple2T,U
+cartesian(RDDUother,
+ scala.reflect.ClassTagUevidence$5)
+
+
+static void
+checkpoint()
+
+
+static RDDT
+coalesce(intnumPartitions,
+booleanshuffle,
+scala.OptionPartitionCoalescerpartitionCoalescer,
+scala.math.OrderingTord)
+
+
+static boolean
+coalesce$default$2()
+
+
+static scala.OptionPartitionCoalescer
+coalesce$default$3()
+
+
+static scala.math.OrderingT
+coalesce$default$4(intnumPartitions,
+  booleanshuffle,
+  scala.OptionPartitionCoalescerpartitionCoalescer)
+
+
+static Object
+collect()
+
+
+static URDDU
+collect(scala.PartialFunctionT,Uf,
+   scala.reflect.ClassTagUevidence$29)
+
+
+static 
scala.collection.IteratorU
+compute(Partitionpartition,
+   TaskContextcontext)
+
+
+static SparkContext
+context()
+
+
+static long
+count()
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+
+
+static double
+countApprox$default$2()
+
+
+static long
+countApproxDistinct(doublerelativeSD)
+
+
+static long
+countApproxDistinct(intp,
+   intsp)
+
+
+static double
+countApproxDistinct$default$1()
+
+
+static 
scala.collection.MapT,Object
+countByValue(scala.math.OrderingTord)
+
+
+static scala.math.OrderingT
+countByValue$default$1()
+
+
+static PartialResultscala.collection.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence,
+  scala.math.OrderingTord)
+
+
+static double
+countByValueApprox$default$2()
+
+
+static scala.math.OrderingT
+countByValueApprox$default$3(longtimeout,
+doubleconfidence)
+
+
+static JavaRDDbyte[]
+createRDDFromArray(JavaSparkContextjsc,
+  byte[][]arr)
+Create an RRDD given a sequence of byte arrays.
+
+
+
+static JavaRDDbyte[]
+createRDDFromFile(JavaSparkContextjsc,
+ StringfileName,
+ intparallelism)
+Create an RRDD given a temporary file name.
+
+
+
+static JavaSparkContext
+createSparkContext(Stringmaster,
+  StringappName,
+  StringsparkHome,
+  String[]jars,
+  java.util.MapObject,ObjectsparkEnvirMap,
+  
java.util.MapObject,ObjectsparkExecutorEnvMap)
+
+
+static scala.collection.SeqDependency?
+dependencies()
+
+
+static RDDT
+distinct()
+
+
+static RDDT
+distinct(intnumPartitions,
+scala.math.OrderingTord)
+
+
+static scala.math.OrderingT
+distinct$default$2(intnumPartitions)
+
+
+static RDDT
+filter(scala.Function1T,Objectf)
+
+
+static T
+first()
+
+
+static URDDU

[46/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/columns.html
--
diff --git a/site/docs/2.1.2/api/R/columns.html 
b/site/docs/2.1.2/api/R/columns.html
new file mode 100644
index 000..3e7242a
--- /dev/null
+++ b/site/docs/2.1.2/api/R/columns.html
@@ -0,0 +1,137 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Column Names of 
SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+colnames 
{SparkR}R Documentation
+
+Column Names of SparkDataFrame
+
+Description
+
+Return all column names as a list.
+
+
+
+Usage
+
+
+colnames(x, do.NULL = TRUE, prefix = "col")
+
+colnames(x) - value
+
+columns(x)
+
+## S4 method for signature 'SparkDataFrame'
+columns(x)
+
+## S4 method for signature 'SparkDataFrame'
+names(x)
+
+## S4 replacement method for signature 'SparkDataFrame'
+names(x) - value
+
+## S4 method for signature 'SparkDataFrame'
+colnames(x)
+
+## S4 replacement method for signature 'SparkDataFrame'
+colnames(x) - value
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+do.NULL
+
+currently not used.
+
+prefix
+
+currently not used.
+
+value
+
+a character vector. Must have the same length as the number
+of columns in the SparkDataFrame.
+
+
+
+
+Note
+
+columns since 1.4.0
+
+names since 1.5.0
+
+names- since 1.5.0
+
+colnames since 1.6.0
+
+colnames- since 1.6.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+limit, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+take, union,
+unpersist, withColumn,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path - path/to/file.json
+##D df - read.json(path)
+##D columns(df)
+##D colnames(df)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/concat.html
--
diff --git a/site/docs/2.1.2/api/R/concat.html 
b/site/docs/2.1.2/api/R/concat.html
new file mode 100644
index 000..6d226c8
--- /dev/null
+++ b/site/docs/2.1.2/api/R/concat.html
@@ -0,0 +1,77 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: concat
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+concat 
{SparkR}R Documentation
+
+concat
+
+Description
+
+Concatenates multiple input string columns together into a single string 
column.
+
+
+
+Usage
+
+
+concat(x, ...)
+
+## S4 method for signature 'Column'
+concat(x, ...)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on
+
+...
+
+other columns
+
+
+
+
+Note
+
+concat since 1.5.0
+
+
+
+See Also
+
+Other string_funcs: ascii,
+base64, concat_ws,
+decode, encode,
+format_number, format_string,
+initcap, instr,
+length, levenshtein,
+locate, lower,
+lpad, ltrim,
+regexp_extract,
+regexp_replace, reverse,
+rpad, rtrim,
+soundex, substring_index,
+translate, trim,
+unbase64, upper
+
+
+
+Examples
+
+## Not run: concat(df$strings, df$strings2)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/concat_ws.html
--
diff --git a/site/docs/2.1.2/api/R/concat_ws.html 
b/site/docs/2.1.2/api/R/concat_ws.html
new file mode 100644
index 000..e30c492
--- /dev/null
+++ b/site/docs/2.1.2/api/R/concat_ws.html
@@ -0,0 +1,82 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: concat_ws
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+concat_ws 
{SparkR}R Documentation
+
+concat_ws
+

[47/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/ascii.html
--
diff --git a/site/docs/2.1.2/api/R/ascii.html b/site/docs/2.1.2/api/R/ascii.html
new file mode 100644
index 000..8d8cfc2
--- /dev/null
+++ b/site/docs/2.1.2/api/R/ascii.html
@@ -0,0 +1,74 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: ascii
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+ascii 
{SparkR}R Documentation
+
+ascii
+
+Description
+
+Computes the numeric value of the first character of the string column, and returns the
+result as an int column.
+
+
+
+Usage
+
+
+ascii(x)
+
+## S4 method for signature 'Column'
+ascii(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+ascii since 1.5.0
+
+
+
+See Also
+
+Other string_funcs: base64,
+concat_ws, concat,
+decode, encode,
+format_number, format_string,
+initcap, instr,
+length, levenshtein,
+locate, lower,
+lpad, ltrim,
+regexp_extract,
+regexp_replace, reverse,
+rpad, rtrim,
+soundex, substring_index,
+translate, trim,
+unbase64, upper
+
+
+
+Examples
+
+## Not run: ascii(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/asin.html
--
diff --git a/site/docs/2.1.2/api/R/asin.html b/site/docs/2.1.2/api/R/asin.html
new file mode 100644
index 000..1cc2a94
--- /dev/null
+++ b/site/docs/2.1.2/api/R/asin.html
@@ -0,0 +1,78 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: asin
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+asin 
{SparkR}R Documentation
+
+asin
+
+Description
+
+Computes the sine inverse of the given value; the returned angle is in the 
range
+-pi/2 through pi/2.
+
+
+
+Usage
+
+
+## S4 method for signature 'Column'
+asin(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+asin since 1.5.0
+
+
+
+See Also
+
+Other math_funcs: acos, atan2,
+atan, bin,
+bround, cbrt,
+ceil, conv,
+corr, cosh,
+cos, covar_pop,
+cov, expm1,
+exp, factorial,
+floor, hex,
+hypot, log10,
+log1p, log2,
+log, pmod,
+rint, round,
+shiftLeft,
+shiftRightUnsigned,
+shiftRight, signum,
+sinh, sin,
+sqrt, tanh,
+tan, toDegrees,
+toRadians, unhex
+
+
+
+Examples
+
+## Not run: asin(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/atan.html
--
diff --git a/site/docs/2.1.2/api/R/atan.html b/site/docs/2.1.2/api/R/atan.html
new file mode 100644
index 000..bc3fa97
--- /dev/null
+++ b/site/docs/2.1.2/api/R/atan.html
@@ -0,0 +1,77 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: atan
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+atan 
{SparkR}R Documentation
+
+atan
+
+Description
+
+Computes the tangent inverse of the given value.
+
+
+
+Usage
+
+
+## S4 method for signature 'Column'
+atan(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+atan since 1.5.0
+
+
+
+See Also
+
+Other math_funcs: acos, asin,
+atan2, bin,
+bround, cbrt,
+ceil, conv,
+corr, cosh,
+cos, covar_pop,
+cov, expm1,
+exp, factorial,
+floor, hex,
+hypot, log10,
+log1p, log2,
+log, pmod,
+rint, round,
+shiftLeft,
+shiftRightUnsigned,
+shiftRight, signum,
+sinh, sin,
+sqrt, tanh,
+tan, toDegrees,
+toRadians, unhex
+
+
+
+Examples
+
+## Not run: atan(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/atan2.html
--
diff --git a/site/docs/2.1.2/api/R/atan2.html b/site/docs/2.1.2/api/R/atan2.html
new file mode 100644
index 000..3160bd3
--- /dev/null
+++ b/site/docs/2.1.2/api/R/atan2.html
@@ -0,0 +1,82 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: atan2
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>

[17/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
new file mode 100644
index 000..a311bae
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
@@ -0,0 +1,247 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+SparkJobInfo (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark
+Interface SparkJobInfo
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+All Known Implementing Classes:
+SparkJobInfoImpl
+
+
+
+public interface SparkJobInfo
+extends java.io.Serializable
+Exposes information about Spark Jobs.
+
+ This interface is not designed to be implemented outside of Spark.  We may 
add additional methods
+ which may break binary compatibility with outside implementations.
+
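User code never implements this interface; instances come back from the status tracker. A hedged Java sketch (how long a finished job stays visible depends on timing and cleanup):

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.SparkJobInfo;
    import org.apache.spark.api.java.JavaSparkContext;

    public class JobInfoSketch {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("job-info-sketch").setMaster("local[2]"));

        sc.parallelize(Arrays.asList(1, 2, 3)).count();  // run one job

        // null group = jobs submitted without a job group; getJobInfo hands back SparkJobInfo.
        for (int jobId : sc.statusTracker().getJobIdsForGroup(null)) {
          SparkJobInfo info = sc.statusTracker().getJobInfo(jobId);  // null if already cleaned up
          if (info != null) {
            System.out.println("job " + info.jobId() + " status " + info.status()
                + " stages " + Arrays.toString(info.stageIds()));
          }
        }
        sc.stop();
      }
    }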
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+jobId()
+
+
+int[]
+stageIds()
+
+
+JobExecutionStatus
+status()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobId
+intjobId()
+
+
+
+
+
+
+
+stageIds
+int[]stageIds()
+
+
+
+
+
+
+
+status
+JobExecutionStatusstatus()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
new file mode 100644
index 000..1a94cdf
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
@@ -0,0 +1,306 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+SparkJobInfoImpl (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class SparkJobInfoImpl
+
+
+
+Object
+
+
+org.apache.spark.SparkJobInfoImpl
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, SparkJobInfo
+
+
+
+public class SparkJobInfoImpl
+extends Object
+implements SparkJobInfo
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkJobInfoImpl(intjobId,
+int[]stageIds,
+JobExecutionStatusstatus)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+jobId()
+
+
+int[]
+stageIds()
+
+
+JobExecutionStatus
+status()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+SparkJobInfoImpl
+publicSparkJobInfoImpl(intjobId,
+int[]stageIds,
+JobExecutionStatusstatus)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobId
+publicintjobId()
+
+Specified by:
+jobIdin
 interfaceSparkJobInfo
+
+
+
+
+
+
+
+
+stageIds
+publicint[]stageIds()
+
+Specified by:

[13/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaDoubleRDD.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaDoubleRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaDoubleRDD.html
new file mode 100644
index 000..a1da46f
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaDoubleRDD.html
@@ -0,0 +1,2084 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaDoubleRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class JavaDoubleRDD
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaDoubleRDD
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLikeDouble,JavaDoubleRDD
+
+
+
+public class JavaDoubleRDD
+extends Object
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaDoubleRDD(RDDObjectsrdd)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+
+
+JavaDoubleRDD
+cache()
+Persist this RDD with the default storage level 
(MEMORY_ONLY).
+
+
+
+static UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+
+
+static void
+checkpoint()
+
+
+scala.reflect.ClassTagDouble
+classTag()
+
+
+JavaDoubleRDD
+coalesce(intnumPartitions)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+JavaDoubleRDD
+coalesce(intnumPartitions,
+booleanshuffle)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+static java.util.ListT
+collect()
+
+
+static JavaFutureActionjava.util.ListT
+collectAsync()
+
+
+static java.util.ListT[]
+collectPartitions(int[]partitionIds)
+
+
+static SparkContext
+context()
+
+
+static long
+count()
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout)
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+
+
+static long
+countApproxDistinct(doublerelativeSD)
+
+
+static JavaFutureActionLong
+countAsync()
+
+
+static java.util.MapT,Long
+countByValue()
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout)
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence)
+
+
+JavaDoubleRDD
+distinct()
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaDoubleRDD
+distinct(intnumPartitions)
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaDoubleRDD
+filter(FunctionDouble,Booleanf)
+Return a new RDD containing only the elements that satisfy 
a predicate.
+
+
+
+Double
+first()
+Return the first element in this RDD.
+
+
+
+static UJavaRDDU
+flatMap(FlatMapFunctionT,Uf)
+
+
+static JavaDoubleRDD
+flatMapToDouble(DoubleFlatMapFunctionTf)
+
+
+static K2,V2JavaPairRDDK2,V2
+flatMapToPair(PairFlatMapFunctionT,K2,V2f)
+
+
+static T
+fold(TzeroValue,
+Function2T,T,Tf)
+
+
+static void
+foreach(VoidFunctionTf)
+
+
+static JavaFutureActionVoid
+foreachAsync(VoidFunctionTf)
+
+
+static void
+foreachPartition(VoidFunctionjava.util.IteratorTf)
+
+
+static JavaFutureActionVoid
+foreachPartitionAsync(VoidFunctionjava.util.IteratorTf)
+
+
+static JavaDoubleRDD
+fromRDD(RDDObjectrdd)
+
+
+static OptionalString
+getCheckpointFile()
+
+
+static int
+getNumPartitions()
+
+
+static StorageLevel
+getStorageLevel()
+
+
+static JavaRDDjava.util.ListT
+glom()
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf)
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf,
+   intnumPartitions)
+
+
+long[]
+histogram(double[]buckets)
+Compute a histogram using the provided buckets.
+
+
+
+long[]
+histogram(Double[]buckets,
+ booleanevenBuckets)
+
+
+scala.Tuple2double[],long[]
+histogram(intbucketCount)
+Compute a histogram of the data using bucketCount number of 
buckets evenly
+  spaced between the minimum and maximum of the RDD.
+
+
+
+static int
+id()
+
+
+JavaDoubleRDD
+intersection(JavaDoubleRDDother)
+Return the intersection of this RDD and another one.
+
+
+
+static boolean
+isCheckpointed()
+
+
+static boolean

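The histogram methods in the summary above are the main reason to reach for JavaDoubleRDD; a small Java sketch with arbitrary numbers:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaDoubleRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    import scala.Tuple2;

    public class HistogramSketch {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("histogram-sketch").setMaster("local[2]"));

        JavaDoubleRDD nums = sc.parallelizeDoubles(Arrays.asList(1.0, 2.0, 2.5, 9.0));

        // Two buckets evenly spaced between the RDD's min and max: [1,5) and [5,9].
        Tuple2<double[], long[]> h = nums.histogram(2);
        System.out.println(Arrays.toString(h._1()) + " -> " + Arrays.toString(h._2()));
        // [1.0, 5.0, 9.0] -> [3, 1]

        // Or supply the bucket boundaries yourself.
        long[] counts = nums.histogram(new double[]{0.0, 5.0, 10.0});
        System.out.println(Arrays.toString(counts));  // [3, 1]
        sc.stop();
      }
    }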
[31/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/index-all.html
--
diff --git a/site/docs/2.1.2/api/java/index-all.html 
b/site/docs/2.1.2/api/java/index-all.html
new file mode 100644
index 000..67505d4
--- /dev/null
+++ b/site/docs/2.1.2/api/java/index-all.html
@@ -0,0 +1,46481 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+Index (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev
+Next
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+
+
+$ABCDEFGHIJKLMNOPQRSTUVWXYZ_
+
+
+$
+
+$colon$bslash(B,
 Function2A, B, B) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$colon$plus(B,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$div$colon(B,
 Function2B, A, B) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$greater(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$greater(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$greater$eq(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$greater$eq(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$less(A) - 
Static method in class org.apache.spark.sql.types.Decimal
+
+$less(A) - 
Static method in class org.apache.spark.storage.RDDInfo
+
+$less$eq(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$less$eq(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$minus$greater(T)
 - Static method in class org.apache.spark.ml.param.DoubleParam
+
+$minus$greater(T)
 - Static method in class org.apache.spark.ml.param.FloatParam
+
+$plus$colon(B,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$eq(T) - 
Static method in class org.apache.spark.Accumulator
+
+Deprecated.
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.api.r.RRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.EdgeRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.VertexRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.HadoopRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.JdbcRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.NewHadoopRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.PartitionPruningRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.UnionRDD
+
+$plus$plus(GenTraversableOnceB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$colon(TraversableOnceB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$colon(TraversableB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$eq(R)
 - Static method in class org.apache.spark.Accumulator
+
+Deprecated.
+
+
+
+
+
+A
+
+abortJob(JobContext)
 - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+Aborts a job after the writes fail.
+
+abortJob(JobContext)
 - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
+abortTask(TaskAttemptContext)
 - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+Aborts a task after the writes have failed.
+
+abortTask(TaskAttemptContext)
 - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
+abs(Column)
 - Static method in class org.apache.spark.sql.functions
+
+Computes the absolute value.
+
+abs() - 
Method in class org.apache.spark.sql.types.Decimal
+
+absent() - 
Static method in class org.apache.spark.api.java.Optional
+
+AbsoluteError - Class in org.apache.spark.mllib.tree.loss
+
+:: DeveloperApi ::
+ Class for absolute error loss calculation (for regression).
+
+AbsoluteError()
 - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
+
+accept(Parsers)
 - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
+accept(ES,
 Function1ES, ListObject) - Static method in class 
org.apache.spark.ml.feature.RFormulaParser
+
+accept(String,
 PartialFunctionObject, U) - Static method in class 
org.apache.spark.ml.feature.RFormulaParser
+
+acceptIf(Function1Object,
 

[03/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
new file mode 100644
index 000..8153556
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
@@ -0,0 +1,219 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+VoidFunction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface 
VoidFunctionT
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface VoidFunctionT
+extends java.io.Serializable
+A function with no return value.
+
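This is the callback type that actions such as foreach accept. A tiny Java sketch showing the anonymous-class form and the equivalent lambda (the data is made up):

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.VoidFunction;

    public class VoidFunctionSketch {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
            new SparkConf().setAppName("void-function-sketch").setMaster("local[2]"));
        JavaRDD<String> words = sc.parallelize(Arrays.asList("a", "b", "c"));

        // Explicit anonymous class implementing call(T), which returns nothing.
        words.foreach(new VoidFunction<String>() {
          @Override
          public void call(String w) {
            System.out.println(w);
          }
        });

        // The interface has a single abstract method, so a lambda works as well.
        words.foreach(w -> System.out.println(w));
        sc.stop();
      }
    }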
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+call(Tt)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+voidcall(Tt)
+  throws Exception
+Throws:
+Exception
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
new file mode 100644
index 000..1dad476
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
@@ -0,0 +1,221 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+VoidFunction2 (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface 
VoidFunction2T1,T2
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface VoidFunction2T1,T2
+extends java.io.Serializable
+A two-argument function that takes arguments of type T1 and 
T2 with no return value.
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+call(T1v1,
+T2v2)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+voidcall(T1v1,
+T2v2)
+  throws Exception
+Throws:
+Exception
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/package-frame.html
--
diff --git 

[09/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
new file mode 100644
index 000..a7004a5
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
@@ -0,0 +1,1854 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class JavaRDDT
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaRDDT
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLikeT,JavaRDDT
+
+
+
+public class JavaRDDT
+extends Object
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaRDD(RDDTrdd,
+   scala.reflect.ClassTagTclassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+
+
+JavaRDDT
+cache()
+Persist this RDD with the default storage level 
(MEMORY_ONLY).
+
+
+
+static UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+
+
+static void
+checkpoint()
+
+
+scala.reflect.ClassTagT
+classTag()
+
+
+JavaRDDT
+coalesce(intnumPartitions)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+JavaRDDT
+coalesce(intnumPartitions,
+booleanshuffle)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+static java.util.ListT
+collect()
+
+
+static JavaFutureActionjava.util.ListT
+collectAsync()
+
+
+static java.util.ListT[]
+collectPartitions(int[]partitionIds)
+
+
+static SparkContext
+context()
+
+
+static long
+count()
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout)
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+
+
+static long
+countApproxDistinct(doublerelativeSD)
+
+
+static JavaFutureActionLong
+countAsync()
+
+
+static java.util.MapT,Long
+countByValue()
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout)
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence)
+
+
+JavaRDDT
+distinct()
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaRDDT
+distinct(intnumPartitions)
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaRDDT
+filter(FunctionT,Booleanf)
+Return a new RDD containing only the elements that satisfy 
a predicate.
+
+
+
+static T
+first()
+
+
+static UJavaRDDU
+flatMap(FlatMapFunctionT,Uf)
+
+
+static JavaDoubleRDD
+flatMapToDouble(DoubleFlatMapFunctionTf)
+
+
+static K2,V2JavaPairRDDK2,V2
+flatMapToPair(PairFlatMapFunctionT,K2,V2f)
+
+
+static T
+fold(TzeroValue,
+Function2T,T,Tf)
+
+
+static void
+foreach(VoidFunctionTf)
+
+
+static JavaFutureActionVoid
+foreachAsync(VoidFunctionTf)
+
+
+static void
+foreachPartition(VoidFunctionjava.util.IteratorTf)
+
+
+static JavaFutureActionVoid
+foreachPartitionAsync(VoidFunctionjava.util.IteratorTf)
+
+
+static TJavaRDDT
+fromRDD(RDDTrdd,
+   scala.reflect.ClassTagTevidence$1)
+
+
+static OptionalString
+getCheckpointFile()
+
+
+static int
+getNumPartitions()
+
+
+static StorageLevel
+getStorageLevel()
+
+
+static JavaRDDjava.util.ListT
+glom()
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf)
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf,
+   intnumPartitions)
+
+
+static int
+id()
+
+
+JavaRDDT
+intersection(JavaRDDTother)
+Return the intersection of this RDD and another one.
+
+
+
+static boolean
+isCheckpointed()
+
+
+static boolean
+isEmpty()
+
+
+static java.util.IteratorT
+iterator(Partitionsplit,
+TaskContexttaskContext)
+
+
+static UJavaPairRDDU,T
+keyBy(FunctionT,Uf)
+
+
+static RJavaRDDR
+map(FunctionT,Rf)
+
+
+static UJavaRDDU
+mapPartitions(FlatMapFunctionjava.util.IteratorT,Uf)
+
+
+static UJavaRDDU
+mapPartitions(FlatMapFunctionjava.util.IteratorT,Uf,
+ booleanpreservesPartitioning)
+
+
+static JavaDoubleRDD

[42/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/limit.html
--
diff --git a/site/docs/2.1.2/api/R/limit.html b/site/docs/2.1.2/api/R/limit.html
new file mode 100644
index 000..309b624
--- /dev/null
+++ b/site/docs/2.1.2/api/R/limit.html
@@ -0,0 +1,109 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Limit
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+limit 
{SparkR}R Documentation
+
+Limit
+
+Description
+
+Limit the resulting SparkDataFrame to the number of rows specified.
+
+
+
+Usage
+
+
+limit(x, num)
+
+## S4 method for signature 'SparkDataFrame,numeric'
+limit(x, num)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame
+
+num
+
+The number of rows to return
+
+
+
+
+Value
+
+A new SparkDataFrame containing the number of rows specified.
+
+
+
+Note
+
+limit since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D limitedDF <- limit(df, 10)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/lit.html
--
diff --git a/site/docs/2.1.2/api/R/lit.html b/site/docs/2.1.2/api/R/lit.html
new file mode 100644
index 000..e227436
--- /dev/null
+++ b/site/docs/2.1.2/api/R/lit.html
@@ -0,0 +1,72 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: lit
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+lit 
{SparkR}R Documentation
+
+lit
+
+Description
+
+A new Column is created to represent the literal 
value.
+If the parameter is a Column, it is returned 
unchanged.
+
+
+
+Usage
+
+
+lit(x)
+
+## S4 method for signature 'ANY'
+lit(x)
+
+
+
+Arguments
+
+
+x
+
+a literal value or a Column.
+
+
+
+
+Note
+
+lit since 1.5.0
+
+
+
+See Also
+
+Other normal_funcs: abs,
+bitwiseNOT, coalesce,
+column, expr,
+greatest, ifelse,
+isnan, least,
+nanvl, negate,
+randn, rand,
+struct, when
+
+
+
+Examples
+
+## Not run: 
+##D lit(df$name)
+##D select(df, lit("x"))
+##D select(df, lit("2015-01-01"))
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/locate.html
--
diff --git a/site/docs/2.1.2/api/R/locate.html 
b/site/docs/2.1.2/api/R/locate.html
new file mode 100644
index 000..8240ffd
--- /dev/null
+++ b/site/docs/2.1.2/api/R/locate.html
@@ -0,0 +1,92 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: locate
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+locate 
{SparkR}R Documentation
+
+locate
+
+Description
+
+Locate the position of the first occurrence of substr.
+
+
+
+Usage
+
+
+locate(substr, str, ...)
+
+## S4 method for signature 'character,Column'
+locate(substr, str, pos = 1)
+
+
+
+Arguments
+
+
+substr
+
+a character string to be matched.
+
+str
+
+a Column where matches are sought for each entry.
+
+...
+
+further arguments to be passed to or from other methods.
+
+pos
+
+start position of search.
+
+
+
+
+Details
+
+Note: the position is not zero-based but a 1-based index. Returns 0 if substr
+could not be found in str.
+
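The same 1-based behavior is visible through the DataFrame function of the same name in the Java/Scala API (org.apache.spark.sql.functions.locate). The following is a small illustrative Java sketch, not part of the generated page; the data and column name are made up:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.SparkSession;
    import java.util.Arrays;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.locate;

    public class LocateExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("locate-example").master("local[2]").getOrCreate();

        // A tiny sample column of strings.
        Dataset<Row> df = spark.createDataset(
            Arrays.asList("banana", "cherry"), Encoders.STRING()).toDF("s");

        // locate is 1-based: "banana" yields 1 for "b"; "cherry" yields 0 (not found).
        df.select(locate("b", col("s"))).show();
        spark.stop();
      }
    }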
+
+
+Note
+
+locate since 1.5.0
+
+
+
+See Also
+
+Other string_funcs: ascii,
+base64, concat_ws,

[25/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
new file mode 100644
index 000..85f6c0d
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
@@ -0,0 +1,489 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+ComplexFutureAction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
ComplexFutureActionT
+
+
+
+Object
+
+
+org.apache.spark.ComplexFutureActionT
+
+
+
+
+
+
+
+All Implemented Interfaces:
+FutureAction<T>, scala.concurrent.Awaitable<T>, scala.concurrent.Future<T>
+
+
+
+public class ComplexFutureAction<T>
+extends Object
+implements FutureAction<T>
+A FutureAction for actions 
that could trigger multiple Spark jobs. Examples include take,
+ takeSample. Cancellation works by setting the cancelled flag to true and 
cancelling any pending
+ jobs.
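For orientation, the multi-job actions mentioned above are reached from the Java API through methods such as takeAsync, which hand back a future for the result. A minimal, illustrative Java sketch (not part of the generated page; local master and sample data are made up):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaFutureAction;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import java.util.Arrays;
    import java.util.List;

    public class AsyncTakeExample {
      public static void main(String[] args) throws Exception {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setAppName("async-take").setMaster("local[2]"));
        JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

        // takeAsync returns immediately; the result is fetched from the future.
        JavaFutureAction<List<Integer>> future = rdd.takeAsync(3);
        List<Integer> firstThree = future.get();   // blocks until the underlying job(s) finish
        System.out.println(firstThree);

        jsc.stop();
      }
    }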
+
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+
+
+
+Nested classes/interfaces inherited from 
interfacescala.concurrent.Future
+scala.concurrent.Future.InternalCallbackExecutor$
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+ComplexFutureAction(scala.Function1JobSubmitter,scala.concurrent.FutureTrun)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+cancel()
+Cancels the execution of this action.
+
+
+
+boolean
+isCancelled()
+Returns whether the action has been cancelled.
+
+
+
+boolean
+isCompleted()
+Returns whether the action has already been completed with 
a value or an exception.
+
+
+
+scala.collection.SeqObject
+jobIds()
+Returns the job IDs run by the underlying async 
operation.
+
+
+
+Uvoid
+onComplete(scala.Function1scala.util.TryT,Ufunc,
+  scala.concurrent.ExecutionContextexecutor)
+When this action is completed, either through an exception, 
or a value, applies the provided
+ function.
+
+
+
+ComplexFutureActionT
+ready(scala.concurrent.duration.DurationatMost,
+ scala.concurrent.CanAwaitpermit)
+Blocks until this action completes.
+
+
+
+T
+result(scala.concurrent.duration.DurationatMost,
+  scala.concurrent.CanAwaitpermit)
+Awaits and returns the result (of type T) of this 
action.
+
+
+
+scala.Optionscala.util.TryT
+value()
+The value of this Future.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.FutureAction
+get
+
+
+
+
+
+Methods inherited from interfacescala.concurrent.Future
+andThen, collect, failed, fallbackTo, filter, flatMap, foreach, map, 
mapTo, onFailure, onSuccess, recover, recoverWith, transform, withFilter, 
zip
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+ComplexFutureAction
+publicComplexFutureAction(scala.Function1JobSubmitter,scala.concurrent.FutureTrun)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+cancel
+publicvoidcancel()
+Description copied from interface:FutureAction
+Cancels the execution of this action.
+
+Specified by:
+cancelin
 interfaceFutureActionT
+
+
+
+
+
+
+
+
+isCancelled
+publicbooleanisCancelled()
+Description copied from interface:FutureAction
+Returns whether the action has been cancelled.
+
+Specified by:
+isCancelledin
 interfaceFutureActionT
+Returns:(undocumented)
+
+
+
+
+
+
+
+ready
+publicComplexFutureActionTready(scala.concurrent.duration.DurationatMost,
+   scala.concurrent.CanAwaitpermit)
+ throws InterruptedException,
+java.util.concurrent.TimeoutException
+Description copied from interface:FutureAction
+Blocks until this action completes.
+ 
+
+Specified by:
+readyin
 interfaceFutureActionT
+Specified by:
+readyin 
interfacescala.concurrent.AwaitableT
+Parameters:atMost - 
maximum wait time, which may be negative (no waiting is done), Duration.Inf
+   for unbounded waiting, or a finite positive 
durationpermit - (undocumented)
+Returns:this FutureAction
+Throws:
+InterruptedException

[19/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
new file mode 100644
index 000..38c0b01
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
@@ -0,0 +1,2527 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+SparkContext (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class SparkContext
+
+
+
+Object
+
+
+org.apache.spark.SparkContext
+
+
+
+
+
+
+
+
+public class SparkContext
+extends Object
+Main entry point for Spark functionality. A SparkContext 
represents the connection to a Spark
+ cluster, and can be used to create RDDs, accumulators and broadcast variables 
on that cluster.
+ 
+ Only one SparkContext may be active per JVM.  You must stop() 
the active SparkContext before
+ creating a new one.  This limitation may eventually be removed; see 
SPARK-2243 for more details.
+ 
+ param:  config a Spark Config object describing the application 
configuration. Any settings in
+   this config override the default configs as well as system
properties.
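A minimal, illustrative Java sketch of the lifecycle described above (one active context per JVM, stopped before a new one is created); the app name and master URL are placeholders:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ContextLifecycle {
      public static void main(String[] args) {
        // Settings placed in the SparkConf override defaults and system properties.
        SparkConf conf = new SparkConf()
            .setAppName("context-lifecycle")
            .setMaster("local[*]");

        JavaSparkContext jsc = new JavaSparkContext(conf);
        try {
          System.out.println("application id: " + jsc.sc().applicationId());
        } finally {
          // Stop the active context before creating another one in the same JVM.
          jsc.stop();
        }
      }
    }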
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkContext()
+Create a SparkContext that loads settings from system 
properties (for instance, when
+ launching with ./bin/spark-submit).
+
+
+
+SparkContext(SparkConfconfig)
+
+
+SparkContext(Stringmaster,
+StringappName,
+SparkConfconf)
+Alternative constructor that allows setting common Spark 
properties directly
+
+
+
+SparkContext(Stringmaster,
+StringappName,
+StringsparkHome,
+scala.collection.SeqStringjars,
+scala.collection.MapString,Stringenvironment)
+Alternative constructor that allows setting common Spark 
properties directly
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R,TAccumulableR,T
+accumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+R,TAccumulableR,T
+accumulable(RinitialValue,
+   Stringname,
+   AccumulableParamR,Tparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+R,TAccumulableR,T
+accumulableCollection(RinitialValue,
+ 
scala.Function1R,scala.collection.generic.GrowableTevidence$9,
+ scala.reflect.ClassTagRevidence$10)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   AccumulatorParamTparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   Stringname,
+   AccumulatorParamTparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+void
+addFile(Stringpath)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addFile(Stringpath,
+   booleanrecursive)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addJar(Stringpath)
+Adds a JAR dependency for all tasks to be executed on this 
SparkContext in the future.
+
+
+
+void
+addSparkListener(org.apache.spark.scheduler.SparkListenerInterfacelistener)
+:: DeveloperApi ::
+ Register a listener to receive up-calls from events that happen during 
execution.
+
+
+
+scala.OptionString
+applicationAttemptId()
+
+
+String
+applicationId()
+A unique identifier for the Spark application.
+
+
+
+String
+appName()
+
+
+RDDscala.Tuple2String,PortableDataStream
+binaryFiles(Stringpath,
+   intminPartitions)
+Get an RDD for a Hadoop-readable dataset as 
PortableDataStream for each file
+ (useful for binary data)
+
+
+
+RDDbyte[]
+binaryRecords(Stringpath,
+ intrecordLength,
+ org.apache.hadoop.conf.Configurationconf)
+Load data from a flat binary file, assuming the length of 
each record is constant.
+
+
+
+TBroadcastT
+broadcast(Tvalue,
+ scala.reflect.ClassTagTevidence$11)
+Broadcast a read-only variable to the cluster, returning a
+ Broadcast object for reading it in 
distributed functions.
+
+
+
+void

[40/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/read.df.html
--
diff --git a/site/docs/2.1.2/api/R/read.df.html 
b/site/docs/2.1.2/api/R/read.df.html
new file mode 100644
index 000..fa64d11
--- /dev/null
+++ b/site/docs/2.1.2/api/R/read.df.html
@@ -0,0 +1,97 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Load a 
SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+read.df 
{SparkR}R Documentation
+
+Load a SparkDataFrame
+
+Description
+
+Returns the dataset in a data source as a SparkDataFrame
+
+
+
+Usage
+
+
+## Default S3 method:
+read.df(path = NULL, source = NULL, schema = NULL,
+  na.strings = "NA", ...)
+
+## Default S3 method:
+loadDF(path = NULL, source = NULL, schema = NULL, ...)
+
+
+
+Arguments
+
+
+path
+
+The path of files to load
+
+source
+
+The name of external data source
+
+schema
+
+The data schema defined in structType
+
+na.strings
+
+Default string value for NA when source is csv
+
+...
+
+additional external data source specific named properties.
+
+
+
+
+Details
+
+The data source is specified by the source and a set of 
options(...).
+If source is not specified, the default data source configured by
+spark.sql.sources.default will be used. 
+Similar to R read.csv, when source is csv, by 
default, a value of NA will be
+interpreted as NA.
+
+
+
+Value
+
+SparkDataFrame
+
+
+
+Note
+
+read.df since 1.4.0
+
+loadDF since 1.6.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D df1 <- read.df("path/to/file.json", source = "json")
+##D schema <- structType(structField("name", "string"),
+##D                      structField("info", "map<string,double>"))
+##D df2 <- read.df(mapTypeJsonPath, "json", schema)
+##D df3 <- loadDF("data/test_table", "parquet", mergeSchema = "true")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/read.jdbc.html
--
diff --git a/site/docs/2.1.2/api/R/read.jdbc.html 
b/site/docs/2.1.2/api/R/read.jdbc.html
new file mode 100644
index 000..f527c5b
--- /dev/null
+++ b/site/docs/2.1.2/api/R/read.jdbc.html
@@ -0,0 +1,105 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Create a SparkDataFrame 
representing the database table...
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+read.jdbc 
{SparkR}R Documentation
+
+Create a SparkDataFrame representing the database table accessible via 
JDBC URL
+
+Description
+
+Additional JDBC database connection properties can be set (...)
+
+
+
+Usage
+
+
+read.jdbc(url, tableName, partitionColumn = NULL, lowerBound = NULL,
+  upperBound = NULL, numPartitions = 0L, predicates = list(), ...)
+
+
+
+Arguments
+
+
+url
+
+JDBC database url of the form jdbc:subprotocol:subname
+
+tableName
+
+the name of the table in the external database
+
+partitionColumn
+
+the name of a column of integral type that will be used for partitioning
+
+lowerBound
+
+the minimum value of partitionColumn used to decide partition 
stride
+
+upperBound
+
+the maximum value of partitionColumn used to decide partition 
stride
+
+numPartitions
+
+the number of partitions. This, along with lowerBound 
(inclusive),
+upperBound (exclusive), form partition strides for generated WHERE
+clause expressions used to split the column partitionColumn 
evenly.
+This defaults to SparkContext.defaultParallelism when unset.
+
+predicates
+
+a list of conditions in the where clause; each one defines one partition
+
+...
+
+additional JDBC database connection named properties.
+
+
+
+
+Details
+
+Only one of partitionColumn or predicates should be set. Partitions of the 
table will be
+retrieved in parallel based on the numPartitions or by the 
predicates.
+
+Don't create too many partitions in parallel on a large cluster; otherwise 
Spark might crash
+your external database systems.
+
+
+
+Value
+
+SparkDataFrame
+
+
+
+Note
+
+read.jdbc since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D jdbcUrl <- "jdbc:mysql://localhost:3306/databasename"
+##D df <- read.jdbc(jdbcUrl, "table", predicates = list("field=123"), user = "username")
+##D df2 <- read.jdbc(jdbcUrl, "table2", partitionColumn = "index", lowerBound = 0,
+##D                  upperBound = 1, user = "username", password = "password")
+## End(Not run)
+
+
+
+[Package 

[38/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/spark.gbt.html
--
diff --git a/site/docs/2.1.2/api/R/spark.gbt.html 
b/site/docs/2.1.2/api/R/spark.gbt.html
new file mode 100644
index 000..98b2b03
--- /dev/null
+++ b/site/docs/2.1.2/api/R/spark.gbt.html
@@ -0,0 +1,244 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Gradient Boosted Tree 
Model for Regression and Classification
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+spark.gbt 
{SparkR}R Documentation
+
+Gradient Boosted Tree Model for Regression and Classification
+
+Description
+
+spark.gbt fits a Gradient Boosted Tree Regression model or 
Classification model on a
+SparkDataFrame. Users can call summary to get a summary of the 
fitted
+Gradient Boosted Tree model, predict to make predictions on new 
data, and
+write.ml/read.ml to save/load fitted models.
+For more details, see
+http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-regression;>
+GBT Regression and
+http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-classifier;>
+GBT Classification
+
+
+
+Usage
+
+
+spark.gbt(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.gbt(data, formula,
+  type = c("regression", "classification"), maxDepth = 5, maxBins = 32,
+  maxIter = 20, stepSize = 0.1, lossType = NULL, seed = NULL,
+  subsamplingRate = 1, minInstancesPerNode = 1, minInfoGain = 0,
+  checkpointInterval = 10, maxMemoryInMB = 256, cacheNodeIds = FALSE)
+
+## S4 method for signature 'GBTRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTClassificationModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'GBTClassificationModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'GBTRegressionModel'
+summary(object)
+
+## S4 method for signature 'GBTClassificationModel'
+summary(object)
+
+## S3 method for class 'summary.GBTRegressionModel'
+print(x, ...)
+
+## S3 method for class 'summary.GBTClassificationModel'
+print(x, ...)
+
+
+
+Arguments
+
+
+data
+
+a SparkDataFrame for training.
+
+formula
+
+a symbolic description of the model to be fitted. Currently only a few 
formula
+operators are supported, including '~', ':', '+', and '-'.
+
+...
+
+additional arguments passed to the method.
+
+type
+
+type of model, one of regression or classification, 
to fit
+
+maxDepth
+
+Maximum depth of the tree (>= 0).
+
+maxBins
+
+Maximum number of bins used for discretizing continuous features and for 
choosing
+how to split on features at each node. More bins give higher granularity. Must 
be
+>= 2 and >= number of categories in any categorical feature.
+
+maxIter
+
+Param for maximum number of iterations (>= 0).
+
+stepSize
+
+Param for Step size to be used for each iteration of optimization.
+
+lossType
+
+Loss function which GBT tries to minimize.
+For classification, must be logistic. For regression, must be one 
of
+squared (L2) and absolute (L1), default is 
squared.
+
+seed
+
+integer seed for random number generation.
+
+subsamplingRate
+
+Fraction of the training data used for learning each decision tree, in
+range (0, 1].
+
+minInstancesPerNode
+
+Minimum number of instances each child must have after split. If a
+split causes the left or right child to have fewer than
+minInstancesPerNode, the split will be discarded as invalid. Should be
+>= 1.
+
+minInfoGain
+
+Minimum information gain for a split to be considered at a tree node.
+
+checkpointInterval
+
+Param for the checkpoint interval (>= 1), or -1 to disable checkpointing.
+
+maxMemoryInMB
+
+Maximum memory in MB allocated to histogram aggregation.
+
+cacheNodeIds
+
+If FALSE, the algorithm will pass trees to executors to match instances with
+nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
+can speed up training of deeper trees. Users can set how often should the
+cache be checkpointed or disable it by setting checkpointInterval.
+
+object
+
+A fitted Gradient Boosted Tree regression model or classification model.
+
+newData
+
+a SparkDataFrame for testing.
+
+path
+
+The directory where the model is saved.
+
+overwrite
+
+Overwrites or not if the output path already exists. Default is FALSE
+which means throw exception if the output path exists.
+
+x
+
+summary object of Gradient Boosted Tree regression model or classification 
model
+returned by summary.
+
+
+
+
+Value
+
+spark.gbt returns a fitted Gradient Boosted Tree model.
+

[48/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/00frame_toc.html
--
diff --git a/site/docs/2.1.2/api/R/00frame_toc.html 
b/site/docs/2.1.2/api/R/00frame_toc.html
new file mode 100644
index 000..1c62835
--- /dev/null
+++ b/site/docs/2.1.2/api/R/00frame_toc.html
@@ -0,0 +1,406 @@
+
+
+
+
+
+R Documentation of SparkR
+
+
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+img = imgs[i];
+// center an image if it is the only element of its parent
+if (img.parentElement.childElementCount === 1)
+  img.parentElement.style.textAlign = 'center';
+  }
+};
+
+
+
+
+
+
+
+* {
+   font-family: "Trebuchet MS", "Lucida Grande", "Lucida Sans Unicode", 
"Lucida Sans", Arial, sans-serif;
+   font-size: 14px;
+}
+body {
+  padding: 0 5px; 
+  margin: 0 auto; 
+  width: 80%;
+  max-width: 60em; /* 960px */
+}
+
+h1, h2, h3, h4, h5, h6 {
+   color: #666;
+}
+h1, h2 {
+   text-align: center;
+}
+h1 {
+   font-size: x-large;
+}
+h2, h3 {
+   font-size: large;
+}
+h4, h6 {
+   font-style: italic;
+}
+h3 {
+   border-left: solid 5px #ddd;
+   padding-left: 5px;
+   font-variant: small-caps;
+}
+
+p img {
+   display: block;
+   margin: auto;
+}
+
+span, code, pre {
+   font-family: Monaco, "Lucida Console", "Courier New", Courier, 
monospace;
+}
+span.acronym {}
+span.env {
+   font-style: italic;
+}
+span.file {}
+span.option {}
+span.pkg {
+   font-weight: bold;
+}
+span.samp{}
+
+dt, p code {
+   background-color: #F7F7F7;
+}
+
+
+
+
+
+
+
+
+SparkR
+
+
+AFTSurvivalRegressionModel-class
+ALSModel-class
+GBTClassificationModel-class
+GBTRegressionModel-class
+GaussianMixtureModel-class
+GeneralizedLinearRegressionModel-class
+GroupedData
+IsotonicRegressionModel-class
+KMeansModel-class
+KSTest-class
+LDAModel-class
+LogisticRegressionModel-class
+MultilayerPerceptronClassificationModel-class
+NaiveBayesModel-class
+RandomForestClassificationModel-class
+RandomForestRegressionModel-class
+SparkDataFrame
+WindowSpec
+abs
+acos
+add_months
+alias
+approxCountDistinct
+approxQuantile
+arrange
+array_contains
+as.data.frame
+ascii
+asin
+atan
+atan2
+attach
+avg
+base64
+between
+bin
+bitwiseNOT
+bround
+cache
+cacheTable
+cancelJobGroup
+cast
+cbrt
+ceil
+clearCache
+clearJobGroup
+coalesce
+collect
+coltypes
+column
+columnfunctions
+columns
+concat
+concat_ws
+conv
+corr
+cos
+cosh
+count
+countDistinct
+cov
+covar_pop
+crc32
+createDataFrame
+createExternalTable
+createOrReplaceTempView
+crossJoin
+crosstab
+cume_dist
+dapply
+dapplyCollect
+date_add
+date_format
+date_sub
+datediff
+dayofmonth
+dayofyear
+decode
+dense_rank
+dim
+distinct
+drop
+dropDuplicates
+dropTempTable-deprecated
+dropTempView
+dtypes
+encode
+endsWith
+except
+exp
+explain
+explode
+expm1
+expr
+factorial
+filter
+first
+fitted
+floor
+format_number
+format_string
+freqItems
+from_unixtime
+fromutctimestamp
+gapply
+gapplyCollect
+generateAliasesForIntersectedCols
+getNumPartitions
+glm
+greatest
+groupBy
+hash
+hashCode
+head
+hex
+histogram
+hour
+hypot
+ifelse
+initcap
+insertInto
+install.spark
+instr
+intersect
+is.nan
+isLocal
+join
+kurtosis
+lag
+last
+last_day
+lead
+least
+length
+levenshtein
+limit
+lit
+locate
+log
+log10
+log1p
+log2
+lower
+lpad
+ltrim
+match
+max
+md5
+mean
+merge
+min
+minute
+monotonicallyincreasingid
+month
+months_between
+mutate
+nafunctions
+nanvl
+ncol
+negate
+next_day
+nrow
+ntile
+orderBy
+otherwise
+over
+partitionBy
+percent_rank
+persist
+pivot
+pmod
+posexplode
+predict
+print.jobj
+print.structField
+print.structType
+printSchema
+quarter
+rand
+randn
+randomSplit
+rangeBetween
+rank
+rbind
+read.df
+read.jdbc
+read.json
+read.ml
+read.orc
+read.parquet
+read.text
+regexp_extract
+regexp_replace
+registerTempTable-deprecated
+rename
+repartition
+reverse
+rint
+round
+row_number
+rowsBetween
+rpad
+rtrim
+sample
+sampleBy
+saveAsTable
+schema
+sd
+second
+select
+selectExpr
+setJobGroup
+setLogLevel
+sha1
+sha2
+shiftLeft
+shiftRight
+shiftRightUnsigned
+show
+showDF
+sign
+sin
+sinh
+size
+skewness
+sort_array
+soundex
+spark.addFile
+spark.als
+spark.gaussianMixture
+spark.gbt
+spark.getSparkFiles
+spark.getSparkFilesRootDirectory
+spark.glm
+spark.isoreg
+spark.kmeans
+spark.kstest
+spark.lapply
+spark.lda
+spark.logit
+spark.mlp
+spark.naiveBayes
+spark.randomForest
+spark.survreg
+sparkR.callJMethod
+sparkR.callJStatic
+sparkR.conf
+sparkR.init-deprecated
+sparkR.newJObject
+sparkR.session
+sparkR.session.stop
+sparkR.uiWebUrl
+sparkR.version
+sparkRHive.init-deprecated
+sparkRSQL.init-deprecated
+sparkpartitionid
+sql
+sqrt
+startsWith
+stddev_pop
+stddev_samp
+storageLevel
+str
+struct
+structField
+structType
+subset
+substr
+substring_index
+sum
+sumDistinct
+summarize
+summary
+tableNames
+tableToDF
+tables
+take
+tan
+tanh

[39/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/sd.html
--
diff --git a/site/docs/2.1.2/api/R/sd.html b/site/docs/2.1.2/api/R/sd.html
new file mode 100644
index 000..1018604
--- /dev/null
+++ b/site/docs/2.1.2/api/R/sd.html
@@ -0,0 +1,85 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: sd
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+sd {SparkR}R Documentation
+
+sd
+
+Description
+
+Aggregate function: alias for stddev_samp
+
+
+
+Usage
+
+
+sd(x, na.rm = FALSE)
+
+stddev(x)
+
+## S4 method for signature 'Column'
+sd(x)
+
+## S4 method for signature 'Column'
+stddev(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+na.rm
+
+currently not used.
+
+
+
+
+Note
+
+sd since 1.6.0
+
+stddev since 1.6.0
+
+
+
+See Also
+
+stddev_pop, stddev_samp
+
+Other agg_funcs: agg, avg,
+countDistinct, count,
+first, kurtosis,
+last, max,
+mean, min,
+skewness, stddev_pop,
+stddev_samp, sumDistinct,
+sum, var_pop,
+var_samp, var
+
+
+
+Examples
+
+## Not run: 
+##D stddev(df$c)
+##D select(df, stddev(df$age))
+##D agg(df, sd(df$age))
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/second.html
--
diff --git a/site/docs/2.1.2/api/R/second.html 
b/site/docs/2.1.2/api/R/second.html
new file mode 100644
index 000..25cf1e6
--- /dev/null
+++ b/site/docs/2.1.2/api/R/second.html
@@ -0,0 +1,71 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: second
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+second 
{SparkR}R Documentation
+
+second
+
+Description
+
+Extracts the seconds as an integer from a given date/timestamp/string.
+
+
+
+Usage
+
+
+second(x)
+
+## S4 method for signature 'Column'
+second(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+second since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+to_date, to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: second(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/select.html
--
diff --git a/site/docs/2.1.2/api/R/select.html 
b/site/docs/2.1.2/api/R/select.html
new file mode 100644
index 000..82cc5b6
--- /dev/null
+++ b/site/docs/2.1.2/api/R/select.html
@@ -0,0 +1,150 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Select
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+select 
{SparkR}R Documentation
+
+Select
+
+Description
+
+Selects a set of columns with names or Column expressions.
+
+
+
+Usage
+
+
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame'
+x$name
+
+## S4 replacement method for signature 'SparkDataFrame'
+x$name - value
+
+## S4 method for signature 'SparkDataFrame,character'
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,Column'
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,list'
+select(x, col)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+col
+
+a list of columns or single Column or name.
+
+...
+
+additional column(s) if only one column is specified in col.
+If more than one column is assigned in col, ...
+should be left empty.
+
+name
+
+name of a Column (without being wrapped by "").
+
+value
+
+a Column or an atomic vector in the length of 1 as literal value, or 
NULL.
+If NULL, the specified Column is dropped.
+
+
+
+
+Value
+
+A new SparkDataFrame with selected columns.
+
+
+
+Note
+
+$ since 1.4.0
+
+$- since 1.4.0
+
+select(SparkDataFrame, character) since 1.4.0
+
+select(SparkDataFrame, Column) since 1.4.0
+
+select(SparkDataFrame, list) since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,

[28/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html 
b/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
new file mode 100644
index 000..a54423b
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
@@ -0,0 +1,460 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+Accumulable (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class AccumulableR,T
+
+
+
+Object
+
+
+org.apache.spark.AccumulableR,T
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+Direct Known Subclasses:
+Accumulator
+
+
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+public class Accumulable<R,T>
+extends Object
+implements java.io.Serializable
+A data type that can be accumulated, i.e. has a commutative 
and associative "add" operation,
+ but where the result type, R, may be different from the element 
type being added, T.
+ 
+ You must define how to add data, and how to merge two of these together.  For 
some data types,
+ such as a counter, these might be the same operation. In that case, you can 
use the simpler
+ Accumulator. They won't always be the same, 
though -- e.g., imagine you are
+ accumulating a set. You will add items to the set, and you will union two 
sets together.
+ 
+ Operations are not thread-safe.
+ 
+ param:  id ID of this accumulator; for internal use only.
+ param:  initialValue initial value of accumulator
+ param:  param helper object defining how to add elements of type 
R and T
+ param:  name human-readable name for use in Spark's web UI
+ param:  countFailedValues whether to accumulate values from failed tasks. 
This is set to true
+  for system and time metrics like serialization time 
or bytes spilled,
+  and false for things with absolute values like 
number of input rows.
+  This should be used for internal metrics only.
+See Also:Serialized 
Form
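Since this page points to AccumulatorV2 as the replacement, here is a short, illustrative Java sketch of the newer API using the built-in LongAccumulator; the data and accumulator name are made up:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.util.LongAccumulator;
    import java.util.Arrays;

    public class AccumulatorV2Example {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setAppName("acc-v2").setMaster("local[2]"));

        // LongAccumulator is an AccumulatorV2 implementation provided by Spark.
        LongAccumulator evens = jsc.sc().longAccumulator("even count");

        jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5, 6))
           .foreach(n -> { if (n % 2 == 0) evens.add(1L); });

        // Read the merged value on the driver only.
        System.out.println("evens = " + evens.value());
        jsc.stop();
      }
    }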
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+Accumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+add(Tterm)
+Deprecated.
+Add more data to this accumulator / accumulable
+
+
+
+long
+id()
+Deprecated.
+
+
+
+R
+localValue()
+Deprecated.
+Get the current value of this accumulator from within a 
task.
+
+
+
+void
+merge(Rterm)
+Deprecated.
+Merge two accumulable objects together
+
+
+
+scala.OptionString
+name()
+Deprecated.
+
+
+
+void
+setValue(RnewValue)
+Deprecated.
+Set the accumulator's value.
+
+
+
+String
+toString()
+Deprecated.
+
+
+
+R
+value()
+Deprecated.
+Access the accumulator's current value; only allowed on 
driver.
+
+
+
+R
+zero()
+Deprecated.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+
+
+Accumulable
+publicAccumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+id
+publiclongid()
+Deprecated.
+
+
+
+
+
+
+
+name
+publicscala.OptionStringname()
+Deprecated.
+
+
+
+
+
+
+
+zero
+publicRzero()
+Deprecated.
+
+
+
+
+
+
+
+
+
+add
+publicvoidadd(Tterm)
+Deprecated.
+Add more data to this accumulator / accumulable
+Parameters:term - 
the data to add
+
+
+
+
+
+
+
+
+
+merge
+publicvoidmerge(Rterm)
+Deprecated.
+Merge two accumulable objects together
+ 
+ Normally, a user will not want to use this version, but will instead call 
add.
+Parameters:term - 
the other R that will get merged with this
+
+
+
+
+
+
+
+value
+publicRvalue()
+Deprecated.
+Access the accumulator's current value; only allowed on 
driver.
+Returns:(undocumented)
+
+
+
+
+
+
+
+localValue
+publicRlocalValue()
+Deprecated.
+Get the current value of this accumulator from within a 
task.
+ 
+ This is NOT the global value of the accumulator.  To get the global value 
after a
+ completed operation on the dataset, call value.
+ 
+ The typical use of this method is to 

[27/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
new file mode 100644
index 000..8a8d442
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
@@ -0,0 +1,365 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+AccumulatorParam.IntAccumulatorParam$ (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No 
Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
AccumulatorParam.IntAccumulatorParam$
+
+
+
+Object
+
+
+org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, AccumulableParamObject,Object, AccumulatorParamObject
+
+
+Enclosing interface:
+AccumulatorParamT
+
+
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+public static class AccumulatorParam.IntAccumulatorParam$
+extends Object
+implements AccumulatorParamObject
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+
+
+
+Nested classes/interfaces inherited from 
interfaceorg.apache.spark.AccumulatorParam
+AccumulatorParam.DoubleAccumulatorParam$, 
AccumulatorParam.FloatAccumulatorParam$, 
AccumulatorParam.IntAccumulatorParam$, AccumulatorParam.LongAccumulatorParam$, 
AccumulatorParam.StringAccumulatorParam$
+
+
+
+
+
+
+
+
+Field Summary
+
+Fields
+
+Modifier and Type
+Field and Description
+
+
+static AccumulatorParam.IntAccumulatorParam$
+MODULE$
+Deprecated.
+Static reference to the singleton instance of this Scala 
object.
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+AccumulatorParam.IntAccumulatorParam$()
+Deprecated.
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+addInPlace(intt1,
+  intt2)
+Deprecated.
+
+
+
+int
+zero(intinitialValue)
+Deprecated.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.AccumulatorParam
+addAccumulator
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.AccumulableParam
+addInPlace,
 zero
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Field Detail
+
+
+
+
+
+MODULE$
+public static finalAccumulatorParam.IntAccumulatorParam$ 
MODULE$
+Deprecated.
+Static reference to the singleton instance of this Scala 
object.
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+AccumulatorParam.IntAccumulatorParam$
+publicAccumulatorParam.IntAccumulatorParam$()
+Deprecated.
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+addInPlace
+publicintaddInPlace(intt1,
+ intt2)
+Deprecated.
+
+
+
+
+
+
+
+zero
+publicintzero(intinitialValue)
+Deprecated.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No 
Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
new file mode 100644
index 000..e2cd2c9
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
@@ -0,0 +1,365 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+AccumulatorParam.LongAccumulatorParam$ (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class JavaPairRDDK,V
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaPairRDDK,V
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLike<scala.Tuple2<K,V>,JavaPairRDD<K,V>>
+
+
+Direct Known Subclasses:
+JavaHadoopRDD, JavaNewHadoopRDD
+
+
+
+public class JavaPairRDD<K,V>
+extends Object
+See Also:Serialized
 Form
+
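For orientation, a brief, illustrative Java sketch of building a JavaPairRDD and using aggregateByKey as summarized below; the sample data is made up:

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;
    import java.util.Arrays;

    public class PairRDDExample {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
            new SparkConf().setAppName("pair-rdd").setMaster("local[2]"));

        JavaPairRDD<String, Integer> pairs = jsc.parallelizePairs(Arrays.asList(
            new Tuple2<>("a", 1), new Tuple2<>("b", 2), new Tuple2<>("a", 3)));

        // Sum the values per key, starting from the neutral "zero value" 0.
        JavaPairRDD<String, Integer> sums =
            pairs.aggregateByKey(0, Integer::sum, Integer::sum);

        sums.collectAsMap().forEach((k, v) -> System.out.println(k + " -> " + v));
        jsc.stop();
      }
    }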
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaPairRDD(RDDscala.Tuple2K,Vrdd,
+   scala.reflect.ClassTagKkClassTag,
+   scala.reflect.ClassTagVvClassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+
+
+UJavaPairRDDK,U
+aggregateByKey(UzeroValue,
+  Function2U,V,UseqFunc,
+  Function2U,U,UcombFunc)
+Aggregate the values of each key, using given combine 
functions and a neutral "zero value".
+
+
+
+UJavaPairRDDK,U
+aggregateByKey(UzeroValue,
+  intnumPartitions,
+  Function2U,V,UseqFunc,
+  Function2U,U,UcombFunc)
+Aggregate the values of each key, using given combine 
functions and a neutral "zero value".
+
+
+
+UJavaPairRDDK,U
+aggregateByKey(UzeroValue,
+  Partitionerpartitioner,
+  Function2U,V,UseqFunc,
+  Function2U,U,UcombFunc)
+Aggregate the values of each key, using given combine 
functions and a neutral "zero value".
+
+
+
+JavaPairRDDK,V
+cache()
+Persist this RDD with the default storage level 
(MEMORY_ONLY).
+
+
+
+static UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+
+
+static void
+checkpoint()
+
+
+scala.reflect.ClassTagscala.Tuple2K,V
+classTag()
+
+
+JavaPairRDDK,V
+coalesce(intnumPartitions)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+JavaPairRDDK,V
+coalesce(intnumPartitions,
+booleanshuffle)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+WJavaPairRDDK,scala.Tuple2IterableV,IterableW
+cogroup(JavaPairRDDK,Wother)
+For each key k in this or other, 
return a resulting RDD that contains a tuple with the
+ list of values for that key in this as well as 
other.
+
+
+
+WJavaPairRDDK,scala.Tuple2IterableV,IterableW
+cogroup(JavaPairRDDK,Wother,
+   intnumPartitions)
+For each key k in this or other, 
return a resulting RDD that contains a tuple with the
+ list of values for that key in this as well as 
other.
+
+
+
+WJavaPairRDDK,scala.Tuple2IterableV,IterableW
+cogroup(JavaPairRDDK,Wother,
+   Partitionerpartitioner)
+For each key k in this or other, 
return a resulting RDD that contains a tuple with the
+ list of values for that key in this as well as 
other.
+
+
+
+W1,W2JavaPairRDDK,scala.Tuple3IterableV,IterableW1,IterableW2
+cogroup(JavaPairRDDK,W1other1,
+   JavaPairRDDK,W2other2)
+For each key k in this or other1 
or other2, return a resulting RDD that contains a
+ tuple with the list of values for that key in this, 
other1 and other2.
+
+
+
+W1,W2JavaPairRDDK,scala.Tuple3IterableV,IterableW1,IterableW2
+cogroup(JavaPairRDDK,W1other1,
+   JavaPairRDDK,W2other2,
+   intnumPartitions)
+For each key k in this or other1 
or other2, return a resulting RDD that contains a
+ tuple with the list of values for that key in this, 
other1 and other2.
+
+
+
+W1,W2,W3JavaPairRDDK,scala.Tuple4IterableV,IterableW1,IterableW2,IterableW3
+cogroup(JavaPairRDDK,W1other1,
+   JavaPairRDDK,W2other2,
+   JavaPairRDDK,W3other3)
+For each key k in this or other1 
or other2 or other3,
+ return a resulting RDD that contains a tuple with the list of values
+ for that key 

[32/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/constant-values.html
--
diff --git a/site/docs/2.1.2/api/java/constant-values.html 
b/site/docs/2.1.2/api/java/constant-values.html
new file mode 100644
index 000..2782c51
--- /dev/null
+++ b/site/docs/2.1.2/api/java/constant-values.html
@@ -0,0 +1,237 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+Constant Field Values (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev
+Next
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+
+
+
+Constant Field Values
+Contents
+
+org.apache.*
+
+
+
+
+
+org.apache.*
+
+
+
+org.apache.spark.launcher.SparkLauncher
+
+Modifier and Type  Constant Field  Value
+public static final String  CHILD_CONNECTION_TIMEOUT  "spark.launcher.childConectionTimeout"
+public static final String  CHILD_PROCESS_LOGGER_NAME  "spark.launcher.childProcLoggerName"
+public static final String  DEPLOY_MODE  "spark.submit.deployMode"
+public static final String  DRIVER_EXTRA_CLASSPATH  "spark.driver.extraClassPath"
+public static final String  DRIVER_EXTRA_JAVA_OPTIONS  "spark.driver.extraJavaOptions"
+public static final String  DRIVER_EXTRA_LIBRARY_PATH  "spark.driver.extraLibraryPath"
+public static final String  DRIVER_MEMORY  "spark.driver.memory"
+public static final String  EXECUTOR_CORES  "spark.executor.cores"
+public static final String  EXECUTOR_EXTRA_CLASSPATH  "spark.executor.extraClassPath"
+public static final String  EXECUTOR_EXTRA_JAVA_OPTIONS  "spark.executor.extraJavaOptions"
+public static final String  EXECUTOR_EXTRA_LIBRARY_PATH  "spark.executor.extraLibraryPath"
+public static final String  EXECUTOR_MEMORY  "spark.executor.memory"
+public static final String  NO_RESOURCE  "spark-internal"
+public static final String  SPARK_MASTER  "spark.master"
+
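These keys are the ones typically passed to SparkLauncher.setConf when launching an application programmatically. A short, illustrative Java sketch; the jar path and main class are placeholders, and SPARK_HOME is assumed to be set in the environment:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LaunchExample {
      public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/path/to/app.jar")        // placeholder jar
            .setMainClass("com.example.MyApp")         // placeholder main class
            .setMaster("local[*]")
            .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
            .setConf(SparkLauncher.EXECUTOR_MEMORY, "2g")
            .startApplication();

        // The handle reports the launched application's state asynchronously.
        System.out.println("launcher state: " + handle.getState());
      }
    }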
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/deprecated-list.html
--
diff --git a/site/docs/2.1.2/api/java/deprecated-list.html 
b/site/docs/2.1.2/api/java/deprecated-list.html
new file mode 100644
index 000..51c3c20
--- /dev/null
+++ b/site/docs/2.1.2/api/java/deprecated-list.html
@@ -0,0 +1,611 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+Deprecated List (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev
+Next
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+
+
+
+Deprecated API
+Contents
+
+Deprecated Interfaces
+Deprecated Classes
+Deprecated Methods
+Deprecated Constructors
+
+
+
+
+
+
+
+
+Deprecated Interfaces
+
+Interface and Description
+
+
+
+org.apache.spark.AccumulableParam
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+
+
+
+
+
+
+
+
+Deprecated Classes
+
+Class and Description
+
+
+
+org.apache.spark.Accumulable
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.Accumulator
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.LongAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.StringAccumulatorParam$
+use 

[35/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/window.html
--
diff --git a/site/docs/2.1.2/api/R/window.html 
b/site/docs/2.1.2/api/R/window.html
new file mode 100644
index 000..fc41b25
--- /dev/null
+++ b/site/docs/2.1.2/api/R/window.html
@@ -0,0 +1,122 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: window
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+window 
{SparkR}R Documentation
+
+window
+
+Description
+
+Bucketize rows into one or more time windows given a timestamp specifying 
column. Window
+starts are inclusive but the window ends are exclusive, e.g. 12:05 will be in 
the window
+[12:05,12:10) but not in [12:00,12:05). Windows can support microsecond 
precision. Windows in
+the order of months are not supported.
+
+
+
+Usage
+
+
+window(x, ...)
+
+## S4 method for signature 'Column'
+window(x, windowDuration, slideDuration = NULL,
+  startTime = NULL)
+
+
+
+Arguments
+
+
+x
+
+a time Column. Must be of TimestampType.
+
+...
+
+further arguments to be passed to or from other methods.
+
+windowDuration
+
+a string specifying the width of the window, e.g. '1 second',
+'1 day 12 hours', '2 minutes'. Valid interval strings are 'week',
+'day', 'hour', 'minute', 'second', 'millisecond', 'microsecond'. Note that
+the duration is a fixed length of time, and does not vary over time
+according to a calendar. For example, '1 day' always means 86,400,000
+milliseconds, not a calendar day.
+
+slideDuration
+
+a string specifying the sliding interval of the window. Same format as
+windowDuration. A new window will be generated every
+slideDuration. Must be less than or equal to
+the windowDuration. This duration is likewise absolute, and does 
not
+vary according to a calendar.
+
+startTime
+
+the offset with respect to 1970-01-01 00:00:00 UTC with which to start
+window intervals. For example, in order to have hourly tumbling windows
+that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15... provide
+startTime as "15 minutes".
+
+
+
+
+Value
+
+An output column of struct called 'window' by default with the nested 
columns 'start'
+and 'end'.
+
+
+
+Note
+
+window since 2.0.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+year
+
+
+
+Examples
+
+## Not run: 
+##D   # One minute windows every 15 seconds 10 seconds after the minute, e.g. 09:00:10-09:01:10,
+##D   # 09:00:25-09:01:25, 09:00:40-09:01:40, ...
+##D   window(df$time, "1 minute", "15 seconds", "10 seconds")
+##D 
+##D   # One minute tumbling windows 15 seconds after the minute, e.g. 09:00:15-09:01:15,
+##D   # 09:01:15-09:02:15...
+##D   window(df$time, "1 minute", startTime = "15 seconds")
+##D 
+##D   # Thirty-second windows every 10 seconds, e.g. 09:00:00-09:00:30, 09:00:10-09:00:40, ...
+##D   window(df$time, "30 seconds", "10 seconds")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/windowOrderBy.html
--
diff --git a/site/docs/2.1.2/api/R/windowOrderBy.html 
b/site/docs/2.1.2/api/R/windowOrderBy.html
new file mode 100644
index 000..19a48fe
--- /dev/null
+++ b/site/docs/2.1.2/api/R/windowOrderBy.html
@@ -0,0 +1,71 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: windowOrderBy
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+windowOrderBy {SparkR}R 
Documentation
+
+windowOrderBy
+
+Description
+
+Creates a WindowSpec with the ordering defined.
+
+
+
+Usage
+
+
+windowOrderBy(col, ...)
+
+## S4 method for signature 'character'
+windowOrderBy(col, ...)
+
+## S4 method for signature 'Column'
+windowOrderBy(col, ...)
+
+
+
+Arguments
+
+
+col
+
+A column name or Column by which rows are ordered within
+windows.
+
+...
+
+Optional column names or Columns in addition to col, by
+which rows are ordered within windows.
+
+
+
+
+Note
+
+windowOrderBy(character) since 2.0.0
+
+windowOrderBy(Column) since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D   ws <- windowOrderBy("key1", "key2")
+##D   df1 <- select(df, over(lead("value", 1), ws))
+##D 
+##D   ws <- 

[05/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/CoGroupFunction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/CoGroupFunction.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/CoGroupFunction.html
new file mode 100644
index 000..172a23f
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/CoGroupFunction.html
@@ -0,0 +1,224 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+CoGroupFunction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface 
CoGroupFunctionK,V1,V2,R
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface CoGroupFunction<K,V1,V2,R>
+extends java.io.Serializable
+A function that returns zero or more output records from 
each grouping key and its values from 2
+ Datasets.
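An illustrative Java sketch of implementing this interface and passing it to KeyValueGroupedDataset.cogroup; the datasets and the key extractor are invented for the example:

    import org.apache.spark.api.java.function.CoGroupFunction;
    import org.apache.spark.api.java.function.MapFunction;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.SparkSession;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class CoGroupExample {
      public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("cogroup").master("local[2]").getOrCreate();

        Dataset<String> left = spark.createDataset(
            Arrays.asList("apple", "avocado", "banana"), Encoders.STRING());
        Dataset<String> right = spark.createDataset(
            Arrays.asList("apricot", "blueberry"), Encoders.STRING());

        // Group both datasets by first letter, then combine the two groups per key.
        MapFunction<String, String> firstLetter = s -> s.substring(0, 1);

        CoGroupFunction<String, String, String, String> combine =
            (String key, Iterator<String> l, Iterator<String> r) -> {
              int leftCount = 0, rightCount = 0;
              while (l.hasNext()) { l.next(); leftCount++; }
              while (r.hasNext()) { r.next(); rightCount++; }
              List<String> out = new ArrayList<>();
              out.add(key + ": " + leftCount + " left, " + rightCount + " right");
              return out.iterator();   // zero or more output records per key
            };

        Dataset<String> result = left.groupByKey(firstLetter, Encoders.STRING())
            .cogroup(right.groupByKey(firstLetter, Encoders.STRING()), combine, Encoders.STRING());

        result.show(false);
        spark.stop();
      }
    }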
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.IteratorR
+call(Kkey,
+java.util.IteratorV1left,
+java.util.IteratorV2right)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+java.util.IteratorRcall(Kkey,
+ java.util.IteratorV1left,
+ java.util.IteratorV2right)
+   throws Exception
+Throws:
+Exception
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
new file mode 100644
index 000..628dfa8
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
@@ -0,0 +1,219 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+DoubleFlatMapFunction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface 
DoubleFlatMapFunctionT
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface DoubleFlatMapFunction<T>
+extends java.io.Serializable
+A function that returns zero or more records of type Double 
from each input record.
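A minimal Java sketch (the JavaRDD lines of comma-separated numbers is an assumption, not part of this page):

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.spark.api.java.JavaDoubleRDD;
    import org.apache.spark.api.java.function.DoubleFlatMapFunction;

    // Each input line may yield zero or more Doubles.
    JavaDoubleRDD values = lines.flatMapToDouble(
        (DoubleFlatMapFunction<String>) line -> {
          List<Double> out = new ArrayList<>();
          for (String token : line.split(",")) {
            if (!token.trim().isEmpty()) {
              out.add(Double.parseDouble(token.trim()));
            }
          }
          return out.iterator();
        });
    System.out.println("sum = " + values.sum());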
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.Iterator<Double>
+call(T t)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+java.util.Iterator<Double> call(T t)
+throws Exception
+Throws:
+Exception
+org.apache.spark
+Class TaskKilled
+
+
+
+Object
+
+
+org.apache.spark.TaskKilled
+
+
+
+
+
+
+
+
+public class TaskKilled
+extends Object
+:: DeveloperApi ::
+ Task was killed intentionally and needs to be rescheduled.
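One way to observe this end reason (a sketch under the assumption that a JavaSparkContext named jsc already exists; not part of the generated page) is to register a listener and inspect the reason of each finished task:

    import org.apache.spark.TaskEndReason;
    import org.apache.spark.scheduler.SparkListener;
    import org.apache.spark.scheduler.SparkListenerTaskEnd;

    // Log every task end reason; a killed task reports TaskKilled here and is rescheduled.
    jsc.sc().addSparkListener(new SparkListener() {
      @Override
      public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
        TaskEndReason reason = taskEnd.reason();
        System.out.println("stage " + taskEnd.stageId() + " task ended: " + reason);
      }
    });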
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+TaskKilled()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type  Method and Description
+
+abstract static boolean  canEqual(Object that)
+static boolean  countTowardsTaskFailures()
+abstract static boolean  equals(Object that)
+abstract static int  productArity()
+abstract static Object  productElement(int n)
+static scala.collection.Iterator<Object>  productIterator()
+static String  productPrefix()
+static String  toErrorString()
+
+Methods inherited from class Object
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+TaskKilled
+public TaskKilled()
+
+
+Method Detail
+
+toErrorString
+public static String toErrorString()
+
+countTowardsTaskFailures
+public static boolean countTowardsTaskFailures()
+
+canEqual
+public abstract static boolean canEqual(Object that)
+
+equals
+public abstract static boolean equals(Object that)
+
+productElement
+public abstract static Object productElement(int n)
+
+productArity
+public abstract static int productArity()
+
+productIterator
+public static scala.collection.Iterator<Object> productIterator()
+
+productPrefix
+public static String productPrefix()

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/TaskKilledException.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/TaskKilledException.html 
b/site/docs/2.1.2/api/java/org/apache/spark/TaskKilledException.html
new file mode 100644
index 000..40b1e9f
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/TaskKilledException.html
@@ -0,0 +1,259 @@
+TaskKilledException (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class 
TaskKilledException
+
+
+
+Object
+
+
+Throwable
+
+
+Exception
+
+
+RuntimeException
+
+
+org.apache.spark.TaskKilledException
+
+
+
+
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+
+public class 

[30/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/index.html
--
diff --git a/site/docs/2.1.2/api/java/index.html 
b/site/docs/2.1.2/api/java/index.html
new file mode 100644
index 000..08d3ceb
--- /dev/null
+++ b/site/docs/2.1.2/api/java/index.html
@@ -0,0 +1,75 @@
+Spark 2.1.2 JavaDoc
+
+tmpTargetPage = "" + window.location.search;
+if (tmpTargetPage != "" && tmpTargetPage != "undefined")
+tmpTargetPage = tmpTargetPage.substring(1);
+if (tmpTargetPage.indexOf(":") != -1 || (tmpTargetPage != "" && 
!validURL(tmpTargetPage)))
+tmpTargetPage = "undefined";
+targetPage = tmpTargetPage;
+function validURL(url) {
+try {
+url = decodeURIComponent(url);
+}
+catch (error) {
+return false;
+}
+var pos = url.indexOf(".html");
+if (pos == -1 || pos != url.length - 5)
+return false;
+var allowNumber = false;
+var allowSep = false;
+var seenDot = false;
+for (var i = 0; i < url.length - 5; i++) {
+var ch = url.charAt(i);
+if ('a' <= ch && ch <= 'z' ||
+'A' <= ch && ch <= 'Z' ||
+ch == '$' ||
+ch == '_' ||
+ch.charCodeAt(0) > 127) {
+allowNumber = true;
+allowSep = true;
+} else if ('0' <= ch && ch <= '9'
+|| ch == '-') {
+if (!allowNumber)
+ return false;
+} else if (ch == '/' || ch == '.') {
+if (!allowSep)
+return false;
+allowNumber = false;
+allowSep = false;
+if (ch == '.')
+ seenDot = true;
+if (ch == '/' && seenDot)
+ return false;
+} else {
+return false;
+}
+}
+return true;
+}
+function loadFrames() {
+if (targetPage != "" && targetPage != "undefined")
+ top.classFrame.location = top.targetPage;
+}
+
+Frame Alert
+This document is designed to be viewed using the frames feature. If you see 
this message, you are using a non-frame-capable web client. Link to Non-frame version.
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/lib/api-javadocs.js
--
diff --git a/site/docs/2.1.2/api/java/lib/api-javadocs.js 
b/site/docs/2.1.2/api/java/lib/api-javadocs.js
new file mode 100644
index 000..ead13d6
--- /dev/null
+++ b/site/docs/2.1.2/api/java/lib/api-javadocs.js
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* Dynamically injected post-processing code for the API docs */
+
+$(document).ready(function() {
+  addBadges(":: AlphaComponent ::", 'Alpha 
Component');
+  addBadges(":: DeveloperApi ::", 'Developer 
API');
+  addBadges(":: Experimental ::", 'Experimental');
+});
+
+function addBadges(tag, html) {
+  var tags = $(".block:contains(" + tag + ")")
+
+  // Remove identifier tags
+  tags.each(function(index) {
+var oldHTML = $(this).html();
+var newHTML = oldHTML.replace(tag, "");
+$(this).html(newHTML);
+  });
+
+  // Add html badge tags
+  tags.each(function(index) {
+if ($(this).parent().is('td.colLast')) {
+  $(this).parent().prepend(html);
+} else if ($(this).parent('li.blockList')
+  .parent('ul.blockList')
+  .parent('div.description')
+  .parent().is('div.contentContainer')) {
+  var contentContainer = $(this).parent('li.blockList')
+.parent('ul.blockList')
+.parent('div.description')
+.parent('div.contentContainer')
+  var header = contentContainer.prev('div.header');
+  if (header.length > 0) {
+

[49/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/00Index.html
--
diff --git a/site/docs/2.1.2/api/R/00Index.html 
b/site/docs/2.1.2/api/R/00Index.html
new file mode 100644
index 000..541a952
--- /dev/null
+++ b/site/docs/2.1.2/api/R/00Index.html
@@ -0,0 +1,1585 @@
+R: R Frontend for Apache Spark
+http://www.w3.org/1999/xhtml;>
+R: R Frontend for Apache Spark
+
+
+
+ R Frontend for Apache Spark
+Documentation for package SparkR version 2.1.2
+
+DESCRIPTION file.
+
+
+Help Pages
+
+
+
+A
+B
+C
+D
+E
+F
+G
+H
+I
+J
+K
+L
+M
+N
+O
+P
+Q
+R
+S
+T
+U
+V
+W
+Y
+misc
+
+
+
+-- A --
+
+
+abs
+abs
+abs-method
+abs
+acos
+acos
+acos-method
+acos
+add_months
+add_months
+add_months-method
+add_months
+AFTSurvivalRegressionModel-class
+S4 class that represents an AFTSurvivalRegressionModel
+agg
+summarize
+agg-method
+summarize
+alias
+alias
+alias-method
+alias
+ALSModel-class
+S4 class that represents an ALSModel
+approxCountDistinct
+Returns the approximate number of distinct items in a group
+approxCountDistinct-method
+Returns the approximate number of distinct items in a group
+approxQuantile
+Calculates the approximate quantiles of a numerical column of a 
SparkDataFrame
+approxQuantile-method
+Calculates the approximate quantiles of a numerical column of a 
SparkDataFrame
+arrange
+Arrange Rows by Variables
+arrange-method
+Arrange Rows by Variables
+array_contains
+array_contains
+array_contains-method
+array_contains
+as.data.frame
+Download data from a SparkDataFrame into a R data.frame
+as.data.frame-method
+Download data from a SparkDataFrame into a R data.frame
+as.DataFrame
+Create a SparkDataFrame
+as.DataFrame.default
+Create a SparkDataFrame
+asc
+A set of operations working with SparkDataFrame columns
+ascii
+ascii
+ascii-method
+ascii
+asin
+asin
+asin-method
+asin
+atan
+atan
+atan-method
+atan
+atan2
+atan2
+atan2-method
+atan2
+attach
+Attach SparkDataFrame to R search path
+attach-method
+Attach SparkDataFrame to R search path
+avg
+avg
+avg-method
+avg
+
+
+-- B --
+
+
+base64
+base64
+base64-method
+base64
+between
+between
+between-method
+between
+bin
+bin
+bin-method
+bin
+bitwiseNOT
+bitwiseNOT
+bitwiseNOT-method
+bitwiseNOT
+bround
+bround
+bround-method
+bround
+
+
+-- C --
+
+
+cache
+Cache
+cache-method
+Cache
+cacheTable
+Cache Table
+cacheTable.default
+Cache Table
+cancelJobGroup
+Cancel active jobs for the specified group
+cancelJobGroup.default
+Cancel active jobs for the specified group
+cast
+Casts the column to a different data type.
+cast-method
+Casts the column to a different data type.
+cbrt
+cbrt
+cbrt-method
+cbrt
+ceil
+Computes the ceiling of the given value
+ceil-method
+Computes the ceiling of the given value
+ceiling
+Computes the ceiling of the given value
+ceiling-method
+Computes the ceiling of the given value
+clearCache
+Clear Cache
+clearCache.default
+Clear Cache
+clearJobGroup
+Clear current job group ID and its description
+clearJobGroup.default
+Clear current job group ID and its description
+coalesce
+Coalesce
+coalesce-method
+Coalesce
+collect
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+collect-method
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+colnames
+Column Names of SparkDataFrame
+colnames-method
+Column Names of SparkDataFrame
+colnames<-
+Column Names of SparkDataFrame
+colnames<--method
+Column Names of SparkDataFrame
+coltypes
+coltypes
+coltypes-method
+coltypes
+coltypes<-
+coltypes
+coltypes<--method
+coltypes
+column
+S4 class that represents a SparkDataFrame column
+Column-class
+S4 class that represents a SparkDataFrame column
+column-method
+S4 class that represents a SparkDataFrame column
+columnfunctions
+A set of operations working with SparkDataFrame columns
+columns
+Column Names of SparkDataFrame
+columns-method
+Column Names of SparkDataFrame
+concat
+concat
+concat-method
+concat
+concat_ws
+concat_ws
+concat_ws-method
+concat_ws
+contains
+A set of operations working with SparkDataFrame columns
+conv
+conv
+conv-method
+conv
+corr
+corr
+corr-method
+corr
+cos
+cos
+cos-method
+cos
+cosh
+cosh
+cosh-method
+cosh
+count
+Count
+count-method
+Count
+count-method
+Returns the number of rows in a SparkDataFrame
+countDistinct
+Count Distinct Values
+countDistinct-method
+Count Distinct Values
+cov
+cov
+cov-method
+cov
+covar_pop
+covar_pop
+covar_pop-method
+covar_pop
+covar_samp
+cov
+covar_samp-method
+cov
+crc32
+crc32
+crc32-method
+crc32
+createDataFrame
+Create a 

[16/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html 
b/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
new file mode 100644
index 000..ffb5849
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
@@ -0,0 +1,323 @@
+StopMapOutputTracker (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class 
StopMapOutputTracker
+
+
+
+Object
+
+
+org.apache.spark.StopMapOutputTracker
+
+
+
+
+
+
+
+
+public class StopMapOutputTracker
+extends Object
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+StopMapOutputTracker()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type  Method and Description
+
+abstract static boolean  canEqual(Object that)
+abstract static boolean  equals(Object that)
+abstract static int  productArity()
+abstract static Object  productElement(int n)
+static scala.collection.Iterator<Object>  productIterator()
+static String  productPrefix()
+
+Methods inherited from class Object
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+StopMapOutputTracker
+public StopMapOutputTracker()
+
+
+Method Detail
+
+canEqual
+public abstract static boolean canEqual(Object that)
+
+equals
+public abstract static boolean equals(Object that)
+
+productElement
+public abstract static Object productElement(int n)
+
+productArity
+public abstract static int productArity()
+
+productIterator
+public static scala.collection.Iterator<Object> productIterator()
+
+productPrefix
+public static String productPrefix()

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/Success.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/Success.html 
b/site/docs/2.1.2/api/java/org/apache/spark/Success.html
new file mode 100644
index 000..830b4d8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/Success.html
@@ -0,0 +1,325 @@
+Success (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class Success
+
+
+
+Object
+
+
+org.apache.spark.Success
+
+
+
+
+
+
+
+
+public class Success
+extends Object
+:: DeveloperApi ::
+ Task succeeded.
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+Success()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Object that)
+
+
+abstract static boolean
+equals(Object that)
+
+
+abstract static int
+productArity()
+
+
+abstract static 

[18/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/SparkEnv.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkEnv.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkEnv.html
new file mode 100644
index 000..bdb8d4d
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkEnv.html
@@ -0,0 +1,478 @@
+SparkEnv (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class SparkEnv
+
+
+
+Object
+
+
+org.apache.spark.SparkEnv
+
+
+
+
+
+
+
+
+public class SparkEnv
+extends Object
+:: DeveloperApi ::
+ Holds all the runtime environment objects for a running Spark instance 
(either master or worker),
+ including the serializer, RpcEnv, block manager, map output tracker, etc. 
Currently
+ Spark code finds the SparkEnv through a global variable, so all the threads 
can access the same
+ SparkEnv. It can be accessed by SparkEnv.get (e.g. after creating a 
SparkContext).
+ 
+ NOTE: This is not intended for external use. This is exposed for Shark and 
may be made private
+   in a future release.
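A minimal sketch of the lookup described above (for code already running inside a driver or executor; the printed configuration key is only an example):

    import org.apache.spark.SparkEnv;

    SparkEnv env = SparkEnv.get();          // null if no Spark instance is running in this JVM
    if (env != null) {
      System.out.println("executor id: " + env.executorId());
      System.out.println("app name:    " + env.conf().get("spark.app.name"));
    }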
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkEnv(StringexecutorId,
+org.apache.spark.rpc.RpcEnvrpcEnv,
+Serializerserializer,
+SerializerclosureSerializer,
+org.apache.spark.serializer.SerializerManagerserializerManager,
+org.apache.spark.MapOutputTrackermapOutputTracker,
+org.apache.spark.shuffle.ShuffleManagershuffleManager,
+org.apache.spark.broadcast.BroadcastManagerbroadcastManager,
+org.apache.spark.storage.BlockManagerblockManager,
+org.apache.spark.SecurityManagersecurityManager,
+org.apache.spark.metrics.MetricsSystemmetricsSystem,
+org.apache.spark.memory.MemoryManagermemoryManager,
+
org.apache.spark.scheduler.OutputCommitCoordinatoroutputCommitCoordinator,
+SparkConfconf)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+org.apache.spark.storage.BlockManager
+blockManager()
+
+
+org.apache.spark.broadcast.BroadcastManager
+broadcastManager()
+
+
+Serializer
+closureSerializer()
+
+
+SparkConf
+conf()
+
+
+String
+executorId()
+
+
+static SparkEnv
+get()
+Returns the SparkEnv.
+
+
+
+org.apache.spark.MapOutputTracker
+mapOutputTracker()
+
+
+org.apache.spark.memory.MemoryManager
+memoryManager()
+
+
+org.apache.spark.metrics.MetricsSystem
+metricsSystem()
+
+
+org.apache.spark.scheduler.OutputCommitCoordinator
+outputCommitCoordinator()
+
+
+org.apache.spark.SecurityManager
+securityManager()
+
+
+Serializer
+serializer()
+
+
+org.apache.spark.serializer.SerializerManager
+serializerManager()
+
+
+static void
+set(SparkEnve)
+
+
+org.apache.spark.shuffle.ShuffleManager
+shuffleManager()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+SparkEnv
+publicSparkEnv(StringexecutorId,
+org.apache.spark.rpc.RpcEnvrpcEnv,
+Serializerserializer,
+SerializerclosureSerializer,
+org.apache.spark.serializer.SerializerManagerserializerManager,
+org.apache.spark.MapOutputTrackermapOutputTracker,
+org.apache.spark.shuffle.ShuffleManagershuffleManager,
+org.apache.spark.broadcast.BroadcastManagerbroadcastManager,
+org.apache.spark.storage.BlockManagerblockManager,
+org.apache.spark.SecurityManagersecurityManager,
+org.apache.spark.metrics.MetricsSystemmetricsSystem,
+org.apache.spark.memory.MemoryManagermemoryManager,
+
org.apache.spark.scheduler.OutputCommitCoordinatoroutputCommitCoordinator,
+SparkConfconf)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+set
+public static void set(SparkEnv e)
+
+
+get
+public static SparkEnv get()
+Returns the SparkEnv.
+Returns: (undocumented)
+
+
+executorId
+public String executorId()
+
+
+serializer
+public Serializer serializer()
+
+
+closureSerializer
+public Serializer closureSerializer()
+
+
+serializerManager

[04/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
new file mode 100644
index 000..35cf9d7
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
@@ -0,0 +1,217 @@
+Function0 (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface Function0<R>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface Function0<R>
+extends java.io.Serializable
+A zero-argument function that returns an R.
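A typical place this shows up is checkpoint recovery in Spark Streaming, where a Function0 builds the context only when no checkpoint exists. A sketch (the checkpoint directory, app name, and batch interval are illustrative assumptions):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.function.Function0;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;

    Function0<JavaStreamingContext> createContext = () -> {
      SparkConf conf = new SparkConf().setAppName("checkpointed-stream");
      JavaStreamingContext ssc = new JavaStreamingContext(conf, Durations.seconds(10));
      ssc.checkpoint("/tmp/stream-checkpoint");   // illustrative path
      return ssc;
    };

    // Reuses the checkpointed context if present, otherwise calls createContext.call().
    JavaStreamingContext context =
        JavaStreamingContext.getOrCreate("/tmp/stream-checkpoint", createContext);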
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R
+call()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+call
+R call()
+   throws Exception
+Throws:
+Exception
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
new file mode 100644
index 000..ef10f45
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
@@ -0,0 +1,221 @@
+Function2 (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface Function2<T1,T2,R>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface Function2<T1,T2,R>
+extends java.io.Serializable
+A two-argument function that takes arguments of type T1 and 
T2 and returns an R.
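The classic use is a reduce; a short sketch, assuming an existing JavaSparkContext named jsc:

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.function.Function2;

    JavaRDD<Integer> nums = jsc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
    // T1 = T2 = R = Integer here: combine two partial values into one.
    Integer total = nums.reduce((Function2<Integer, Integer, Integer>) (a, b) -> a + b);
    System.out.println("total = " + total);   // 15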
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R
+call(T1 v1,
+T2 v2)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+R call(T1 v1,
+ T2 v2)
+   throws Exception
+Throws:
+Exception
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function3.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function3.html 

[36/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/summary.html
--
diff --git a/site/docs/2.1.2/api/R/summary.html 
b/site/docs/2.1.2/api/R/summary.html
new file mode 100644
index 000..d074b00
--- /dev/null
+++ b/site/docs/2.1.2/api/R/summary.html
@@ -0,0 +1,132 @@
+R: summary
+
+
+
+
+
+describe {SparkR}    R Documentation
+
+summary
+
+Description
+
+Computes statistics for numeric and string columns.
+If no columns are given, this function computes statistics for all numerical 
or string columns.
+
+
+
+Usage
+
+
+describe(x, col, ...)
+
+summary(object, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+describe(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,ANY'
+describe(x)
+
+## S4 method for signature 'SparkDataFrame'
+summary(object, ...)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame to be computed.
+
+col
+
+a string of name.
+
+...
+
+additional expressions.
+
+object
+
+a SparkDataFrame to be summarized.
+
+
+
+
+Value
+
+A SparkDataFrame.
+
+
+
+Note
+
+describe(SparkDataFrame, character) since 1.4.0
+
+describe(SparkDataFrame) since 1.4.0
+
+summary(SparkDataFrame) since 1.5.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+histogram, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D describe(df)
+##D describe(df, "col1")
+##D describe(df, "col1", "col2")
+## End(Not run)
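For comparison, roughly the same statistics are available from the Java Dataset API through describe; a sketch, assuming an existing Dataset<Row> named df with columns col1 and col2:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    Dataset<Row> stats = df.describe("col1", "col2");   // count, mean, stddev, min, max
    stats.show();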
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/tableNames.html
--
diff --git a/site/docs/2.1.2/api/R/tableNames.html 
b/site/docs/2.1.2/api/R/tableNames.html
new file mode 100644
index 000..cf05272
--- /dev/null
+++ b/site/docs/2.1.2/api/R/tableNames.html
@@ -0,0 +1,61 @@
+R: Table Names
+
+
+
+
+
+tableNames {SparkR}    R Documentation
+
+Table Names
+
+Description
+
+Returns the names of tables in the given database as an array.
+
+
+
+Usage
+
+
+## Default S3 method:
+tableNames(databaseName = NULL)
+
+
+
+Arguments
+
+
+databaseName
+
+name of the database
+
+
+
+
+Value
+
+a list of table names
+
+
+
+Note
+
+tableNames since 1.4.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D tableNames("hive")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/tableToDF.html
--
diff --git a/site/docs/2.1.2/api/R/tableToDF.html 
b/site/docs/2.1.2/api/R/tableToDF.html
new file mode 100644
index 000..3164dae
--- /dev/null
+++ b/site/docs/2.1.2/api/R/tableToDF.html
@@ -0,0 +1,64 @@
+R: Create a SparkDataFrame from a SparkSQL Table
+
+
+
+
+
+tableToDF {SparkR}    R Documentation
+
+Create a SparkDataFrame from a SparkSQL Table
+
+Description
+
+Returns the specified Table as a SparkDataFrame.  The Table must have 
already been registered
+in the SparkSession.
+
+
+
+Usage
+
+
+tableToDF(tableName)
+
+
+
+Arguments
+
+
+tableName
+
+The SparkSQL Table to 

[33/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/allclasses-noframe.html
--
diff --git a/site/docs/2.1.2/api/java/allclasses-noframe.html 
b/site/docs/2.1.2/api/java/allclasses-noframe.html
new file mode 100644
index 000..413f8db
--- /dev/null
+++ b/site/docs/2.1.2/api/java/allclasses-noframe.html
@@ -0,0 +1,1138 @@
+All Classes (Spark 2.1.2 JavaDoc)
+
+
+
+
+All Classes
+
+
+AbsoluteError
+Accumulable
+AccumulableInfo
+AccumulableInfo
+AccumulableParam
+Accumulator
+AccumulatorContext
+AccumulatorParam
+AccumulatorParam.DoubleAccumulatorParam$
+AccumulatorParam.FloatAccumulatorParam$
+AccumulatorParam.IntAccumulatorParam$
+AccumulatorParam.LongAccumulatorParam$
+AccumulatorParam.StringAccumulatorParam$
+AccumulatorV2
+AFTAggregator
+AFTCostFun
+AFTSurvivalRegression
+AFTSurvivalRegressionModel
+AggregatedDialect
+AggregatingEdgeContext
+Aggregator
+Aggregator
+Algo
+AllJobsCancelled
+AllReceiverIds
+ALS
+ALS
+ALS.InBlock$
+ALS.Rating
+ALS.Rating$
+ALS.RatingBlock$
+ALSModel
+AnalysisException
+And
+AnyDataType
+ApplicationAttemptInfo
+ApplicationInfo
+ApplicationsListResource
+ApplicationStatus
+ApplyInPlace
+AreaUnderCurve
+ArrayType
+AskPermissionToCommitOutput
+AssociationRules
+AssociationRules.Rule
+AsyncRDDActions
+Attribute
+AttributeGroup
+AttributeKeys
+AttributeType
+BaseRelation
+BaseRRDD
+BatchInfo
+BernoulliCellSampler
+BernoulliSampler
+Binarizer
+BinaryAttribute
+BinaryClassificationEvaluator
+BinaryClassificationMetrics
+BinaryLogisticRegressionSummary
+BinaryLogisticRegressionTrainingSummary
+BinarySample
+BinaryType
+BinomialBounds
+BisectingKMeans
+BisectingKMeans
+BisectingKMeansModel
+BisectingKMeansModel
+BisectingKMeansModel.SaveLoadV1_0$
+BisectingKMeansSummary
+BlacklistTracker
+BLAS
+BLAS
+BlockId
+BlockManagerId
+BlockManagerMessages
+BlockManagerMessages.BlockManagerHeartbeat
+BlockManagerMessages.BlockManagerHeartbeat$
+BlockManagerMessages.GetBlockStatus
+BlockManagerMessages.GetBlockStatus$
+BlockManagerMessages.GetExecutorEndpointRef
+BlockManagerMessages.GetExecutorEndpointRef$
+BlockManagerMessages.GetLocations
+BlockManagerMessages.GetLocations$
+BlockManagerMessages.GetLocationsMultipleBlockIds
+BlockManagerMessages.GetLocationsMultipleBlockIds$
+BlockManagerMessages.GetMatchingBlockIds
+BlockManagerMessages.GetMatchingBlockIds$
+BlockManagerMessages.GetMemoryStatus$
+BlockManagerMessages.GetPeers
+BlockManagerMessages.GetPeers$
+BlockManagerMessages.GetStorageStatus$
+BlockManagerMessages.HasCachedBlocks
+BlockManagerMessages.HasCachedBlocks$
+BlockManagerMessages.RegisterBlockManager
+BlockManagerMessages.RegisterBlockManager$
+BlockManagerMessages.RemoveBlock
+BlockManagerMessages.RemoveBlock$
+BlockManagerMessages.RemoveBroadcast
+BlockManagerMessages.RemoveBroadcast$
+BlockManagerMessages.RemoveExecutor
+BlockManagerMessages.RemoveExecutor$
+BlockManagerMessages.RemoveRdd
+BlockManagerMessages.RemoveRdd$
+BlockManagerMessages.RemoveShuffle
+BlockManagerMessages.RemoveShuffle$
+BlockManagerMessages.StopBlockManagerMaster$
+BlockManagerMessages.ToBlockManagerMaster
+BlockManagerMessages.ToBlockManagerSlave
+BlockManagerMessages.TriggerThreadDump$
+BlockManagerMessages.UpdateBlockInfo
+BlockManagerMessages.UpdateBlockInfo$
+BlockMatrix
+BlockNotFoundException
+BlockReplicationPolicy
+BlockStatus
+BlockUpdatedInfo
+BloomFilter
+BloomFilter.Version
+BooleanParam
+BooleanType
+BoostingStrategy
+BoundedDouble
+BreezeUtil
+Broadcast
+BroadcastBlockId
+Broker
+BucketedRandomProjectionLSH
+BucketedRandomProjectionLSHModel
+Bucketizer
+BufferReleasingInputStream
+BytecodeUtils
+ByteType
+CalendarIntervalType
+Catalog
+CatalystScan
+CategoricalSplit
+CausedBy
+CharType
+CheckpointReader
+CheckpointState
+ChiSqSelector
+ChiSqSelector
+ChiSqSelectorModel
+ChiSqSelectorModel
+ChiSqSelectorModel.SaveLoadV1_0$
+ChiSqTest
+ChiSqTest.Method
+ChiSqTest.Method$
+ChiSqTest.NullHypothesis$
+ChiSqTestResult
+CholeskyDecomposition
+ChunkedByteBufferInputStream
+ClassificationModel
+ClassificationModel
+Classifier
+CleanAccum
+CleanBroadcast
+CleanCheckpoint
+CleanRDD
+CleanShuffle
+CleanupTask
+CleanupTaskWeakReference
+ClosureCleaner
+ClusteringSummary
+CoarseGrainedClusterMessages
+CoarseGrainedClusterMessages.AddWebUIFilter
+CoarseGrainedClusterMessages.AddWebUIFilter$
+CoarseGrainedClusterMessages.GetExecutorLossReason
+CoarseGrainedClusterMessages.GetExecutorLossReason$
+CoarseGrainedClusterMessages.KillExecutors
+CoarseGrainedClusterMessages.KillExecutors$
+CoarseGrainedClusterMessages.KillTask
+CoarseGrainedClusterMessages.KillTask$
+CoarseGrainedClusterMessages.LaunchTask
+CoarseGrainedClusterMessages.LaunchTask$
+CoarseGrainedClusterMessages.RegisterClusterManager
+CoarseGrainedClusterMessages.RegisterClusterManager$
+CoarseGrainedClusterMessages.RegisteredExecutor$
+CoarseGrainedClusterMessages.RegisterExecutor

[43/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/hex.html
--
diff --git a/site/docs/2.1.2/api/R/hex.html b/site/docs/2.1.2/api/R/hex.html
new file mode 100644
index 000..ee4955b
--- /dev/null
+++ b/site/docs/2.1.2/api/R/hex.html
@@ -0,0 +1,79 @@
+R: hex
+
+
+
+
+
+hex {SparkR}    R Documentation
+
+hex
+
+Description
+
+Computes hex value of the given column.
+
+
+
+Usage
+
+
+hex(x)
+
+## S4 method for signature 'Column'
+hex(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+hex since 1.5.0
+
+
+
+See Also
+
+Other math_funcs: acos, asin,
+atan2, atan,
+bin, bround,
+cbrt, ceil,
+conv, corr,
+cosh, cos,
+covar_pop, cov,
+expm1, exp,
+factorial, floor,
+hypot, log10,
+log1p, log2,
+log, pmod,
+rint, round,
+shiftLeft,
+shiftRightUnsigned,
+shiftRight, signum,
+sinh, sin,
+sqrt, tanh,
+tan, toDegrees,
+toRadians, unhex
+
+
+
+Examples
+
+## Not run: hex(df$c)
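A rough Java equivalent, using org.apache.spark.sql.functions.hex and assuming an existing Dataset<Row> named df with a column c:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.hex;

    Dataset<Row> hexed = df.select(hex(df.col("c")));   // hex value of column c
    hexed.show();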
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/histogram.html
--
diff --git a/site/docs/2.1.2/api/R/histogram.html 
b/site/docs/2.1.2/api/R/histogram.html
new file mode 100644
index 000..9aedec9
--- /dev/null
+++ b/site/docs/2.1.2/api/R/histogram.html
@@ -0,0 +1,121 @@
+R: Compute histogram statistics for given column
+
+
+
+
+
+histogram {SparkR}    R Documentation
+
+Compute histogram statistics for given column
+
+Description
+
+This function computes a histogram for a given SparkR Column.
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame,characterOrColumn'
+histogram(df, col, nbins = 10)
+
+
+
+Arguments
+
+
+df
+
+the SparkDataFrame containing the Column to build the histogram from.
+
+col
+
+the column as Character string or a Column to build the histogram from.
+
+nbins
+
+the number of bins (optional). Default value is 10.
+
+
+
+
+Value
+
+a data.frame with the histogram statistics, i.e., counts and centroids.
+
+
+
+Note
+
+histogram since 2.0.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D 
+##D # Create a SparkDataFrame from the Iris dataset
+##D irisDF <- createDataFrame(iris)
+##D 
+##D # Compute histogram statistics
+##D histStats <- histogram(irisDF, irisDF$Sepal_Length, nbins = 12)
+##D 
+##D # Once SparkR has computed the histogram statistics, the histogram can be
+##D # rendered using the ggplot2 library:
+##D 
+##D require(ggplot2)
+##D plot <- ggplot(histStats, aes(x = centroids, y = counts)) +
+##D geom_bar(stat = "identity") +
+##D xlab("Sepal_Length") + ylab("Frequency")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/R/hour.html
--
diff --git a/site/docs/2.1.2/api/R/hour.html b/site/docs/2.1.2/api/R/hour.html
new file mode 100644
index 000..9331aff
--- /dev/null
+++ b/site/docs/2.1.2/api/R/hour.html
@@ -0,0 +1,71 @@
+R: hour
+
+
+
+
+
+hour 

[02/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
new file mode 100644
index 000..29034e8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
@@ -0,0 +1,330 @@
+BaseRRDD (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.r
+Class BaseRRDD<T,U>
+
+
+
+Object
+
+
+org.apache.spark.rdd.RDD<U>
+
+
+org.apache.spark.api.r.BaseRRDD<T,U>
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+Direct Known Subclasses:
+PairwiseRRDD, RRDD, StringRRDD
+
+
+
+public abstract class BaseRRDD<T,U>
+extends RDD<U>
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+BaseRRDD(RDD<T> parent,
+int numPartitions,
+byte[] func,
+String deserializer,
+String serializer,
+byte[] packageNames,
+Broadcast<Object>[] broadcastVars,
+scala.reflect.ClassTag<T> evidence$1,
+scala.reflect.ClassTag<U> evidence$2)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.collection.Iterator<U>
+compute(Partition partition,
+   TaskContext context)
+:: DeveloperApi ::
+ Implemented by subclasses to compute a given partition.
+
+
+
+Partition[]
+getPartitions()
+Implemented by subclasses to return the set of partitions 
in this RDD.
+
+
+
+
+
+
+
+Methods inherited from classorg.apache.spark.rdd.RDD
+aggregate,
 cache, cartesian,
 checkpoint,
 coalesce,
 collect, 
collect,
 context, 
count, countApprox, countApproxDistinct,
 countApproxDistinct,
 countByValue,
 countByValueApprox,
 dependencies,
 distinct, distinct,
 doubleRDDToDoubleRDDFunctions,
 
 filter, first, flatMap,
 fold,
 foreach,
 foreachPartition,
 getCheckpointFile,
 getNumPartitions,
 getStorageLevel,
 glom, groupBy,
 groupBy,
 groupBy,
 id, intersection,
 intersection,
 intersection,
 isCheckpointed,
 isEmpty, 
iterator, keyBy,
 localCheckpoint,
 map,
 mapPartitions,
 mapPartitionsWithIndex,
 max,
 min,
 name, numericRDDToDoubleRDDFunctions, 
partitioner,
 partitions,
 persist, 
persist,
 pipe,
 pipe,
 pipe,
 preferredLocations,
 randomSplit, rddToAsyncRDDActions,
 rddToOrderedRDDFunctions,
 rddToPairRDDFunctions,
 rddToSequenceFileRDDFunctions,
 reduce,
 repartition, sample,
 saveAsObjectFile,
 saveAsTextFile,
 saveAsTextFile,
 setName,
 sortBy,
 sparkContext,
 subtract,
 subtract, subtract,
 take, takeOrdered,
 takeSample,
 toDebugString,
 toJavaRDD, 
toLocalIterator,
 top,
 toString, treeAggregate,
 treeReduce,
 union,
 unpersist,
 zip,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipWithIndex,
 zipWithUniqueId
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+BaseRRDD
+public BaseRRDD(RDD<T> parent,
+int numPartitions,
+byte[] func,
+String deserializer,
+String serializer,
+byte[] packageNames,
+Broadcast<Object>[] broadcastVars,
+scala.reflect.ClassTag<T> evidence$1,
+scala.reflect.ClassTag<U> evidence$2)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+getPartitions
+public Partition[] getPartitions()
+Description copied from class:RDD
+Implemented by subclasses to return the set of partitions 
in this RDD. This method will only
+ be called once, so it is safe to implement a time-consuming computation in it.
+ 
+ The partitions in this array must satisfy the following property:
+   rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }
+Returns:(undocumented)
+
+
+
+
+
+
+
+compute
+public scala.collection.Iterator<U> compute(Partition partition,
+   TaskContext context)
+Description copied from class:RDD
+:: DeveloperApi ::
+ Implemented by subclasses to compute a given partition.
+
+Specified by:
+compute in class RDD<U>
+Parameters:partition - 
(undocumented)context 

[07/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
new file mode 100644
index 000..6927b66
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
@@ -0,0 +1,2088 @@
+JavaSparkContext (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java
+Class JavaSparkContext
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaSparkContext
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Closeable, AutoCloseable
+
+
+
+public class JavaSparkContext
+extends Object
+implements java.io.Closeable
+A Java-friendly version of SparkContext that returns
+ JavaRDDs and works with Java 
collections instead of Scala ones.
+ 
+ Only one SparkContext may be active per JVM.  You must stop() 
the active SparkContext before
+ creating a new one.  This limitation may eventually be removed; see 
SPARK-2243 for more details.
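A minimal, self-contained sketch of creating and stopping a context (the app name and master URL are placeholders):

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    SparkConf conf = new SparkConf().setAppName("example").setMaster("local[2]");
    JavaSparkContext jsc = new JavaSparkContext(conf);
    try {
      JavaRDD<Integer> rdd = jsc.parallelize(Arrays.asList(1, 2, 3));
      System.out.println("count = " + rdd.count());
    } finally {
      jsc.stop();   // only one SparkContext may be active per JVM
    }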
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaSparkContext()
+Create a JavaSparkContext that loads settings from system 
properties (for instance, when
+ launching with ./bin/spark-submit).
+
+
+
+JavaSparkContext(SparkConf conf)
+
+
+JavaSparkContext(SparkContext sc)
+
+
+JavaSparkContext(String master,
+String appName)
+
+
+JavaSparkContext(String master,
+String appName,
+SparkConf conf)
+
+
+JavaSparkContext(String master,
+String appName,
+String sparkHome,
+String jarFile)
+
+
+JavaSparkContext(String master,
+String appName,
+String sparkHome,
+String[] jars)
+
+
+JavaSparkContext(String master,
+String appName,
+String sparkHome,
+String[] jars,
+java.util.Map<String,String> environment)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+T,RAccumulableT,R
+accumulable(TinitialValue,
+   AccumulableParamT,Rparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+T,RAccumulableT,R
+accumulable(TinitialValue,
+   Stringname,
+   AccumulableParamT,Rparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+AccumulatorDouble
+accumulator(doubleinitialValue)
+Deprecated.
+use sc().doubleAccumulator(). Since 2.0.0.
+
+
+
+
+AccumulatorDouble
+accumulator(doubleinitialValue,
+   Stringname)
+Deprecated.
+use sc().doubleAccumulator(String). Since 
2.0.0.
+
+
+
+
+AccumulatorInteger
+accumulator(intinitialValue)
+Deprecated.
+use sc().longAccumulator(). Since 2.0.0.
+
+
+
+
+AccumulatorInteger
+accumulator(intinitialValue,
+   Stringname)
+Deprecated.
+use sc().longAccumulator(String). Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   AccumulatorParamTaccumulatorParam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   Stringname,
+   AccumulatorParamTaccumulatorParam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+void
+addFile(Stringpath)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addFile(Stringpath,
+   booleanrecursive)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addJar(Stringpath)
+Adds a JAR dependency for all tasks to be executed on this 
SparkContext in the future.
+
+
+
+String
+appName()
+
+
+JavaPairRDDString,PortableDataStream
+binaryFiles(Stringpath)
+Read a directory of binary files from HDFS, a local file 
system (available on all nodes),
+ or any Hadoop-supported file system URI as a byte array.
+
+
+
+JavaPairRDDString,PortableDataStream
+binaryFiles(Stringpath,
+   intminPartitions)
+Read a directory of binary files from HDFS, a local file 
system (available on all nodes),
+ or any Hadoop-supported file system URI as a byte array.
+
+
+
+JavaRDDbyte[]
+binaryRecords(Stringpath,
+  

[14/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/UnknownReason.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/UnknownReason.html 
b/site/docs/2.1.2/api/java/org/apache/spark/UnknownReason.html
new file mode 100644
index 000..8feb256
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/UnknownReason.html
@@ -0,0 +1,352 @@
+UnknownReason (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class UnknownReason
+
+
+
+Object
+
+
+org.apache.spark.UnknownReason
+
+
+
+
+
+
+
+
+public class UnknownReason
+extends Object
+:: DeveloperApi ::
+ We don't know why the task ended -- for example, because of a ClassNotFound 
exception when
+ deserializing the task result.
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+UnknownReason()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type  Method and Description
+
+abstract static boolean  canEqual(Object that)
+static boolean  countTowardsTaskFailures()
+abstract static boolean  equals(Object that)
+abstract static int  productArity()
+abstract static Object  productElement(int n)
+static scala.collection.Iterator<Object>  productIterator()
+static String  productPrefix()
+static String  toErrorString()
+
+Methods inherited from class Object
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+UnknownReason
+public UnknownReason()
+
+
+Method Detail
+
+toErrorString
+public static String toErrorString()
+
+countTowardsTaskFailures
+public static boolean countTowardsTaskFailures()
+
+canEqual
+public abstract static boolean canEqual(Object that)
+
+equals
+public abstract static boolean equals(Object that)
+
+productElement
+public abstract static Object productElement(int n)
+
+productArity
+public abstract static int productArity()
+
+productIterator
+public static scala.collection.Iterator<Object> productIterator()
+
+productPrefix
+public static String productPrefix()





[11/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
new file mode 100644
index 000..6aefe3e
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
@@ -0,0 +1,325 @@
+JavaNewHadoopRDD (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java
+Class JavaNewHadoopRDD<K,V>
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaPairRDD<K,V>
+
+
+org.apache.spark.api.java.JavaNewHadoopRDD<K,V>
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLike<scala.Tuple2<K,V>,JavaPairRDD<K,V>>
+
+
+
+public class JavaNewHadoopRDD<K,V>
+extends JavaPairRDD<K,V>
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaNewHadoopRDD(NewHadoopRDD<K,V> rdd,
+scala.reflect.ClassTag<K> kClassTag,
+scala.reflect.ClassTag<V> vClassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.reflect.ClassTag<K>
+kClassTag()
+
+
+<R> JavaRDD<R>
+mapPartitionsWithInputSplit(Function2<org.apache.hadoop.mapreduce.InputSplit,java.util.Iterator<scala.Tuple2<K,V>>,java.util.Iterator<R>> f,
+   boolean preservesPartitioning)
+Maps over a partition, providing the InputSplit that was used as the base of the partition (see the sketch after this summary).
+
+
+
+scala.reflect.ClassTag<V>
+vClassTag()
+
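A sketch of mapPartitionsWithInputSplit (the input path is illustrative, and jsc is an assumed JavaSparkContext; the cast from JavaPairRDD is how a JavaNewHadoopRDD is usually obtained from newAPIHadoopFile):

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaNewHadoopRDD;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.function.Function2;
    import scala.Tuple2;

    JavaPairRDD<LongWritable, Text> pairs = jsc.newAPIHadoopFile(
        "/tmp/input", TextInputFormat.class, LongWritable.class, Text.class,
        jsc.hadoopConfiguration());
    JavaNewHadoopRDD<LongWritable, Text> hadoopRDD =
        (JavaNewHadoopRDD<LongWritable, Text>) pairs;

    // Tag every line with the name of the file it came from, via the partition's InputSplit.
    JavaRDD<String> tagged = hadoopRDD.mapPartitionsWithInputSplit(
        (Function2<InputSplit, Iterator<Tuple2<LongWritable, Text>>, Iterator<String>>)
            (split, records) -> {
              String file = ((FileSplit) split).getPath().getName();
              List<String> out = new ArrayList<>();
              while (records.hasNext()) {
                out.add(file + ": " + records.next()._2().toString());
              }
              return out.iterator();
            },
        true);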
+
+
+
+
+
+Methods inherited from classorg.apache.spark.api.java.JavaPairRDD
+aggregate,
 aggregateByKey,
 aggregateByKey,
 aggregateByKey,
 cache,
 cartesian, checkpoint,
 classTag,
 coalesce,
 coalesce,
 cogroup,
 cogroup,
 cogroup,
 cogroup, cogroup,
 cogroup,
 cogroup,
 cogroup,
 cogroup,
 collect,
 collectAsMap,
 collectAsync,
 collectPartitions,
 combineByKey,
 combineByKey,
  combineByKey,
 combineByKey,
 context,
 count,
 countApprox,
 countApprox,
 countApproxDistinct,
 countApproxDistinctByKey,
 countApproxDistinctByKey,
 countApproxDistinctByKey,
 countAsync,
 countByKey,
 countByKeyApprox,
 countByKeyApprox,
 countByValue,
 countByValueApprox,
 countByValueApprox,
 distinct,
 distinct,
 filter,
 first,
 flatMap,
 flatMapToDouble, flatMapToPair,
 flatMapValues,
 fold,
 foldByKey,
 foldByKey,
 foldByKey,
 foreach,
 foreachAsync,
 foreachPartition,
 foreachPartitionAsync,
 fromJavaRDD,
 fromRDD,
 fullOuterJoin
 , fullOuterJoin,
 fullOuterJoin,
 getCheckpointFile,
 getNumPartitions,
 getStorageLevel,
 glom,
 groupBy,
 gr
 oupBy, groupByKey,
 groupByKey,
 groupByKey,
 groupWith,
 groupWith,
 groupWith,
 id, 
intersection, isCheckpointed,
 isEmpty,
 iterator,
 join,
 join,
 join,
 keyBy,
 keys, leftOuterJoin,
 leftOuterJoin,
 leftOuterJoin,
 lookup,
 map,
 mapPartitions,
 mapPartitions, mapPartitionsToDouble,
 mapPartitionsToDouble,
 mapPartitionsToPair,
 mapPartitionsToPair,
 mapPartitionsWithIndex,
 mapPartitionsWithIndex$default$2, mapToDouble,
 mapToPair,
 mapValues,
 max,
 min,
 name,
 partitionBy,
 
 partitioner, partitions,
 persist,
 pipe,
 pipe,
 pipe,
 pipe,
 pipe,
 rdd, 
reduce, reduceByKey,
 reduceByKey,
 reduceByKey,
 reduceByKeyLocally,
 repartition,
 repartitionAndSortWithinPartitions,
 repartitionAndSortWithinPartitions,
 rightOuterJoin,
 rightOuterJoin,
 rightOuterJoin,
 sample,
 sample,
 sampleByKey,
 sampleByKey,
 sampleByKeyExact,
 sampleByKeyExact,
 saveAsHadoopDataset,
 saveAsHadoopFile,
 saveAsHadoopFile,
 saveAsHadoopFile,
 saveAsNewAPIHadoopDataset,
 saveAsNewAPIHadoopFile,
 saveAsNewAPIHadoopFile,
 saveAsObjectFile,
 saveAsTextFile,
 saveAsTextFile,
 setName,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 subtract, subtract,
 subtract,
 subtractByKey,
 subtractByKey,
 subtractByKey,
 take,
 takeAsync,
 takeOrdered,
 takeOrdered,
 takeSample,
 takeSample,
 toDebugString,
 toLocalIterator,
 top,
 top,
 toRDD,
 treeAggregate,
 treeAggregate,
 treeReduce,
 treeReduce,
 union,
 unpersist,
 unpersist,
 values, 
wrapRDD,
 zip,
 zipPartitions,
 zipWithIndex,
 

[06/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
new file mode 100644
index 000..a54f037
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
@@ -0,0 +1,323 @@
+JavaSparkStatusTracker (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java
+Class 
JavaSparkStatusTracker
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaSparkStatusTracker
+
+
+
+
+
+
+
+
+public class JavaSparkStatusTracker
+extends Object
+Low-level status reporting APIs for monitoring job and 
stage progress.
+ 
+ These APIs intentionally provide very weak consistency semantics; consumers 
of these APIs should
+ be prepared to handle empty / missing information.  For example, a job's 
stage ids may be known
+ but the status API may not have any information about the details of those 
stages, so
+ getStageInfo could potentially return null for a 
valid stage id.
+ 
+ To limit memory usage, these APIs only provide information on recent jobs / 
stages.  These APIs
+ will provide information for the last spark.ui.retainedStages 
stages and
+ spark.ui.retainedJobs jobs.
+ 
+Note:
+  This class's constructor should be considered private and may be subject 
to change.
+
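For orientation, a minimal sketch (not part of this generated page) of how these status APIs are reached from a JavaSparkContext; the async job and the single poll below are illustrative only:

    import java.util.Arrays;
    import org.apache.spark.SparkJobInfo;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.JavaSparkStatusTracker;

    public class StatusPollSketch {
      public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "status-poll");
        JavaSparkStatusTracker tracker = sc.statusTracker();
        // Kick off an asynchronous job so there is something to observe.
        sc.parallelize(Arrays.asList(1, 2, 3, 4)).countAsync();
        for (int jobId : tracker.getActiveJobIds()) {
          SparkJobInfo info = tracker.getJobInfo(jobId);
          if (info != null) {  // may be null, per the weak-consistency note above
            System.out.println("job " + jobId + " -> " + info.status());
          }
        }
        sc.stop();
      }
    }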
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int[]
+getActiveJobIds()
+Returns an array containing the ids of all active 
jobs.
+
+
+
+int[]
+getActiveStageIds()
+Returns an array containing the ids of all active 
stages.
+
+
+
+int[]
+getJobIdsForGroup(StringjobGroup)
+Return a list of all known jobs in a particular job 
group.
+
+
+
+SparkJobInfo
+getJobInfo(intjobId)
+Returns job information, or null if the job 
info could not be found or was garbage collected.
+
+
+
+SparkStageInfo
+getStageInfo(intstageId)
+Returns stage information, or null if the 
stage info could not be found or was
+ garbage collected.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+getJobIdsForGroup
+publicint[]getJobIdsForGroup(StringjobGroup)
+Return a list of all known jobs in a particular job group.  
If jobGroup is null, then
+ returns all known jobs that are not associated with a job group.
+ 
+ The returned list may contain running, failed, and completed jobs, and may 
vary across
+ invocations of this method.  This method does not guarantee the order of the 
elements in
+ its result.
+Parameters:jobGroup 
- (undocumented)
+Returns:(undocumented)
+
+
+
+
+
+
+
+getActiveStageIds
+publicint[]getActiveStageIds()
+Returns an array containing the ids of all active stages.
+ 
+ This method does not guarantee the order of the elements in its result.
+Returns:(undocumented)
+
+
+
+
+
+
+
+getActiveJobIds
+publicint[]getActiveJobIds()
+Returns an array containing the ids of all active jobs.
+ 
+ This method does not guarantee the order of the elements in its result.
+Returns:(undocumented)
+
+
+
+
+
+
+
+getJobInfo
+publicSparkJobInfogetJobInfo(intjobId)
+Returns job information, or null if the job 
info could not be found or was garbage collected.
+Parameters:jobId - 
(undocumented)
+Returns:(undocumented)
+
+
+
+
+
+
+
+getStageInfo
+publicSparkStageInfogetStageInfo(intstageId)
+Returns stage information, or null if the 
stage info could not be found or was
+ garbage collected.
+Parameters:stageId - 
(undocumented)
+Returns:(undocumented)
+

[22/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html 
b/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
new file mode 100644
index 000..be22a87
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
@@ -0,0 +1,358 @@
+org.apache.spark
+Enum JobExecutionStatus
+
+
+
+Object
+
+
+EnumJobExecutionStatus
+
+
+org.apache.spark.JobExecutionStatus
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, ComparableJobExecutionStatus
+
+
+
+public enum JobExecutionStatus
+extends EnumJobExecutionStatus
+
+
+
+
+
+
+
+
+
+
+
+Enum Constant Summary
+
+Enum Constants
+
+Enum Constant and Description
+
+
+FAILED
+
+
+RUNNING
+
+
+SUCCEEDED
+
+
+UNKNOWN
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static JobExecutionStatus
+fromString(Stringstr)
+
+
+static JobExecutionStatus
+valueOf(Stringname)
+Returns the enum constant of this type with the specified 
name.
+
+
+
+static JobExecutionStatus[]
+values()
+Returns an array containing the constants of this enum 
type, in
+the order they are declared.
+
+
+
+
+
+
+
+Methods inherited from classEnum
+compareTo, equals, getDeclaringClass, hashCode, name, ordinal, toString, 
valueOf
+
+
+
+
+
+Methods inherited from classObject
+getClass, notify, notifyAll, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Enum Constant Detail
+
+
+
+
+
+RUNNING
+public static finalJobExecutionStatus RUNNING
+
+
+
+
+
+
+
+SUCCEEDED
+public static finalJobExecutionStatus SUCCEEDED
+
+
+
+
+
+
+
+FAILED
+public static finalJobExecutionStatus FAILED
+
+
+
+
+
+
+
+UNKNOWN
+public static finalJobExecutionStatus UNKNOWN
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+values
+public staticJobExecutionStatus[]values()
+Returns an array containing the constants of this enum 
type, in
+the order they are declared.  This method may be used to iterate
+over the constants as follows:
+
+for (JobExecutionStatus c : JobExecutionStatus.values())
+   System.out.println(c);
+
+Returns:an array containing the 
constants of this enum type, in the order they are declared
+
+
+
+
+
+
+
+valueOf
+public staticJobExecutionStatusvalueOf(Stringname)
+Returns the enum constant of this type with the specified 
name.
+The string must match exactly an identifier used to declare an
+enum constant in this type.  (Extraneous whitespace characters are 
+not permitted.)
+Parameters:name - 
the name of the enum constant to be returned.
+Returns:the enum constant with the 
specified name
+Throws:
+IllegalArgumentException - if this enum type has no constant 
with the specified name
+NullPointerException - if the argument is null
+
+
+
+
+
+
+
+fromString
+public staticJobExecutionStatusfromString(Stringstr)
+
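As a hedged illustration (not from this page), the enum is usually consumed when inspecting a SparkJobInfo obtained from the status tracker; the helper names below are made up for the example:

    import org.apache.spark.JobExecutionStatus;
    import org.apache.spark.SparkJobInfo;

    final class JobStatusUtil {
      // 'info' is assumed to come from JavaSparkStatusTracker.getJobInfo(jobId).
      static boolean isFinished(SparkJobInfo info) {
        JobExecutionStatus status = info.status();
        return status == JobExecutionStatus.SUCCEEDED || status == JobExecutionStatus.FAILED;
      }

      // Parse a status name, e.g. from a REST payload, using the fromString helper listed above.
      static JobExecutionStatus parse(String name) {
        return JobExecutionStatus.fromString(name);
      }
    }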

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6155a89/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html 
b/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
new file mode 100644
index 000..84e3160
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
@@ -0,0 +1,225 @@
+org.apache.spark
+Class UnknownReason
+
+
+
+Object
+
+
+org.apache.spark.UnknownReason
+
+
+
+
+
+
+
+
+public class UnknownReason
+extends Object
+:: DeveloperApi ::
+ We don't know why the task ended -- for example, because of a ClassNotFound 
exception when
+ deserializing the task result.
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+UnknownReason()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Objectthat)
+
+
+static boolean
+countTowardsTaskFailures()
+
+
+abstract static boolean
+equals(Objectthat)
+
+
+abstract static int
+productArity()
+
+
+abstract static Object
+productElement(intn)
+
+
+static 
scala.collection.IteratorObject
+productIterator()
+
+
+static String
+productPrefix()
+
+
+static String
+toErrorString()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+UnknownReason
+publicUnknownReason()
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+toErrorString
+public staticStringtoErrorString()
+
+
+
+
+
+
+
+countTowardsTaskFailures
+public staticbooleancountTowardsTaskFailures()
+
+
+
+
+
+
+
+canEqual
+public abstract staticbooleancanEqual(Objectthat)
+
+
+
+
+
+
+
+equals
+public abstract staticbooleanequals(Objectthat)
+
+
+
+
+
+
+
+productElement
+public abstract staticObjectproductElement(intn)
+
+
+
+
+
+
+
+productArity
+public abstract staticintproductArity()
+
+
+
+
+
+
+
+productIterator
+public 
staticscala.collection.IteratorObjectproductIterator()
+
+
+
+
+
+
+
+productPrefix
+public staticStringproductPrefix()
+



[45/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/date_format.html
--
diff --git a/site/docs/2.1.2/api/R/date_format.html 
b/site/docs/2.1.2/api/R/date_format.html
new file mode 100644
index 000..7f9ad3c
--- /dev/null
+++ b/site/docs/2.1.2/api/R/date_format.html
@@ -0,0 +1,87 @@
+date_format {SparkR}R 
Documentation
+
+date_format
+
+Description
+
+Converts a date/timestamp/string to a value of string in the format 
specified by the date
+format given by the second argument.
+
+
+
+Usage
+
+
+date_format(y, x)
+
+## S4 method for signature 'Column,character'
+date_format(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on.
+
+x
+
+date format specification.
+
+
+
+
+Details
+
+A pattern could be for instance
+dd.MM.yyyy and could return a string like '18.03.1993'. All
+pattern letters of java.text.SimpleDateFormat can be used.
+
+Note: Use specialized functions like year whenever possible. These benefit from a
+specialized implementation.
+
+
+
+Note
+
+date_format since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_sub,
+datediff, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: date_format(df$t, 'MM/dd/yyy')
+
+
+
+[Package SparkR version 2.1.2 
Index]
+
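For comparison, a hedged sketch of the corresponding call from the Java Dataset API (the input DataFrame and its timestamp column "t" are assumed, not taken from this page):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.date_format;

    final class DateFormatSketch {
      // Format timestamp column "t" as day.month.year strings in a new column.
      static Dataset<Row> withFormatted(Dataset<Row> df) {
        return df.withColumn("t_formatted", date_format(col("t"), "dd.MM.yyyy"));
      }
    }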

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/date_sub.html
--
diff --git a/site/docs/2.1.2/api/R/date_sub.html 
b/site/docs/2.1.2/api/R/date_sub.html
new file mode 100644
index 000..89d6661
--- /dev/null
+++ b/site/docs/2.1.2/api/R/date_sub.html
@@ -0,0 +1,75 @@
+date_sub 
{SparkR}R Documentation
+
+date_sub
+
+Description
+
+Returns the date that is x days before the date in column y.
+
+
+
+Usage
+
+
+date_sub(y, x)
+
+## S4 method for signature 'Column,numeric'
+date_sub(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on
+
+x
+
+Number of days to subtract.
+
+
+
+
+Note
+
+date_sub since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+datediff, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: date_sub(df$d, 1)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/datediff.html
--
diff --git a/site/docs/2.1.2/api/R/datediff.html 
b/site/docs/2.1.2/api/R/datediff.html
new file mode 100644
index 000..78ff84c
--- /dev/null
+++ b/site/docs/2.1.2/api/R/datediff.html
@@ -0,0 +1,75 @@
+datediff 
{SparkR}R Documentation
+
+datediff
+
+Description
+
+Returns the number of days from start to end.
+
+
+
+Usage
+
+
+datediff(y, x)
+
+## S4 method for signature 'Column'
+datediff(y, x)
+
+
+
+Arguments
+
+
+y
+
+end Column to use.
+
+x
+
+start Column to use.
+
+
+
+
+Note
+
+datediff since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, dayofmonth,
+dayofyear, from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: datediff(df$c, x)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/dayofmonth.html

[35/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/window.html
--
diff --git a/site/docs/2.1.2/api/R/window.html 
b/site/docs/2.1.2/api/R/window.html
new file mode 100644
index 000..fc41b25
--- /dev/null
+++ b/site/docs/2.1.2/api/R/window.html
@@ -0,0 +1,122 @@
+window 
{SparkR}R Documentation
+
+window
+
+Description
+
+Bucketize rows into one or more time windows given a timestamp specifying 
column. Window
+starts are inclusive but the window ends are exclusive, e.g. 12:05 will be in 
the window
+[12:05,12:10) but not in [12:00,12:05). Windows can support microsecond 
precision. Windows in
+the order of months are not supported.
+
+
+
+Usage
+
+
+window(x, ...)
+
+## S4 method for signature 'Column'
+window(x, windowDuration, slideDuration = NULL,
+  startTime = NULL)
+
+
+
+Arguments
+
+
+x
+
+a time Column. Must be of TimestampType.
+
+...
+
+further arguments to be passed to or from other methods.
+
+windowDuration
+
+a string specifying the width of the window, e.g. '1 second',
+'1 day 12 hours', '2 minutes'. Valid interval strings are 'week',
+'day', 'hour', 'minute', 'second', 'millisecond', 'microsecond'. Note that
+the duration is a fixed length of time, and does not vary over time
+according to a calendar. For example, '1 day' always means 86,400,000
+milliseconds, not a calendar day.
+
+slideDuration
+
+a string specifying the sliding interval of the window. Same format as
+windowDuration. A new window will be generated every
+slideDuration. Must be less than or equal to
+the windowDuration. This duration is likewise absolute, and does 
not
+vary according to a calendar.
+
+startTime
+
+the offset with respect to 1970-01-01 00:00:00 UTC with which to start
+window intervals. For example, in order to have hourly tumbling windows
+that start 15 minutes past the hour, e.g. 12:15-13:15, 13:15-14:15... provide
+startTime as "15 minutes".
+
+
+
+
+Value
+
+An output column of struct called 'window' by default with the nested 
columns 'start'
+and 'end'.
+
+
+
+Note
+
+window since 2.0.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+second, to_date,
+to_utc_timestamp,
+unix_timestamp, weekofyear,
+year
+
+
+
+Examples
+
+## Not run: 
+##D   # One minute windows every 15 seconds 10 seconds after the minute, e.g. 
09:00:10-09:01:10,
+##D   # 09:00:25-09:01:25, 09:00:40-09:01:40, ...
+##D   window(df$time, "1 minute", "15 seconds", "10 seconds")
+##D 
+##D   # One minute tumbling windows 15 seconds after the minute, e.g. 09:00:15-09:01:15,
+##D   # 09:01:15-09:02:15...
+##D   window(df$time, "1 minute", startTime = "15 seconds")
+##D 
+##D   # Thirty-second windows every 10 seconds, e.g. 09:00:00-09:00:30, 09:00:10-09:00:40, ...
+##D   window(df$time, "30 seconds", "10 seconds")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+
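A hedged Java-side counterpart of the sliding-window grouping shown above (a DataFrame df with a TimestampType column "time" is assumed):

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import static org.apache.spark.sql.functions.col;
    import static org.apache.spark.sql.functions.window;

    final class WindowCountSketch {
      // Count events per one-minute window that slides every 15 seconds.
      static Dataset<Row> perWindowCounts(Dataset<Row> df) {
        return df.groupBy(window(col("time"), "1 minute", "15 seconds")).count();
      }
    }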

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/windowOrderBy.html
--
diff --git a/site/docs/2.1.2/api/R/windowOrderBy.html 
b/site/docs/2.1.2/api/R/windowOrderBy.html
new file mode 100644
index 000..19a48fe
--- /dev/null
+++ b/site/docs/2.1.2/api/R/windowOrderBy.html
@@ -0,0 +1,71 @@
+windowOrderBy {SparkR}R 
Documentation
+
+windowOrderBy
+
+Description
+
+Creates a WindowSpec with the ordering defined.
+
+
+
+Usage
+
+
+windowOrderBy(col, ...)
+
+## S4 method for signature 'character'
+windowOrderBy(col, ...)
+
+## S4 method for signature 'Column'
+windowOrderBy(col, ...)
+
+
+
+Arguments
+
+
+col
+
+A column name or Column by which rows are ordered within
+windows.
+
+...
+
+Optional column names or Columns in addition to col, by
+which rows are ordered within windows.
+
+
+
+
+Note
+
+windowOrderBy(character) since 2.0.0
+
+windowOrderBy(Column) since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D   ws <- windowOrderBy("key1", "key2")
+##D   df1 <- select(df, over(lead("value", 1), ws))
+##D 
+##D   ws - 

[28/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html 
b/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
new file mode 100644
index 000..a54423b
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/Accumulable.html
@@ -0,0 +1,460 @@
+org.apache.spark
+Class AccumulableR,T
+
+
+
+Object
+
+
+org.apache.spark.AccumulableR,T
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+Direct Known Subclasses:
+Accumulator
+
+
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+public class AccumulableR,T
+extends Object
+implements java.io.Serializable
+A data type that can be accumulated, i.e. has a commutative 
and associative "add" operation,
+ but where the result type, R, may be different from the element 
type being added, T.
+ 
+ You must define how to add data, and how to merge two of these together.  For 
some data types,
+ such as a counter, these might be the same operation. In that case, you can 
use the simpler
+ Accumulator. They won't always be the same, 
though -- e.g., imagine you are
+ accumulating a set. You will add items to the set, and you will union two 
sets together.
+ 
+ Operations are not thread-safe.
+ 
+ param:  id ID of this accumulator; for internal use only.
+ param:  initialValue initial value of accumulator
+ param:  param helper object defining how to add elements of type 
R and T
+ param:  name human-readable name for use in Spark's web UI
+ param:  countFailedValues whether to accumulate values from failed tasks. 
This is set to true
+  for system and time metrics like serialization time 
or bytes spilled,
+  and false for things with absolute values like 
number of input rows.
+  This should be used for internal metrics only.
+See Also:Serialized 
Form
+
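Since the class is deprecated in favor of AccumulatorV2, a hedged sketch of the usual replacement, the built-in LongAccumulator (names and values illustrative):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.util.LongAccumulator;

    public class AccumulatorV2Sketch {
      public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext("local[2]", "acc-v2");
        LongAccumulator seen = jsc.sc().longAccumulator("records-seen");
        jsc.parallelize(Arrays.asList(1, 2, 3, 4)).foreach(x -> seen.add(1L));
        System.out.println(seen.value());  // read on the driver only, as with Accumulable.value()
        jsc.stop();
      }
    }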
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+Accumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+add(Tterm)
+Deprecated.
+Add more data to this accumulator / accumulable
+
+
+
+long
+id()
+Deprecated.
+
+
+
+R
+localValue()
+Deprecated.
+Get the current value of this accumulator from within a 
task.
+
+
+
+void
+merge(Rterm)
+Deprecated.
+Merge two accumulable objects together
+
+
+
+scala.OptionString
+name()
+Deprecated.
+
+
+
+void
+setValue(RnewValue)
+Deprecated.
+Set the accumulator's value.
+
+
+
+String
+toString()
+Deprecated.
+
+
+
+R
+value()
+Deprecated.
+Access the accumulator's current value; only allowed on 
driver.
+
+
+
+R
+zero()
+Deprecated.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+
+
+Accumulable
+publicAccumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+id
+publiclongid()
+Deprecated.
+
+
+
+
+
+
+
+name
+publicscala.OptionStringname()
+Deprecated.
+
+
+
+
+
+
+
+zero
+publicRzero()
+Deprecated.
+
+
+
+
+
+
+
+
+
+add
+publicvoidadd(Tterm)
+Deprecated.
+Add more data to this accumulator / accumulable
+Parameters:term - 
the data to add
+
+
+
+
+
+
+
+
+
+merge
+publicvoidmerge(Rterm)
+Deprecated.
+Merge two accumulable objects together
+ 
+ Normally, a user will not want to use this version, but will instead call 
add.
+Parameters:term - 
the other R that will get merged with this
+
+
+
+
+
+
+
+value
+publicRvalue()
+Deprecated.
+Access the accumulator's current value; only allowed on 
driver.
+Returns:(undocumented)
+
+
+
+
+
+
+
+localValue
+publicRlocalValue()
+Deprecated.
+Get the current value of this accumulator from within a 
task.
+ 
+ This is NOT the global value of the accumulator.  To get the global value 
after a
+ completed operation on the dataset, call value.
+ 
+ The typical use of this method is to 

[43/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/hex.html
--
diff --git a/site/docs/2.1.2/api/R/hex.html b/site/docs/2.1.2/api/R/hex.html
new file mode 100644
index 000..ee4955b
--- /dev/null
+++ b/site/docs/2.1.2/api/R/hex.html
@@ -0,0 +1,79 @@
+hex 
{SparkR}R Documentation
+
+hex
+
+Description
+
+Computes hex value of the given column.
+
+
+
+Usage
+
+
+hex(x)
+
+## S4 method for signature 'Column'
+hex(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+hex since 1.5.0
+
+
+
+See Also
+
+Other math_funcs: acos, asin,
+atan2, atan,
+bin, bround,
+cbrt, ceil,
+conv, corr,
+cosh, cos,
+covar_pop, cov,
+expm1, exp,
+factorial, floor,
+hypot, log10,
+log1p, log2,
+log, pmod,
+rint, round,
+shiftLeft,
+shiftRightUnsigned,
+shiftRight, signum,
+sinh, sin,
+sqrt, tanh,
+tan, toDegrees,
+toRadians, unhex
+
+
+
+Examples
+
+## Not run: hex(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/histogram.html
--
diff --git a/site/docs/2.1.2/api/R/histogram.html 
b/site/docs/2.1.2/api/R/histogram.html
new file mode 100644
index 000..9aedec9
--- /dev/null
+++ b/site/docs/2.1.2/api/R/histogram.html
@@ -0,0 +1,121 @@
+
+histogram 
{SparkR}R Documentation
+
+Compute histogram statistics for given column
+
+Description
+
+This function computes a histogram for a given SparkR Column.
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame,characterOrColumn'
+histogram(df, col, nbins = 10)
+
+
+
+Arguments
+
+
+df
+
+the SparkDataFrame containing the Column to build the histogram from.
+
+col
+
+the column as Character string or a Column to build the histogram from.
+
+nbins
+
+the number of bins (optional). Default value is 10.
+
+
+
+
+Value
+
+a data.frame with the histogram statistics, i.e., counts and centroids.
+
+
+
+Note
+
+histogram since 2.0.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D 
+##D # Create a SparkDataFrame from the Iris dataset
+##D irisDF <- createDataFrame(iris)
+##D 
+##D # Compute histogram statistics
+##D histStats <- histogram(irisDF, irisDF$Sepal_Length, nbins = 12)
+##D 
+##D # Once SparkR has computed the histogram statistics, the histogram can be
+##D # rendered using the ggplot2 library:
+##D 
+##D require(ggplot2)
+##D plot <- ggplot(histStats, aes(x = centroids, y = counts)) +
+##D geom_bar(stat = "identity") +
+##D xlab("Sepal_Length") + ylab("Frequency")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/hour.html
--
diff --git a/site/docs/2.1.2/api/R/hour.html b/site/docs/2.1.2/api/R/hour.html
new file mode 100644
index 000..9331aff
--- /dev/null
+++ b/site/docs/2.1.2/api/R/hour.html
@@ -0,0 +1,71 @@
+hour 

[25/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
new file mode 100644
index 000..85f6c0d
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/ComplexFutureAction.html
@@ -0,0 +1,489 @@
+org.apache.spark
+Class 
ComplexFutureActionT
+
+
+
+Object
+
+
+org.apache.spark.ComplexFutureActionT
+
+
+
+
+
+
+
+All Implemented Interfaces:
+FutureActionT, 
scala.concurrent.AwaitableT, scala.concurrent.FutureT
+
+
+
+public class ComplexFutureActionT
+extends Object
+implements FutureActionT
+A FutureAction for actions 
that could trigger multiple Spark jobs. Examples include take,
+ takeSample. Cancellation works by setting the cancelled flag to true and 
cancelling any pending
+ jobs.
+
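User code normally meets these futures through the async actions on (Java)RDDs rather than by constructing one directly; a minimal hedged sketch (take-style actions are the ones described above as potentially multi-job):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaFutureAction;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class AsyncActionSketch {
      public static void main(String[] args) throws Exception {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "async-count");
        JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));
        JavaFutureAction<Long> pending = rdd.countAsync();  // returns immediately
        System.out.println("count = " + pending.get());     // blocks until the job finishes
        sc.stop();
      }
    }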
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+
+
+
+Nested classes/interfaces inherited from 
interfacescala.concurrent.Future
+scala.concurrent.Future.InternalCallbackExecutor$
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+ComplexFutureAction(scala.Function1JobSubmitter,scala.concurrent.FutureTrun)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+cancel()
+Cancels the execution of this action.
+
+
+
+boolean
+isCancelled()
+Returns whether the action has been cancelled.
+
+
+
+boolean
+isCompleted()
+Returns whether the action has already been completed with 
a value or an exception.
+
+
+
+scala.collection.SeqObject
+jobIds()
+Returns the job IDs run by the underlying async 
operation.
+
+
+
+Uvoid
+onComplete(scala.Function1scala.util.TryT,Ufunc,
+  scala.concurrent.ExecutionContextexecutor)
+When this action is completed, either through an exception, 
or a value, applies the provided
+ function.
+
+
+
+ComplexFutureActionT
+ready(scala.concurrent.duration.DurationatMost,
+ scala.concurrent.CanAwaitpermit)
+Blocks until this action completes.
+
+
+
+T
+result(scala.concurrent.duration.DurationatMost,
+  scala.concurrent.CanAwaitpermit)
+Awaits and returns the result (of type T) of this 
action.
+
+
+
+scala.Optionscala.util.TryT
+value()
+The value of this Future.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.FutureAction
+get
+
+
+
+
+
+Methods inherited from interfacescala.concurrent.Future
+andThen, collect, failed, fallbackTo, filter, flatMap, foreach, map, 
mapTo, onFailure, onSuccess, recover, recoverWith, transform, withFilter, 
zip
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+ComplexFutureAction
+publicComplexFutureAction(scala.Function1JobSubmitter,scala.concurrent.FutureTrun)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+cancel
+publicvoidcancel()
+Description copied from interface:FutureAction
+Cancels the execution of this action.
+
+Specified by:
+cancelin
 interfaceFutureActionT
+
+
+
+
+
+
+
+
+isCancelled
+publicbooleanisCancelled()
+Description copied from interface:FutureAction
+Returns whether the action has been cancelled.
+
+Specified by:
+isCancelledin
 interfaceFutureActionT
+Returns:(undocumented)
+
+
+
+
+
+
+
+ready
+publicComplexFutureActionTready(scala.concurrent.duration.DurationatMost,
+   scala.concurrent.CanAwaitpermit)
+ throws InterruptedException,
+java.util.concurrent.TimeoutException
+Description copied from interface:FutureAction
+Blocks until this action completes.
+ 
+
+Specified by:
+readyin
 interfaceFutureActionT
+Specified by:
+readyin 
interfacescala.concurrent.AwaitableT
+Parameters:atMost - 
maximum wait time, which may be negative (no waiting is done), Duration.Inf
+   for unbounded waiting, or a finite positive 
durationpermit - (undocumented)
+Returns:this FutureAction
+Throws:
+InterruptedException

[27/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
new file mode 100644
index 000..8a8d442
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.IntAccumulatorParam$.html
@@ -0,0 +1,365 @@
+org.apache.spark
+Class 
AccumulatorParam.IntAccumulatorParam$
+
+
+
+Object
+
+
+org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, AccumulableParamObject,Object, AccumulatorParamObject
+
+
+Enclosing interface:
+AccumulatorParamT
+
+
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+public static class AccumulatorParam.IntAccumulatorParam$
+extends Object
+implements AccumulatorParamObject
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+
+
+
+Nested classes/interfaces inherited from 
interfaceorg.apache.spark.AccumulatorParam
+AccumulatorParam.DoubleAccumulatorParam$, 
AccumulatorParam.FloatAccumulatorParam$, 
AccumulatorParam.IntAccumulatorParam$, AccumulatorParam.LongAccumulatorParam$, 
AccumulatorParam.StringAccumulatorParam$
+
+
+
+
+
+
+
+
+Field Summary
+
+Fields
+
+Modifier and Type
+Field and Description
+
+
+static AccumulatorParam.IntAccumulatorParam$
+MODULE$
+Deprecated.
+Static reference to the singleton instance of this Scala 
object.
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+AccumulatorParam.IntAccumulatorParam$()
+Deprecated.
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+addInPlace(intt1,
+  intt2)
+Deprecated.
+
+
+
+int
+zero(intinitialValue)
+Deprecated.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.AccumulatorParam
+addAccumulator
+
+
+
+
+
+Methods inherited from interfaceorg.apache.spark.AccumulableParam
+addInPlace,
 zero
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Field Detail
+
+
+
+
+
+MODULE$
+public static finalAccumulatorParam.IntAccumulatorParam$ 
MODULE$
+Deprecated.
+Static reference to the singleton instance of this Scala 
object.
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+AccumulatorParam.IntAccumulatorParam$
+publicAccumulatorParam.IntAccumulatorParam$()
+Deprecated.
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+addInPlace
+publicintaddInPlace(intt1,
+ intt2)
+Deprecated.
+
+
+
+
+
+
+
+zero
+publicintzero(intinitialValue)
+Deprecated.
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
new file mode 100644
index 000..e2cd2c9
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/AccumulatorParam.LongAccumulatorParam$.html
@@ -0,0 +1,365 @@
+Constant Field Values
+Contents
+
+org.apache.*
+
+
+
+
+
+org.apache.*
+
+
+
+org.apache.spark.launcher.SparkLauncher
+
+Modifier and Type
+Constant Field
+Value
+
+
+
+
+
+publicstaticfinalString
+CHILD_CONNECTION_TIMEOUT
+"spark.launcher.childConectionTimeout"
+
+
+
+
+publicstaticfinalString
+CHILD_PROCESS_LOGGER_NAME
+"spark.launcher.childProcLoggerName"
+
+
+
+
+publicstaticfinalString
+DEPLOY_MODE
+"spark.submit.deployMode"
+
+
+
+
+publicstaticfinalString
+DRIVER_EXTRA_CLASSPATH
+"spark.driver.extraClassPath"
+
+
+
+
+publicstaticfinalString
+DRIVER_EXTRA_JAVA_OPTIONS
+"spark.driver.extraJavaOptions"
+
+
+
+
+publicstaticfinalString
+DRIVER_EXTRA_LIBRARY_PATH
+"spark.driver.extraLibraryPath"
+
+
+
+
+publicstaticfinalString
+DRIVER_MEMORY
+"spark.driver.memory"
+
+
+
+
+publicstaticfinalString
+EXECUTOR_CORES
+"spark.executor.cores"
+
+
+
+
+publicstaticfinalString
+EXECUTOR_EXTRA_CLASSPATH
+"spark.executor.extraClassPath"
+
+
+
+
+publicstaticfinalString
+EXECUTOR_EXTRA_JAVA_OPTIONS
+"spark.executor.extraJavaOptions"
+
+
+
+
+publicstaticfinalString
+EXECUTOR_EXTRA_LIBRARY_PATH
+"spark.executor.extraLibraryPath"
+
+
+
+
+publicstaticfinalString
+EXECUTOR_MEMORY
+"spark.executor.memory"
+
+
+
+
+publicstaticfinalString
+NO_RESOURCE
+"spark-internal"
+
+
+
+
+publicstaticfinalString
+SPARK_MASTER
+"spark.master"
+
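These keys are normally passed through setConf on a launcher; a hedged sketch, with the application jar path and main class as placeholders:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class LauncherSketch {
      public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
            .setMaster("local[2]")
            .setAppResource("/path/to/app.jar")     // placeholder
            .setMainClass("com.example.MyApp")      // placeholder
            .setConf(SparkLauncher.DRIVER_MEMORY, "2g")
            .setConf(SparkLauncher.EXECUTOR_MEMORY, "1g")
            .startApplication();
        System.out.println("state: " + handle.getState());
      }
    }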

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/deprecated-list.html
--
diff --git a/site/docs/2.1.2/api/java/deprecated-list.html 
b/site/docs/2.1.2/api/java/deprecated-list.html
new file mode 100644
index 000..51c3c20
--- /dev/null
+++ b/site/docs/2.1.2/api/java/deprecated-list.html
@@ -0,0 +1,611 @@
+Deprecated API
+Contents
+
+Deprecated Interfaces
+Deprecated Classes
+Deprecated Methods
+Deprecated Constructors
+
+
+
+
+
+
+
+
+Deprecated Interfaces
+
+Interface and Description
+
+
+
+org.apache.spark.AccumulableParam
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+
+
+
+
+
+
+
+
+Deprecated Classes
+
+Class and Description
+
+
+
+org.apache.spark.Accumulable
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.Accumulator
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.DoubleAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.FloatAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.IntAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.LongAccumulatorParam$
+use AccumulatorV2. Since 2.0.0.
+
+
+
+org.apache.spark.AccumulatorParam.StringAccumulatorParam$
+use 

[07/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
new file mode 100644
index 000..6927b66
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkContext.html
@@ -0,0 +1,2088 @@
+org.apache.spark.api.java
+Class JavaSparkContext
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaSparkContext
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Closeable, AutoCloseable
+
+
+
+public class JavaSparkContext
+extends Object
+implements java.io.Closeable
+A Java-friendly version of SparkContext that returns
+ JavaRDDs and works with Java 
collections instead of Scala ones.
+ 
+ Only one SparkContext may be active per JVM.  You must stop() 
the active SparkContext before
+ creating a new one.  This limitation may eventually be removed; see 
SPARK-2243 for more details.
+
+
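A minimal hedged sketch of creating and stopping a context via one of the constructors summarized below:

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class ContextLifecycleSketch {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("context-lifecycle");
        JavaSparkContext jsc = new JavaSparkContext(conf);  // only one active context per JVM
        long n = jsc.parallelize(Arrays.asList("a", "b", "c")).count();
        System.out.println(n);
        jsc.stop();  // stop it before creating another
      }
    }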
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaSparkContext()
+Create a JavaSparkContext that loads settings from system 
properties (for instance, when
+ launching with ./bin/spark-submit).
+
+
+
+JavaSparkContext(SparkConfconf)
+
+
+JavaSparkContext(SparkContextsc)
+
+
+JavaSparkContext(Stringmaster,
+StringappName)
+
+
+JavaSparkContext(Stringmaster,
+StringappName,
+SparkConfconf)
+
+
+JavaSparkContext(Stringmaster,
+StringappName,
+StringsparkHome,
+StringjarFile)
+
+
+JavaSparkContext(Stringmaster,
+StringappName,
+StringsparkHome,
+String[]jars)
+
+
+JavaSparkContext(Stringmaster,
+StringappName,
+StringsparkHome,
+String[]jars,
+
java.util.MapString,Stringenvironment)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+T,RAccumulableT,R
+accumulable(TinitialValue,
+   AccumulableParamT,Rparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+T,RAccumulableT,R
+accumulable(TinitialValue,
+   Stringname,
+   AccumulableParamT,Rparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+AccumulatorDouble
+accumulator(doubleinitialValue)
+Deprecated.
+use sc().doubleAccumulator(). Since 2.0.0.
+
+
+
+
+AccumulatorDouble
+accumulator(doubleinitialValue,
+   Stringname)
+Deprecated.
+use sc().doubleAccumulator(String). Since 
2.0.0.
+
+
+
+
+AccumulatorInteger
+accumulator(intinitialValue)
+Deprecated.
+use sc().longAccumulator(). Since 2.0.0.
+
+
+
+
+AccumulatorInteger
+accumulator(intinitialValue,
+   Stringname)
+Deprecated.
+use sc().longAccumulator(String). Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   AccumulatorParamTaccumulatorParam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   Stringname,
+   AccumulatorParamTaccumulatorParam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+void
+addFile(Stringpath)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addFile(Stringpath,
+   booleanrecursive)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addJar(Stringpath)
+Adds a JAR dependency for all tasks to be executed on this 
SparkContext in the future.
+
+
+
+String
+appName()
+
+
+JavaPairRDDString,PortableDataStream
+binaryFiles(Stringpath)
+Read a directory of binary files from HDFS, a local file 
system (available on all nodes),
+ or any Hadoop-supported file system URI as a byte array.
+
+
+
+JavaPairRDDString,PortableDataStream
+binaryFiles(Stringpath,
+   intminPartitions)
+Read a directory of binary files from HDFS, a local file 
system (available on all nodes),
+ or any Hadoop-supported file system URI as a byte array.
+
+
+
+JavaRDDbyte[]
+binaryRecords(Stringpath,
+  

[20/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
new file mode 100644
index 000..4c33803
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkConf.html
@@ -0,0 +1,1147 @@
+org.apache.spark
+Class SparkConf
+
+
+
+Object
+
+
+org.apache.spark.SparkConf
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, Cloneable
+
+
+
+public class SparkConf
+extends Object
+implements scala.Cloneable, scala.Serializable
+Configuration for a Spark application. Used to set various 
Spark parameters as key-value pairs.
+ 
+ Most of the time, you would create a SparkConf object with new 
SparkConf(), which will load
+ values from any spark.* Java system properties set in your 
application as well. In this case,
+ parameters you set directly on the SparkConf object take 
priority over system properties.
+ 
+ For unit tests, you can also call new SparkConf(false) to skip 
loading external settings and
+ get the same configuration no matter what the system properties are.
+ 
+ All setter methods in this class support chaining. For example, you can write
+ new SparkConf().setMaster("local").setAppName("My app").
+ 
+ param:  loadDefaults whether to also load values from Java system properties
+ 
+See Also:Serialized 
FormNote:
+  Once a SparkConf object is passed to Spark, it is cloned and can no 
longer be modified
+ by the user. Spark does not support modifying the configuration at 
runtime.
+
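A hedged sketch of the chaining style described above; the configuration values are illustrative:

    import org.apache.spark.SparkConf;

    public class ConfSketch {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf()            // also picks up spark.* system properties
            .setMaster("local[4]")
            .setAppName("conf-example")
            .set("spark.executor.memory", "1g");    // illustrative key/value
        System.out.println(conf.get("spark.app.name"));
        System.out.println(conf.contains("spark.executor.memory"));
      }
    }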
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkConf()
+Create a SparkConf that loads defaults from system 
properties and the classpath
+
+
+
+SparkConf(booleanloadDefaults)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+SparkConf
+clone()
+Copy this object
+
+
+
+boolean
+contains(Stringkey)
+Does the configuration contain a given parameter?
+
+
+
+String
+get(Stringkey)
+Get a parameter; throws a NoSuchElementException if it's 
not set
+
+
+
+String
+get(Stringkey,
+   StringdefaultValue)
+Get a parameter, falling back to a default if not set
+
+
+
+scala.Tuple2String,String[]
+getAll()
+Get all parameters as a list of pairs
+
+
+
+scala.Tuple2String,String[]
+getAllWithPrefix(Stringprefix)
+Get all parameters that start with prefix
+
+
+
+String
+getAppId()
+Returns the Spark application id, valid in the Driver after 
TaskScheduler registration and
+ from the start in the Executor.
+
+
+
+scala.collection.immutable.MapObject,String
+getAvroSchema()
+Gets all the avro schemas in the configuration used in the 
generic Avro record serializer
+
+
+
+boolean
+getBoolean(Stringkey,
+  booleandefaultValue)
+Get a parameter as a boolean, falling back to a default if 
not set
+
+
+
+static scala.OptionString
+getDeprecatedConfig(Stringkey,
+   SparkConfconf)
+Looks for available deprecated keys for the given config 
option, and return the first
+ value available.
+
+
+
+double
+getDouble(Stringkey,
+ doubledefaultValue)
+Get a parameter as a double, falling back to a default if 
not set
+
+
+
+scala.collection.Seqscala.Tuple2String,String
+getExecutorEnv()
+Get all executor environment variables set on this 
SparkConf
+
+
+
+int
+getInt(Stringkey,
+  intdefaultValue)
+Get a parameter as an integer, falling back to a default if 
not set
+
+
+
+long
+getLong(Stringkey,
+   longdefaultValue)
+Get a parameter as a long, falling back to a default if not 
set
+
+
+
+scala.OptionString
+getOption(Stringkey)
+Get a parameter as an Option
+
+
+
+long
+getSizeAsBytes(Stringkey)
+Get a size parameter as bytes; throws a 
NoSuchElementException if it's not set.
+
+
+
+long
+getSizeAsBytes(Stringkey,
+  longdefaultValue)
+Get a size parameter as bytes, falling back to a default if 
not set.
+
+
+
+long
+getSizeAsBytes(Stringkey,
+  StringdefaultValue)
+Get a size parameter as bytes, falling back to a 

[23/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html 
b/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
new file mode 100644
index 000..b236e74
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/InternalAccumulator.html
@@ -0,0 +1,499 @@
+org.apache.spark
+Class 
InternalAccumulator
+
+
+
+Object
+
+
+org.apache.spark.InternalAccumulator
+
+
+
+
+
+
+
+
+public class InternalAccumulator
+extends Object
+A collection of fields and methods concerned with internal 
accumulators that represent
+ task level metrics.
+
+
+
+
+
+
+
+
+
+
+
+Nested Class Summary
+
+Nested Classes
+
+Modifier and Type
+Class and Description
+
+
+static class
+InternalAccumulator.input$
+
+
+static class
+InternalAccumulator.output$
+
+
+static class
+InternalAccumulator.shuffleRead$
+
+
+static class
+InternalAccumulator.shuffleWrite$
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+InternalAccumulator()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static String
+DISK_BYTES_SPILLED()
+
+
+static String
+EXECUTOR_CPU_TIME()
+
+
+static String
+EXECUTOR_DESERIALIZE_CPU_TIME()
+
+
+static String
+EXECUTOR_DESERIALIZE_TIME()
+
+
+static String
+EXECUTOR_RUN_TIME()
+
+
+static String
+INPUT_METRICS_PREFIX()
+
+
+static String
+JVM_GC_TIME()
+
+
+static String
+MEMORY_BYTES_SPILLED()
+
+
+static String
+METRICS_PREFIX()
+
+
+static String
+OUTPUT_METRICS_PREFIX()
+
+
+static String
+PEAK_EXECUTION_MEMORY()
+
+
+static String
+RESULT_SERIALIZATION_TIME()
+
+
+static String
+RESULT_SIZE()
+
+
+static String
+SHUFFLE_READ_METRICS_PREFIX()
+
+
+static String
+SHUFFLE_WRITE_METRICS_PREFIX()
+
+
+static String
+TEST_ACCUM()
+
+
+static String
+UPDATED_BLOCK_STATUSES()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+InternalAccumulator
+publicInternalAccumulator()
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+METRICS_PREFIX
+public staticStringMETRICS_PREFIX()
+
+
+
+
+
+
+
+SHUFFLE_READ_METRICS_PREFIX
+public staticStringSHUFFLE_READ_METRICS_PREFIX()
+
+
+
+
+
+
+
+SHUFFLE_WRITE_METRICS_PREFIX
+public staticStringSHUFFLE_WRITE_METRICS_PREFIX()
+
+
+
+
+
+
+
+OUTPUT_METRICS_PREFIX
+public staticStringOUTPUT_METRICS_PREFIX()
+
+
+
+
+
+
+
+INPUT_METRICS_PREFIX
+public staticStringINPUT_METRICS_PREFIX()
+
+
+
+
+
+
+
+EXECUTOR_DESERIALIZE_TIME
+public staticStringEXECUTOR_DESERIALIZE_TIME()
+
+
+
+
+
+
+
+EXECUTOR_DESERIALIZE_CPU_TIME
+public staticStringEXECUTOR_DESERIALIZE_CPU_TIME()
+
+
+
+
+
+
+
+EXECUTOR_RUN_TIME
+public staticStringEXECUTOR_RUN_TIME()
+
+
+
+
+
+
+
+EXECUTOR_CPU_TIME
+public staticStringEXECUTOR_CPU_TIME()
+
+
+
+
+
+
+
+RESULT_SIZE
+public staticStringRESULT_SIZE()
+
+
+
+
+
+
+
+JVM_GC_TIME
+public staticStringJVM_GC_TIME()
+
+
+
+
+
+
+
+RESULT_SERIALIZATION_TIME
+public staticStringRESULT_SERIALIZATION_TIME()
+
+
+
+
+
+
+
+MEMORY_BYTES_SPILLED
+public staticStringMEMORY_BYTES_SPILLED()
+
+
+
+
+
+
+
+DISK_BYTES_SPILLED
+public staticStringDISK_BYTES_SPILLED()
+
+
+
+
+
+
+
+PEAK_EXECUTION_MEMORY
+public staticStringPEAK_EXECUTION_MEMORY()
+
+
+
+
+
+
+
+UPDATED_BLOCK_STATUSES
+public staticStringUPDATED_BLOCK_STATUSES()
+
+
+
+
+
+
+
+TEST_ACCUM
+public staticStringTEST_ACCUM()
+


[38/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/spark.gbt.html
--
diff --git a/site/docs/2.1.2/api/R/spark.gbt.html 
b/site/docs/2.1.2/api/R/spark.gbt.html
new file mode 100644
index 000..98b2b03
--- /dev/null
+++ b/site/docs/2.1.2/api/R/spark.gbt.html
@@ -0,0 +1,244 @@
+spark.gbt 
{SparkR}R Documentation
+
+Gradient Boosted Tree Model for Regression and Classification
+
+Description
+
+spark.gbt fits a Gradient Boosted Tree Regression model or 
Classification model on a
+SparkDataFrame. Users can call summary to get a summary of the 
fitted
+Gradient Boosted Tree model, predict to make predictions on new 
data, and
+write.ml/read.ml to save/load fitted models.
+For more details, see
+http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-regression;>
+GBT Regression and
+http://spark.apache.org/docs/latest/ml-classification-regression.html#gradient-boosted-tree-classifier;>
+GBT Classification
+
+
+
+Usage
+
+
+spark.gbt(data, formula, ...)
+
+## S4 method for signature 'SparkDataFrame,formula'
+spark.gbt(data, formula,
+  type = c("regression", "classification"), maxDepth = 5, maxBins = 32,
+  maxIter = 20, stepSize = 0.1, lossType = NULL, seed = NULL,
+  subsamplingRate = 1, minInstancesPerNode = 1, minInfoGain = 0,
+  checkpointInterval = 10, maxMemoryInMB = 256, cacheNodeIds = FALSE)
+
+## S4 method for signature 'GBTRegressionModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTClassificationModel'
+predict(object, newData)
+
+## S4 method for signature 'GBTRegressionModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'GBTClassificationModel,character'
+write.ml(object, path,
+  overwrite = FALSE)
+
+## S4 method for signature 'GBTRegressionModel'
+summary(object)
+
+## S4 method for signature 'GBTClassificationModel'
+summary(object)
+
+## S3 method for class 'summary.GBTRegressionModel'
+print(x, ...)
+
+## S3 method for class 'summary.GBTClassificationModel'
+print(x, ...)
+
+
+
+Arguments
+
+
+data
+
+a SparkDataFrame for training.
+
+formula
+
+a symbolic description of the model to be fitted. Currently only a few 
formula
+operators are supported, including '~', ':', '+', and '-'.
+
+...
+
+additional arguments passed to the method.
+
+type
+
+type of model, one of "regression" or "classification", to fit
+
+maxDepth
+
+Maximum depth of the tree (= 0).
+
+maxBins
+
+Maximum number of bins used for discretizing continuous features and for 
choosing
+how to split on features at each node. More bins give higher granularity. Must 
be
+= 2 and = number of categories in any categorical feature.
+
+maxIter
+
+Param for maximum number of iterations (= 0).
+
+stepSize
+
+Param for Step size to be used for each iteration of optimization.
+
+lossType
+
+Loss function which GBT tries to minimize.
+For classification, must be logistic. For regression, must be one 
of
+squared (L2) and absolute (L1), default is 
squared.
+
+seed
+
+integer seed for random number generation.
+
+subsamplingRate
+
+Fraction of the training data used for learning each decision tree, in
+range (0, 1].
+
+minInstancesPerNode
+
+Minimum number of instances each child must have after a split. If a
+split causes the left or right child to have fewer than
+minInstancesPerNode, the split will be discarded as invalid. Should be
+>= 1.
+
+minInfoGain
+
+Minimum information gain for a split to be considered at a tree node.
+
+checkpointInterval
+
+Param for setting the checkpoint interval (>= 1) or disabling checkpointing (-1).
+
+maxMemoryInMB
+
+Maximum memory in MB allocated to histogram aggregation.
+
+cacheNodeIds
+
+If FALSE, the algorithm will pass trees to executors to match instances with
+nodes. If TRUE, the algorithm will cache node IDs for each instance. Caching
+can speed up training of deeper trees. Users can set how often should the
+cache be checkpointed or disable it by setting checkpointInterval.
+
+object
+
+A fitted Gradient Boosted Tree regression model or classification model.
+
+newData
+
+a SparkDataFrame for testing.
+
+path
+
+The directory where the model is saved.
+
+overwrite
+
+Overwrites or not if the output path already exists. Default is FALSE
+which means throw exception if the output path exists.
+
+x
+
+summary object of Gradient Boosted Tree regression model or classification 
model
+returned by summary.
+
+
+
+
+Value
+
+spark.gbt returns a fitted Gradient Boosted Tree model.
+
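For readers on the JVM side, below is a rough Java sketch of the corresponding spark.ml estimator (GBTRegressor), which is what spark.gbt wraps for regression. It is an illustration only: the libsvm input path, the "label"/"features" column convention, and the local master URL are assumptions, not taken from the SparkR page; the parameter values mirror the R defaults shown in the Usage block (maxDepth = 5, maxIter = 20, stepSize = 0.1).

import org.apache.spark.ml.regression.GBTRegressionModel;
import org.apache.spark.ml.regression.GBTRegressor;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class GBTExample {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("gbt-sketch")          // placeholder application name
        .master("local[2]")             // placeholder master
        .getOrCreate();

    // Assumed input: a Dataset<Row> with "label" and "features" columns.
    Dataset<Row> train = spark.read().format("libsvm").load("data/mllib/sample_libsvm_data.txt");

    GBTRegressor gbt = new GBTRegressor()
        .setMaxDepth(5)
        .setMaxIter(20)
        .setStepSize(0.1);

    GBTRegressionModel model = gbt.fit(train);
    model.transform(train).show(5);     // predictions on (here) the training data
    spark.stop();
  }
}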

[01/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
Repository: spark-website
Updated Branches:
  refs/heads/apache-asf-site [created] a6d9cbdef


http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
new file mode 100644
index 000..19fea40
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/r/RRDD.html
@@ -0,0 +1,2158 @@
+RRDD (Spark 2.1.2 JavaDoc)
+
+
+org.apache.spark.api.r
+Class RRDD<T>
+
+Object
+  org.apache.spark.rdd.RDD<U>
+    org.apache.spark.api.r.BaseRRDD<T,byte[]>
+      org.apache.spark.api.r.RRDD<T>
+
+All Implemented Interfaces:
+java.io.Serializable
+
+public class RRDD<T>
+extends BaseRRDD<T,byte[]>
+An RDD that stores serialized R objects as Array[Byte].
+See Also: Serialized Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+RRDD(RDD<T> parent,
+     byte[] func,
+     String deserializer,
+     String serializer,
+     byte[] packageNames,
+     Object[] broadcastVars,
+     scala.reflect.ClassTag<T> evidence$4)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static RDDT
+$plus$plus(RDDTother)
+
+
+static UU
+aggregate(UzeroValue,
+ scala.Function2U,T,UseqOp,
+ scala.Function2U,U,UcombOp,
+ scala.reflect.ClassTagUevidence$30)
+
+
+JavaRDDbyte[]
+asJavaRDD()
+
+
+static RDDT
+cache()
+
+
+static URDDscala.Tuple2T,U
+cartesian(RDDUother,
+ scala.reflect.ClassTagUevidence$5)
+
+
+static void
+checkpoint()
+
+
+static RDDT
+coalesce(intnumPartitions,
+booleanshuffle,
+scala.OptionPartitionCoalescerpartitionCoalescer,
+scala.math.OrderingTord)
+
+
+static boolean
+coalesce$default$2()
+
+
+static scala.OptionPartitionCoalescer
+coalesce$default$3()
+
+
+static scala.math.OrderingT
+coalesce$default$4(intnumPartitions,
+  booleanshuffle,
+  scala.OptionPartitionCoalescerpartitionCoalescer)
+
+
+static Object
+collect()
+
+
+static URDDU
+collect(scala.PartialFunctionT,Uf,
+   scala.reflect.ClassTagUevidence$29)
+
+
+static 
scala.collection.IteratorU
+compute(Partitionpartition,
+   TaskContextcontext)
+
+
+static SparkContext
+context()
+
+
+static long
+count()
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+
+
+static double
+countApprox$default$2()
+
+
+static long
+countApproxDistinct(doublerelativeSD)
+
+
+static long
+countApproxDistinct(intp,
+   intsp)
+
+
+static double
+countApproxDistinct$default$1()
+
+
+static 
scala.collection.MapT,Object
+countByValue(scala.math.OrderingTord)
+
+
+static scala.math.OrderingT
+countByValue$default$1()
+
+
+static PartialResultscala.collection.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence,
+  scala.math.OrderingTord)
+
+
+static double
+countByValueApprox$default$2()
+
+
+static scala.math.OrderingT
+countByValueApprox$default$3(longtimeout,
+doubleconfidence)
+
+
+static JavaRDD<byte[]>
+createRDDFromArray(JavaSparkContext jsc,
+                   byte[][] arr)
+Create an RRDD given a sequence of byte arrays.
+
+
+
+static JavaRDD<byte[]>
+createRDDFromFile(JavaSparkContext jsc,
+                  String fileName,
+                  int parallelism)
+Create an RRDD given a temporary file name.
+
+
+
+static JavaSparkContext
+createSparkContext(String master,
+                   String appName,
+                   String sparkHome,
+                   String[] jars,
+                   java.util.Map<Object,Object> sparkEnvirMap,
+                   java.util.Map<Object,Object> sparkExecutorEnvMap)
+
+
+static scala.collection.SeqDependency?
+dependencies()
+
+
+static RDDT
+distinct()
+
+
+static RDDT
+distinct(intnumPartitions,
+scala.math.OrderingTord)
+
+
+static scala.math.OrderingT
+distinct$default$2(intnumPartitions)
+
+
+static RDDT
+filter(scala.Function1T,Objectf)
+
+
+static T
+first()
+
+
+static URDDU

[31/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/index-all.html
--
diff --git a/site/docs/2.1.2/api/java/index-all.html 
b/site/docs/2.1.2/api/java/index-all.html
new file mode 100644
index 000..67505d4
--- /dev/null
+++ b/site/docs/2.1.2/api/java/index-all.html
@@ -0,0 +1,46481 @@
+Index (Spark 2.1.2 JavaDoc)
+$ABCDEFGHIJKLMNOPQRSTUVWXYZ_
+
+
+$
+
+$colon$bslash(B,
 Function2A, B, B) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$colon$plus(B,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$div$colon(B,
 Function2B, A, B) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$greater(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$greater(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$greater$eq(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$greater$eq(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$less(A) - 
Static method in class org.apache.spark.sql.types.Decimal
+
+$less(A) - 
Static method in class org.apache.spark.storage.RDDInfo
+
+$less$eq(A)
 - Static method in class org.apache.spark.sql.types.Decimal
+
+$less$eq(A)
 - Static method in class org.apache.spark.storage.RDDInfo
+
+$minus$greater(T)
 - Static method in class org.apache.spark.ml.param.DoubleParam
+
+$minus$greater(T)
 - Static method in class org.apache.spark.ml.param.FloatParam
+
+$plus$colon(B,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$eq(T) - 
Static method in class org.apache.spark.Accumulator
+
+Deprecated.
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.api.r.RRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.EdgeRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.impl.EdgeRDDImpl
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.impl.VertexRDDImpl
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.graphx.VertexRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.HadoopRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.JdbcRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.NewHadoopRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.PartitionPruningRDD
+
+$plus$plus(RDDT)
 - Static method in class org.apache.spark.rdd.UnionRDD
+
+$plus$plus(GenTraversableOnceB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$colon(TraversableOnceB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$colon(TraversableB,
 CanBuildFromRepr, B, That) - Static method in class 
org.apache.spark.sql.types.StructType
+
+$plus$plus$eq(R)
 - Static method in class org.apache.spark.Accumulator
+
+Deprecated.
+
+
+
+
+
+A
+
+abortJob(JobContext)
 - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+Aborts a job after the writes fail.
+
+abortJob(JobContext)
 - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
+abortTask(TaskAttemptContext)
 - Method in class org.apache.spark.internal.io.FileCommitProtocol
+
+Aborts a task after the writes have failed.
+
+abortTask(TaskAttemptContext)
 - Method in class org.apache.spark.internal.io.HadoopMapReduceCommitProtocol
+
+abs(Column)
 - Static method in class org.apache.spark.sql.functions
+
+Computes the absolute value.
+
+abs() - 
Method in class org.apache.spark.sql.types.Decimal
+
+absent() - 
Static method in class org.apache.spark.api.java.Optional
+
+AbsoluteError - Class in org.apache.spark.mllib.tree.loss
+
+:: DeveloperApi ::
+ Class for absolute error loss calculation (for regression).
+
+AbsoluteError()
 - Constructor for class org.apache.spark.mllib.tree.loss.AbsoluteError
+
+accept(Parsers)
 - Static method in class org.apache.spark.ml.feature.RFormulaParser
+
+accept(ES,
 Function1ES, ListObject) - Static method in class 
org.apache.spark.ml.feature.RFormulaParser
+
+accept(String,
 PartialFunctionObject, U) - Static method in class 
org.apache.spark.ml.feature.RFormulaParser
+
+acceptIf(Function1Object,
 

[39/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/sd.html
--
diff --git a/site/docs/2.1.2/api/R/sd.html b/site/docs/2.1.2/api/R/sd.html
new file mode 100644
index 000..1018604
--- /dev/null
+++ b/site/docs/2.1.2/api/R/sd.html
@@ -0,0 +1,85 @@
+R: sd
+
+sd {SparkR} R Documentation
+
+sd
+
+Description
+
+Aggregate function: alias for stddev_samp
+
+
+
+Usage
+
+
+sd(x, na.rm = FALSE)
+
+stddev(x)
+
+## S4 method for signature 'Column'
+sd(x)
+
+## S4 method for signature 'Column'
+stddev(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+na.rm
+
+currently not used.
+
+
+
+
+Note
+
+sd since 1.6.0
+
+stddev since 1.6.0
+
+
+
+See Also
+
+stddev_pop, stddev_samp
+
+Other agg_funcs: agg, avg,
+countDistinct, count,
+first, kurtosis,
+last, max,
+mean, min,
+skewness, stddev_pop,
+stddev_samp, sumDistinct,
+sum, var_pop,
+var_samp, var
+
+
+
+Examples
+
+## Not run: 
+##D stddev(df$c)
+##D select(df, stddev(df$age))
+##D agg(df, sd(df$age))
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/second.html
--
diff --git a/site/docs/2.1.2/api/R/second.html 
b/site/docs/2.1.2/api/R/second.html
new file mode 100644
index 000..25cf1e6
--- /dev/null
+++ b/site/docs/2.1.2/api/R/second.html
@@ -0,0 +1,71 @@
+R: second
+
+second {SparkR} R Documentation
+
+second
+
+Description
+
+Extracts the seconds as an integer from a given date/timestamp/string.
+
+
+
+Usage
+
+
+second(x)
+
+## S4 method for signature 'Column'
+second(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+second since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+next_day, quarter,
+to_date, to_utc_timestamp,
+unix_timestamp, weekofyear,
+window, year
+
+
+
+Examples
+
+## Not run: second(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/select.html
--
diff --git a/site/docs/2.1.2/api/R/select.html 
b/site/docs/2.1.2/api/R/select.html
new file mode 100644
index 000..82cc5b6
--- /dev/null
+++ b/site/docs/2.1.2/api/R/select.html
@@ -0,0 +1,150 @@
+R: Select
+
+select {SparkR} R Documentation
+
+Select
+
+Description
+
+Selects a set of columns with names or Column expressions.
+
+
+
+Usage
+
+
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame'
+x$name
+
+## S4 replacement method for signature 'SparkDataFrame'
+x$name <- value
+
+## S4 method for signature 'SparkDataFrame,character'
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,Column'
+select(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,list'
+select(x, col)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+col
+
+a list of columns or single Column or name.
+
+...
+
+additional column(s) if only one column is specified in col.
+If more than one column is assigned in col, ...
+should be left empty.
+
+name
+
+name of a Column (without being wrapped by "").
+
+value
+
+a Column or an atomic vector in the length of 1 as literal value, or 
NULL.
+If NULL, the specified Column is dropped.
+
+
+
+
+Value
+
+A new SparkDataFrame with selected columns.
+
+
+
+Note
+
+$ since 1.4.0
+
+$<- since 1.4.0
+
+select(SparkDataFrame, character) since 1.4.0
+
+select(SparkDataFrame, Column) since 1.4.0
+
+select(SparkDataFrame, list) since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,

[03/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
new file mode 100644
index 000..8153556
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction.html
@@ -0,0 +1,219 @@
+VoidFunction (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface VoidFunction<T>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface VoidFunction<T>
+extends java.io.Serializable
+A function with no return value.
+
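A minimal Java sketch of how this interface is typically used, here with JavaRDD.foreach (foreach(VoidFunction f) is listed on the JavaRDDLike page elsewhere in this commit). The app name, master URL, and input list are placeholders.

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.VoidFunction;

public class VoidFunctionExample {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("void-fn").setMaster("local[2]"));
    JavaRDD<String> words = jsc.parallelize(Arrays.asList("a", "b", "c"));

    // foreach takes a VoidFunction<T>: it is invoked for its side effect only.
    words.foreach(new VoidFunction<String>() {
      @Override
      public void call(String s) throws Exception {
        System.out.println(s);
      }
    });
    jsc.stop();
  }
}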
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+call(Tt)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+void call(T t)
+  throws Exception
+Throws:
+Exception
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
new file mode 100644
index 000..1dad476
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/VoidFunction2.html
@@ -0,0 +1,221 @@
+VoidFunction2 (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface VoidFunction2<T1,T2>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface VoidFunction2<T1,T2>
+extends java.io.Serializable
+A two-argument function that takes arguments of type T1 and 
T2 with no return value.
+
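A minimal sketch: the interface is implemented directly and its call method invoked by hand. As an assumption about typical use (not stated on this page), this is also the shape expected by callbacks such as foreachRDD in the streaming Java API.

import org.apache.spark.api.java.function.VoidFunction2;

public class VoidFunction2Example {
  public static void main(String[] args) throws Exception {
    // A two-argument callback: consumes both values, returns nothing.
    VoidFunction2<String, Integer> report = new VoidFunction2<String, Integer>() {
      @Override
      public void call(String name, Integer count) throws Exception {
        System.out.println(name + " -> " + count);
      }
    };
    report.call("events", 42);
  }
}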
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+void
+call(T1v1,
+T2v2)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+void call(T1 v1,
+          T2 v2)
+  throws Exception
+Throws:
+Exception
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/package-frame.html
--
diff --git 

[22/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html 
b/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
new file mode 100644
index 000..be22a87
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/JobExecutionStatus.html
@@ -0,0 +1,358 @@
+JobExecutionStatus (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Enum JobExecutionStatus
+
+
+
+Object
+  Enum<JobExecutionStatus>
+    org.apache.spark.JobExecutionStatus
+
+All Implemented Interfaces:
+java.io.Serializable, Comparable<JobExecutionStatus>
+
+public enum JobExecutionStatus
+extends Enum<JobExecutionStatus>
+
+
+
+
+
+
+
+
+
+
+
+Enum Constant Summary
+
+Enum Constants
+
+Enum Constant and Description
+
+
+FAILED
+
+
+RUNNING
+
+
+SUCCEEDED
+
+
+UNKNOWN
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static JobExecutionStatus
+fromString(Stringstr)
+
+
+static JobExecutionStatus
+valueOf(Stringname)
+Returns the enum constant of this type with the specified 
name.
+
+
+
+static JobExecutionStatus[]
+values()
+Returns an array containing the constants of this enum 
type, in
+the order they are declared.
+
+
+
+
+
+
+
+Methods inherited from classEnum
+compareTo, equals, getDeclaringClass, hashCode, name, ordinal, toString, 
valueOf
+
+
+
+
+
+Methods inherited from classObject
+getClass, notify, notifyAll, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Enum Constant Detail
+
+
+
+
+
+RUNNING
+public static finalJobExecutionStatus RUNNING
+
+
+
+
+
+
+
+SUCCEEDED
+public static finalJobExecutionStatus SUCCEEDED
+
+
+
+
+
+
+
+FAILED
+public static finalJobExecutionStatus FAILED
+
+
+
+
+
+
+
+UNKNOWN
+public static finalJobExecutionStatus UNKNOWN
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+values
+public static JobExecutionStatus[] values()
+Returns an array containing the constants of this enum 
type, in
+the order they are declared.  This method may be used to iterate
+over the constants as follows:
+
+for (JobExecutionStatus c : JobExecutionStatus.values())
+   System.out.println(c);
+
+Returns:an array containing the 
constants of this enum type, in the order they are declared
+
+
+
+
+
+
+
+valueOf
+public static JobExecutionStatus valueOf(String name)
+Returns the enum constant of this type with the specified 
name.
+The string must match exactly an identifier used to declare an
+enum constant in this type.  (Extraneous whitespace characters are 
+not permitted.)
+Parameters:name - 
the name of the enum constant to be returned.
+Returns:the enum constant with the 
specified name
+Throws:
+IllegalArgumentException - if this enum type has no constant 
with the specified name
+NullPointerException - if the argument is null
+
+
+
+
+
+
+
+fromString
+public static JobExecutionStatus fromString(String str)
+
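A small Java sketch using only the members listed above (values(), valueOf(String), and the constants); fromString(String) is the Spark-provided parser shown in the same listing.

import org.apache.spark.JobExecutionStatus;

public class JobStatusExample {
  public static void main(String[] args) {
    // Iterate over all constants, as in the values() description above.
    for (JobExecutionStatus s : JobExecutionStatus.values()) {
      System.out.println(s);
    }
    // valueOf requires the exact constant name.
    JobExecutionStatus done = JobExecutionStatus.valueOf("SUCCEEDED");
    System.out.println(done == JobExecutionStatus.SUCCEEDED);   // true
  }
}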

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html 
b/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
new file mode 100644
index 000..84e3160
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/JobSubmitter.html
@@ -0,0 +1,225 @@
+JobSubmitter (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface CoGroupFunction<K,V1,V2,R>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface CoGroupFunction<K,V1,V2,R>
+extends java.io.Serializable
+A function that returns zero or more output records from 
each grouping key and its values from 2
+ Datasets.
+
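A minimal Java sketch that implements the interface and drives it by hand with plain iterators. In real code a CoGroupFunction is normally passed to KeyValueGroupedDataset.cogroup together with an Encoder for the result type; that usage is an assumption here and is not shown.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import org.apache.spark.api.java.function.CoGroupFunction;

public class CoGroupExample {
  public static void main(String[] args) throws Exception {
    // For each key, emit one summary line built from both sides of the co-group.
    CoGroupFunction<Integer, String, String, String> summarize =
        new CoGroupFunction<Integer, String, String, String>() {
          @Override
          public Iterator<String> call(Integer key, Iterator<String> left, Iterator<String> right)
              throws Exception {
            int l = 0, r = 0;
            while (left.hasNext())  { left.next();  l++; }
            while (right.hasNext()) { right.next(); r++; }
            List<String> out = new ArrayList<>();
            out.add("key=" + key + " left=" + l + " right=" + r);
            return out.iterator();
          }
        };

    Iterator<String> result =
        summarize.call(1, Arrays.asList("a", "b").iterator(), Arrays.asList("x").iterator());
    while (result.hasNext()) System.out.println(result.next());
  }
}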
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.Iterator<R>
+call(K key,
+     java.util.Iterator<V1> left,
+     java.util.Iterator<V2> right)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+java.util.Iterator<R> call(K key,
+                           java.util.Iterator<V1> left,
+                           java.util.Iterator<V2> right)
+   throws Exception
+Throws:
+Exception
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
new file mode 100644
index 000..628dfa8
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/DoubleFlatMapFunction.html
@@ -0,0 +1,219 @@
+DoubleFlatMapFunction (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java.function
+Interface DoubleFlatMapFunction<T>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface DoubleFlatMapFunction<T>
+extends java.io.Serializable
+A function that returns zero or more records of type Double 
from each input record.
+
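A minimal Java sketch: the function turns one input record into zero or more doubles and is the argument type of JavaRDDLike.flatMapToDouble (listed elsewhere in this commit). The call below drives it directly, without a cluster; the input string is a placeholder.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import org.apache.spark.api.java.function.DoubleFlatMapFunction;

public class DoubleFlatMapExample {
  public static void main(String[] args) throws Exception {
    // Parse a comma-separated line into individual double values.
    DoubleFlatMapFunction<String> parse = new DoubleFlatMapFunction<String>() {
      @Override
      public Iterator<Double> call(String line) throws Exception {
        List<Double> out = new ArrayList<>();
        for (String tok : line.split(",")) {
          if (!tok.isEmpty()) out.add(Double.parseDouble(tok));
        }
        return out.iterator();
      }
    };

    Iterator<Double> values = parse.call("1.5,2.0,3.25");
    while (values.hasNext()) System.out.println(values.next());
  }
}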
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.IteratorDouble
+call(Tt)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+java.util.Iterator<Double> call(T t)
+throws Exception
+Throws:
+Exception
+
+org.apache.spark
+Class ExecutorRemoved
+
+
+
+Object
+
+
+org.apache.spark.ExecutorRemoved
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, scala.Equals, scala.Product
+
+
+
+public class ExecutorRemoved
+extends Object
+implements scala.Product, scala.Serializable
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+ExecutorRemoved(StringexecutorId)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Objectthat)
+
+
+abstract static boolean
+equals(Objectthat)
+
+
+String
+executorId()
+
+
+abstract static int
+productArity()
+
+
+abstract static Object
+productElement(intn)
+
+
+static 
scala.collection.IteratorObject
+productIterator()
+
+
+static String
+productPrefix()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+Methods inherited from interfacescala.Product
+productArity, productElement, productIterator, productPrefix
+
+
+
+
+
+Methods inherited from interfacescala.Equals
+canEqual, equals
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+ExecutorRemoved
+public ExecutorRemoved(String executorId)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+canEqual
+public abstract static boolean canEqual(Object that)
+
+equals
+public abstract static boolean equals(Object that)
+
+productElement
+public abstract static Object productElement(int n)
+
+productArity
+public abstract static int productArity()
+
+productIterator
+public static scala.collection.Iterator<Object> productIterator()
+
+productPrefix
+public static String productPrefix()
+
+executorId
+public String executorId()
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html 
b/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
new file mode 100644
index 000..4edc799
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/ExpireDeadHosts.html
@@ -0,0 +1,323 @@
+ExpireDeadHosts (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class ExpireDeadHosts
+
+
+
+Object
+
+

[19/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
new file mode 100644
index 000..38c0b01
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkContext.html
@@ -0,0 +1,2527 @@
+SparkContext (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class SparkContext
+
+
+
+Object
+
+
+org.apache.spark.SparkContext
+
+
+
+
+
+
+
+
+public class SparkContext
+extends Object
+Main entry point for Spark functionality. A SparkContext 
represents the connection to a Spark
+ cluster, and can be used to create RDDs, accumulators and broadcast variables 
on that cluster.
+ 
+ Only one SparkContext may be active per JVM.  You must stop() 
the active SparkContext before
+ creating a new one.  This limitation may eventually be removed; see 
SPARK-2243 for more details.
+ 
+ param:  config a Spark Config object describing the application 
configuration. Any settings in
+   this config overrides the default configs as well as system 
properties.
+
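A minimal Java sketch of the lifecycle described above: one active SparkContext per JVM, with stop() called before another can be created. The app name and master URL are placeholders; applicationId() is one of the methods listed further down this page.

import org.apache.spark.SparkConf;
import org.apache.spark.SparkContext;

public class SparkContextExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf()
        .setAppName("context-sketch")   // placeholder application name
        .setMaster("local[2]");         // placeholder master URL

    SparkContext sc = new SparkContext(conf);
    try {
      System.out.println("application id: " + sc.applicationId());
    } finally {
      sc.stop();   // release the slot so another SparkContext can be created in this JVM
    }
  }
}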
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkContext()
+Create a SparkContext that loads settings from system 
properties (for instance, when
+ launching with ./bin/spark-submit).
+
+
+
+SparkContext(SparkConf config)
+
+
+SparkContext(String master,
+             String appName,
+             SparkConf conf)
+Alternative constructor that allows setting common Spark properties directly
+
+
+
+SparkContext(String master,
+             String appName,
+             String sparkHome,
+             scala.collection.Seq<String> jars,
+             scala.collection.Map<String,String> environment)
+Alternative constructor that allows setting common Spark 
properties directly
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R,TAccumulableR,T
+accumulable(RinitialValue,
+   AccumulableParamR,Tparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+R,TAccumulableR,T
+accumulable(RinitialValue,
+   Stringname,
+   AccumulableParamR,Tparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+R,TAccumulableR,T
+accumulableCollection(RinitialValue,
+ 
scala.Function1R,scala.collection.generic.GrowableTevidence$9,
+ scala.reflect.ClassTagRevidence$10)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   AccumulatorParamTparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+TAccumulatorT
+accumulator(TinitialValue,
+   Stringname,
+   AccumulatorParamTparam)
+Deprecated.
+use AccumulatorV2. Since 2.0.0.
+
+
+
+
+void
+addFile(Stringpath)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addFile(Stringpath,
+   booleanrecursive)
+Add a file to be downloaded with this Spark job on every 
node.
+
+
+
+void
+addJar(Stringpath)
+Adds a JAR dependency for all tasks to be executed on this 
SparkContext in the future.
+
+
+
+void
+addSparkListener(org.apache.spark.scheduler.SparkListenerInterfacelistener)
+:: DeveloperApi ::
+ Register a listener to receive up-calls from events that happen during 
execution.
+
+
+
+scala.OptionString
+applicationAttemptId()
+
+
+String
+applicationId()
+A unique identifier for the Spark application.
+
+
+
+String
+appName()
+
+
+RDDscala.Tuple2String,PortableDataStream
+binaryFiles(Stringpath,
+   intminPartitions)
+Get an RDD for a Hadoop-readable dataset as 
PortableDataStream for each file
+ (useful for binary data)
+
+
+
+RDDbyte[]
+binaryRecords(Stringpath,
+ intrecordLength,
+ org.apache.hadoop.conf.Configurationconf)
+Load data from a flat binary file, assuming the length of 
each record is constant.
+
+
+
+TBroadcastT
+broadcast(Tvalue,
+ scala.reflect.ClassTagTevidence$11)
+Broadcast a read-only variable to the cluster, returning a
+ Broadcast object for reading it in 
distributed functions.
+
+
+
+void

[42/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/limit.html
--
diff --git a/site/docs/2.1.2/api/R/limit.html b/site/docs/2.1.2/api/R/limit.html
new file mode 100644
index 000..309b624
--- /dev/null
+++ b/site/docs/2.1.2/api/R/limit.html
@@ -0,0 +1,109 @@
+R: Limit
+
+limit {SparkR} R Documentation
+
+Limit
+
+Description
+
+Limit the resulting SparkDataFrame to the number of rows specified.
+
+
+
+Usage
+
+
+limit(x, num)
+
+## S4 method for signature 'SparkDataFrame,numeric'
+limit(x, num)
+
+
+
+Arguments
+
+
+x
+
+A SparkDataFrame
+
+num
+
+The number of rows to return
+
+
+
+
+Value
+
+A new SparkDataFrame containing the number of rows specified.
+
+
+
+Note
+
+limit since 1.4.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path - path/to/file.json
+##D df - read.json(path)
+##D limitedDF - limit(df, 10)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/lit.html
--
diff --git a/site/docs/2.1.2/api/R/lit.html b/site/docs/2.1.2/api/R/lit.html
new file mode 100644
index 000..e227436
--- /dev/null
+++ b/site/docs/2.1.2/api/R/lit.html
@@ -0,0 +1,72 @@
+R: lit
+
+lit {SparkR} R Documentation
+
+lit
+
+Description
+
+A new Column is created to represent the literal 
value.
+If the parameter is a Column, it is returned 
unchanged.
+
+
+
+Usage
+
+
+lit(x)
+
+## S4 method for signature 'ANY'
+lit(x)
+
+
+
+Arguments
+
+
+x
+
+a literal value or a Column.
+
+
+
+
+Note
+
+lit since 1.5.0
+
+
+
+See Also
+
+Other normal_funcs: abs,
+bitwiseNOT, coalesce,
+column, expr,
+greatest, ifelse,
+isnan, least,
+nanvl, negate,
+randn, rand,
+struct, when
+
+
+
+Examples
+
+## Not run: 
+##D lit(df$name)
+##D select(df, lit("x"))
+##D select(df, lit("2015-01-01"))
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/locate.html
--
diff --git a/site/docs/2.1.2/api/R/locate.html 
b/site/docs/2.1.2/api/R/locate.html
new file mode 100644
index 000..8240ffd
--- /dev/null
+++ b/site/docs/2.1.2/api/R/locate.html
@@ -0,0 +1,92 @@
+R: locate
+
+locate {SparkR} R Documentation
+
+locate
+
+Description
+
+Locate the position of the first occurrence of substr.
+
+
+
+Usage
+
+
+locate(substr, str, ...)
+
+## S4 method for signature 'character,Column'
+locate(substr, str, pos = 1)
+
+
+
+Arguments
+
+
+substr
+
+a character string to be matched.
+
+str
+
+a Column where matches are sought for each entry.
+
+...
+
+further arguments to be passed to or from other methods.
+
+pos
+
+start position of search.
+
+
+
+
+Details
+
+Note: the position is 1-based, not zero-based. Returns 0 if substr
+could not be found in str.
+
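For the JVM APIs, a counterpart lives in org.apache.spark.sql.functions. A hedged Java sketch follows; df and its string column "s" are assumptions, and the 0-means-not-found convention matches the note above.

import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.locate;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class LocateExample {
  // Returns the 1-based position of "ab" in column "s", starting the search at position 1;
  // a result of 0 means the substring was not found.
  static Dataset<Row> withPosition(Dataset<Row> df) {
    return df.select(locate("ab", col("s"), 1).alias("pos"));
  }
}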
+
+
+Note
+
+locate since 1.5.0
+
+
+
+See Also
+
+Other string_funcs: ascii,
+base64, concat_ws,

[08/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
new file mode 100644
index 000..c249acb
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDDLike.html
@@ -0,0 +1,1786 @@
+JavaRDDLike (Spark 2.1.2 JavaDoc)
+org.apache.spark.api.java
+Interface JavaRDDLike<T,This extends JavaRDDLike<T,This>>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+All Known Implementing Classes:
+JavaDoubleRDD, JavaHadoopRDD, JavaNewHadoopRDD, JavaPairRDD, JavaRDD
+
+
+
+public interface JavaRDDLike<T,This extends JavaRDDLike<T,This>>
+extends scala.Serializable
+Defines operations common to several Java RDD 
implementations.
+ 
+Note:
+  This trait is not intended to be implemented by user code.
+
+
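A short Java sketch exercising a few of the operations from the method summary below through JavaRDD, one of the implementing classes named above. The app name, master URL, and input numbers are placeholders.

import java.util.Arrays;
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaRDDLikeExample {
  public static void main(String[] args) {
    JavaSparkContext jsc =
        new JavaSparkContext(new SparkConf().setAppName("rdd-like").setMaster("local[2]"));
    JavaRDD<Integer> nums = jsc.parallelize(Arrays.asList(1, 2, 3, 4));

    long n = nums.count();                        // number of elements
    List<Integer> all = nums.collect();           // materialize on the driver
    Integer sum = nums.fold(0, (a, b) -> a + b);  // associative combine with a zero value

    System.out.println(n + " elements, " + all + ", sum=" + sum);
    jsc.stop();
  }
}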
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+Aggregate the elements of each partition, and then the 
results for all the partitions, using
+ given combine functions and a neutral "zero value".
+
+
+
+UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+Return the Cartesian product of this RDD and another one, 
that is, the RDD of all pairs of
+ elements (a, b) where a is in this and b is in 
other.
+
+
+
+void
+checkpoint()
+Mark this RDD for checkpointing.
+
+
+
+scala.reflect.ClassTagT
+classTag()
+
+
+java.util.ListT
+collect()
+Return an array that contains all of the elements in this 
RDD.
+
+
+
+JavaFutureActionjava.util.ListT
+collectAsync()
+The asynchronous version of collect, which 
returns a future for
+ retrieving an array containing all of the elements in this RDD.
+
+
+
+java.util.ListT[]
+collectPartitions(int[]partitionIds)
+Return an array that contains all of the elements in a 
specific partition of this RDD.
+
+
+
+SparkContext
+context()
+The SparkContext that this RDD was created 
on.
+
+
+
+long
+count()
+Return the number of elements in the RDD.
+
+
+
+PartialResultBoundedDouble
+countApprox(longtimeout)
+Approximate version of count() that returns a potentially 
incomplete result
+ within a timeout, even if not all tasks have finished.
+
+
+
+PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+Approximate version of count() that returns a potentially 
incomplete result
+ within a timeout, even if not all tasks have finished.
+
+
+
+long
+countApproxDistinct(doublerelativeSD)
+Return approximate number of distinct elements in the 
RDD.
+
+
+
+JavaFutureActionLong
+countAsync()
+The asynchronous version of count, which 
returns a
+ future for counting the number of elements in this RDD.
+
+
+
+java.util.MapT,Long
+countByValue()
+Return the count of each unique value in this RDD as a map 
of (value, count) pairs.
+
+
+
+PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout)
+Approximate version of countByValue().
+
+
+
+PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence)
+Approximate version of countByValue().
+
+
+
+T
+first()
+Return the first element in this RDD.
+
+
+
+UJavaRDDU
+flatMap(FlatMapFunctionT,Uf)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+JavaDoubleRDD
+flatMapToDouble(DoubleFlatMapFunctionTf)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+K2,V2JavaPairRDDK2,V2
+flatMapToPair(PairFlatMapFunctionT,K2,V2f)
+Return a new RDD by first applying a function to all 
elements of this
+  RDD, and then flattening the results.
+
+
+
+T
+fold(TzeroValue,
+Function2T,T,Tf)
+Aggregate the elements of each partition, and then the 
results for all the partitions, using a
+ given associative function and a neutral "zero value".
+
+
+
+void
+foreach(VoidFunctionTf)
+Applies a function f to all elements of this RDD.
+
+
+

[49/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/00Index.html
--
diff --git a/site/docs/2.1.2/api/R/00Index.html 
b/site/docs/2.1.2/api/R/00Index.html
new file mode 100644
index 000..541a952
--- /dev/null
+++ b/site/docs/2.1.2/api/R/00Index.html
@@ -0,0 +1,1585 @@
+R: R Frontend for Apache Spark
+
+R Frontend for Apache Spark
+Documentation for package SparkR version 2.1.2
+
+DESCRIPTION file.
+
+
+Help Pages
+
+
+
+A
+B
+C
+D
+E
+F
+G
+H
+I
+J
+K
+L
+M
+N
+O
+P
+Q
+R
+S
+T
+U
+V
+W
+Y
+misc
+
+
+
+-- A --
+
+
+abs
+abs
+abs-method
+abs
+acos
+acos
+acos-method
+acos
+add_months
+add_months
+add_months-method
+add_months
+AFTSurvivalRegressionModel-class
+S4 class that represents a AFTSurvivalRegressionModel
+agg
+summarize
+agg-method
+summarize
+alias
+alias
+alias-method
+alias
+ALSModel-class
+S4 class that represents an ALSModel
+approxCountDistinct
+Returns the approximate number of distinct items in a group
+approxCountDistinct-method
+Returns the approximate number of distinct items in a group
+approxQuantile
+Calculates the approximate quantiles of a numerical column of a 
SparkDataFrame
+approxQuantile-method
+Calculates the approximate quantiles of a numerical column of a 
SparkDataFrame
+arrange
+Arrange Rows by Variables
+arrange-method
+Arrange Rows by Variables
+array_contains
+array_contains
+array_contains-method
+array_contains
+as.data.frame
+Download data from a SparkDataFrame into a R data.frame
+as.data.frame-method
+Download data from a SparkDataFrame into a R data.frame
+as.DataFrame
+Create a SparkDataFrame
+as.DataFrame.default
+Create a SparkDataFrame
+asc
+A set of operations working with SparkDataFrame columns
+ascii
+ascii
+ascii-method
+ascii
+asin
+asin
+asin-method
+asin
+atan
+atan
+atan-method
+atan
+atan2
+atan2
+atan2-method
+atan2
+attach
+Attach SparkDataFrame to R search path
+attach-method
+Attach SparkDataFrame to R search path
+avg
+avg
+avg-method
+avg
+
+
+-- B --
+
+
+base64
+base64
+base64-method
+base64
+between
+between
+between-method
+between
+bin
+bin
+bin-method
+bin
+bitwiseNOT
+bitwiseNOT
+bitwiseNOT-method
+bitwiseNOT
+bround
+bround
+bround-method
+bround
+
+
+-- C --
+
+
+cache
+Cache
+cache-method
+Cache
+cacheTable
+Cache Table
+cacheTable.default
+Cache Table
+cancelJobGroup
+Cancel active jobs for the specified group
+cancelJobGroup.default
+Cancel active jobs for the specified group
+cast
+Casts the column to a different data type.
+cast-method
+Casts the column to a different data type.
+cbrt
+cbrt
+cbrt-method
+cbrt
+ceil
+Computes the ceiling of the given value
+ceil-method
+Computes the ceiling of the given value
+ceiling
+Computes the ceiling of the given value
+ceiling-method
+Computes the ceiling of the given value
+clearCache
+Clear Cache
+clearCache.default
+Clear Cache
+clearJobGroup
+Clear current job group ID and its description
+clearJobGroup.default
+Clear current job group ID and its description
+coalesce
+Coalesce
+coalesce-method
+Coalesce
+collect
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+collect-method
+Collects all the elements of a SparkDataFrame and coerces them into an R 
data.frame.
+colnames
+Column Names of SparkDataFrame
+colnames-method
+Column Names of SparkDataFrame
+colnames-
+Column Names of SparkDataFrame
+colnames--method
+Column Names of SparkDataFrame
+coltypes
+coltypes
+coltypes-method
+coltypes
+coltypes-
+coltypes
+coltypes--method
+coltypes
+column
+S4 class that represents a SparkDataFrame column
+Column-class
+S4 class that represents a SparkDataFrame column
+column-method
+S4 class that represents a SparkDataFrame column
+columnfunctions
+A set of operations working with SparkDataFrame columns
+columns
+Column Names of SparkDataFrame
+columns-method
+Column Names of SparkDataFrame
+concat
+concat
+concat-method
+concat
+concat_ws
+concat_ws
+concat_ws-method
+concat_ws
+contains
+A set of operations working with SparkDataFrame columns
+conv
+conv
+conv-method
+conv
+corr
+corr
+corr-method
+corr
+cos
+cos
+cos-method
+cos
+cosh
+cosh
+cosh-method
+cosh
+count
+Count
+count-method
+Count
+count-method
+Returns the number of rows in a SparkDataFrame
+countDistinct
+Count Distinct Values
+countDistinct-method
+Count Distinct Values
+cov
+cov
+cov-method
+cov
+covar_pop
+covar_pop
+covar_pop-method
+covar_pop
+covar_samp
+cov
+covar_samp-method
+cov
+crc32
+crc32
+crc32-method
+crc32
+createDataFrame
+Create a 

[30/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/index.html
--
diff --git a/site/docs/2.1.2/api/java/index.html 
b/site/docs/2.1.2/api/java/index.html
new file mode 100644
index 000..08d3ceb
--- /dev/null
+++ b/site/docs/2.1.2/api/java/index.html
@@ -0,0 +1,75 @@
+
+
+
+
+Spark 2.1.2 JavaDoc
+
+tmpTargetPage = "" + window.location.search;
+if (tmpTargetPage != "" && tmpTargetPage != "undefined")
+tmpTargetPage = tmpTargetPage.substring(1);
+if (tmpTargetPage.indexOf(":") != -1 || (tmpTargetPage != "" && 
!validURL(tmpTargetPage)))
+tmpTargetPage = "undefined";
+targetPage = tmpTargetPage;
+function validURL(url) {
+try {
+url = decodeURIComponent(url);
+}
+catch (error) {
+return false;
+}
+var pos = url.indexOf(".html");
+if (pos == -1 || pos != url.length - 5)
+return false;
+var allowNumber = false;
+var allowSep = false;
+var seenDot = false;
+for (var i = 0; i < url.length - 5; i++) {
+var ch = url.charAt(i);
+if ('a' <= ch && ch <= 'z' ||
+'A' <= ch && ch <= 'Z' ||
+ch == '$' ||
+ch == '_' ||
+ch.charCodeAt(0) > 127) {
+allowNumber = true;
+allowSep = true;
+} else if ('0' <= ch && ch <= '9'
+|| ch == '-') {
+if (!allowNumber)
+ return false;
+} else if (ch == '/' || ch == '.') {
+if (!allowSep)
+return false;
+allowNumber = false;
+allowSep = false;
+if (ch == '.')
+ seenDot = true;
+if (ch == '/' && seenDot)
+ return false;
+} else {
+return false;
+}
+}
+return true;
+}
+function loadFrames() {
+if (targetPage != "" && targetPage != "undefined")
+ top.classFrame.location = top.targetPage;
+}
+
+
+
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+Frame Alert
+This document is designed to be viewed using the frames feature. If you see 
this message, you are using a non-frame-capable web client. Link to Non-frame version.
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/lib/api-javadocs.js
--
diff --git a/site/docs/2.1.2/api/java/lib/api-javadocs.js 
b/site/docs/2.1.2/api/java/lib/api-javadocs.js
new file mode 100644
index 000..ead13d6
--- /dev/null
+++ b/site/docs/2.1.2/api/java/lib/api-javadocs.js
@@ -0,0 +1,60 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* Dynamically injected post-processing code for the API docs */
+
+$(document).ready(function() {
+  addBadges(":: AlphaComponent ::", 'Alpha 
Component');
+  addBadges(":: DeveloperApi ::", 'Developer 
API');
+  addBadges(":: Experimental ::", 'Experimental');
+});
+
+function addBadges(tag, html) {
+  var tags = $(".block:contains(" + tag + ")")
+
+  // Remove identifier tags
+  tags.each(function(index) {
+var oldHTML = $(this).html();
+var newHTML = oldHTML.replace(tag, "");
+$(this).html(newHTML);
+  });
+
+  // Add html badge tags
+  tags.each(function(index) {
+if ($(this).parent().is('td.colLast')) {
+  $(this).parent().prepend(html);
+} else if ($(this).parent('li.blockList')
+  .parent('ul.blockList')
+  .parent('div.description')
+  .parent().is('div.contentContainer')) {
+  var contentContainer = $(this).parent('li.blockList')
+.parent('ul.blockList')
+.parent('div.description')
+.parent('div.contentContainer')
+  var header = contentContainer.prev('div.header');
+  if (header.length > 0) {
+

[17/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
new file mode 100644
index 000..a311bae
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfo.html
@@ -0,0 +1,247 @@
+SparkJobInfo (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Interface SparkJobInfo
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+All Known Implementing Classes:
+SparkJobInfoImpl
+
+
+
+public interface SparkJobInfo
+extends java.io.Serializable
+Exposes information about Spark Jobs.
+
+ This interface is not designed to be implemented outside of Spark.  We may 
add additional methods
+ which may break binary compatibility with outside implementations.
+
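A hedged Java sketch of how this information is usually obtained: through the status tracker on the context rather than by implementing the interface. The statusTracker(), getActiveJobIds(), and getJobInfo(int) calls are assumptions about the status-tracker API, and getJobInfo is assumed to return null for unknown or garbage-collected jobs.

import java.util.Arrays;
import org.apache.spark.SparkJobInfo;
import org.apache.spark.api.java.JavaSparkContext;

public class JobInfoExample {
  // Print status and stage ids for all currently active jobs.
  static void dumpActiveJobs(JavaSparkContext jsc) {
    for (int jobId : jsc.statusTracker().getActiveJobIds()) {
      SparkJobInfo info = jsc.statusTracker().getJobInfo(jobId);
      if (info != null) {
        System.out.println("job " + info.jobId() + ": " + info.status()
            + ", stages " + Arrays.toString(info.stageIds()));
      }
    }
  }
}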
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+jobId()
+
+
+int[]
+stageIds()
+
+
+JobExecutionStatus
+status()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobId
+int jobId()
+
+
+
+
+
+
+
+stageIds
+int[] stageIds()
+
+
+
+
+
+
+
+status
+JobExecutionStatus status()
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html 
b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
new file mode 100644
index 000..1a94cdf
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/SparkJobInfoImpl.html
@@ -0,0 +1,306 @@
+SparkJobInfoImpl (Spark 2.1.2 JavaDoc)
+org.apache.spark
+Class SparkJobInfoImpl
+
+
+
+Object
+
+
+org.apache.spark.SparkJobInfoImpl
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, SparkJobInfo
+
+
+
+public class SparkJobInfoImpl
+extends Object
+implements SparkJobInfo
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+SparkJobInfoImpl(int jobId,
+                 int[] stageIds,
+                 JobExecutionStatus status)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int
+jobId()
+
+
+int[]
+stageIds()
+
+
+JobExecutionStatus
+status()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+SparkJobInfoImpl
+public SparkJobInfoImpl(int jobId,
+                        int[] stageIds,
+                        JobExecutionStatus status)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobId
+public int jobId()
+
+Specified by:
+jobIdin
 interfaceSparkJobInfo
+
+
+
+
+
+
+
+
+stageIds
+public int[] stageIds()
+
+Specified by:

[33/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/allclasses-noframe.html
--
diff --git a/site/docs/2.1.2/api/java/allclasses-noframe.html 
b/site/docs/2.1.2/api/java/allclasses-noframe.html
new file mode 100644
index 000..413f8db
--- /dev/null
+++ b/site/docs/2.1.2/api/java/allclasses-noframe.html
@@ -0,0 +1,1138 @@
+All Classes (Spark 2.1.2 JavaDoc)
+
+
+
+
+All Classes
+
+
+AbsoluteError
+Accumulable
+AccumulableInfo
+AccumulableInfo
+AccumulableParam
+Accumulator
+AccumulatorContext
+AccumulatorParam
+AccumulatorParam.DoubleAccumulatorParam$
+AccumulatorParam.FloatAccumulatorParam$
+AccumulatorParam.IntAccumulatorParam$
+AccumulatorParam.LongAccumulatorParam$
+AccumulatorParam.StringAccumulatorParam$
+AccumulatorV2
+AFTAggregator
+AFTCostFun
+AFTSurvivalRegression
+AFTSurvivalRegressionModel
+AggregatedDialect
+AggregatingEdgeContext
+Aggregator
+Aggregator
+Algo
+AllJobsCancelled
+AllReceiverIds
+ALS
+ALS
+ALS.InBlock$
+ALS.Rating
+ALS.Rating$
+ALS.RatingBlock$
+ALSModel
+AnalysisException
+And
+AnyDataType
+ApplicationAttemptInfo
+ApplicationInfo
+ApplicationsListResource
+ApplicationStatus
+ApplyInPlace
+AreaUnderCurve
+ArrayType
+AskPermissionToCommitOutput
+AssociationRules
+AssociationRules.Rule
+AsyncRDDActions
+Attribute
+AttributeGroup
+AttributeKeys
+AttributeType
+BaseRelation
+BaseRRDD
+BatchInfo
+BernoulliCellSampler
+BernoulliSampler
+Binarizer
+BinaryAttribute
+BinaryClassificationEvaluator
+BinaryClassificationMetrics
+BinaryLogisticRegressionSummary
+BinaryLogisticRegressionTrainingSummary
+BinarySample
+BinaryType
+BinomialBounds
+BisectingKMeans
+BisectingKMeans
+BisectingKMeansModel
+BisectingKMeansModel
+BisectingKMeansModel.SaveLoadV1_0$
+BisectingKMeansSummary
+BlacklistTracker
+BLAS
+BLAS
+BlockId
+BlockManagerId
+BlockManagerMessages
+BlockManagerMessages.BlockManagerHeartbeat
+BlockManagerMessages.BlockManagerHeartbeat$
+BlockManagerMessages.GetBlockStatus
+BlockManagerMessages.GetBlockStatus$
+BlockManagerMessages.GetExecutorEndpointRef
+BlockManagerMessages.GetExecutorEndpointRef$
+BlockManagerMessages.GetLocations
+BlockManagerMessages.GetLocations$
+BlockManagerMessages.GetLocationsMultipleBlockIds
+BlockManagerMessages.GetLocationsMultipleBlockIds$
+BlockManagerMessages.GetMatchingBlockIds
+BlockManagerMessages.GetMatchingBlockIds$
+BlockManagerMessages.GetMemoryStatus$
+BlockManagerMessages.GetPeers
+BlockManagerMessages.GetPeers$
+BlockManagerMessages.GetStorageStatus$
+BlockManagerMessages.HasCachedBlocks
+BlockManagerMessages.HasCachedBlocks$
+BlockManagerMessages.RegisterBlockManager
+BlockManagerMessages.RegisterBlockManager$
+BlockManagerMessages.RemoveBlock
+BlockManagerMessages.RemoveBlock$
+BlockManagerMessages.RemoveBroadcast
+BlockManagerMessages.RemoveBroadcast$
+BlockManagerMessages.RemoveExecutor
+BlockManagerMessages.RemoveExecutor$
+BlockManagerMessages.RemoveRdd
+BlockManagerMessages.RemoveRdd$
+BlockManagerMessages.RemoveShuffle
+BlockManagerMessages.RemoveShuffle$
+BlockManagerMessages.StopBlockManagerMaster$
+BlockManagerMessages.ToBlockManagerMaster
+BlockManagerMessages.ToBlockManagerSlave
+BlockManagerMessages.TriggerThreadDump$
+BlockManagerMessages.UpdateBlockInfo
+BlockManagerMessages.UpdateBlockInfo$
+BlockMatrix
+BlockNotFoundException
+BlockReplicationPolicy
+BlockStatus
+BlockUpdatedInfo
+BloomFilter
+BloomFilter.Version
+BooleanParam
+BooleanType
+BoostingStrategy
+BoundedDouble
+BreezeUtil
+Broadcast
+BroadcastBlockId
+Broker
+BucketedRandomProjectionLSH
+BucketedRandomProjectionLSHModel
+Bucketizer
+BufferReleasingInputStream
+BytecodeUtils
+ByteType
+CalendarIntervalType
+Catalog
+CatalystScan
+CategoricalSplit
+CausedBy
+CharType
+CheckpointReader
+CheckpointState
+ChiSqSelector
+ChiSqSelector
+ChiSqSelectorModel
+ChiSqSelectorModel
+ChiSqSelectorModel.SaveLoadV1_0$
+ChiSqTest
+ChiSqTest.Method
+ChiSqTest.Method$
+ChiSqTest.NullHypothesis$
+ChiSqTestResult
+CholeskyDecomposition
+ChunkedByteBufferInputStream
+ClassificationModel
+ClassificationModel
+Classifier
+CleanAccum
+CleanBroadcast
+CleanCheckpoint
+CleanRDD
+CleanShuffle
+CleanupTask
+CleanupTaskWeakReference
+ClosureCleaner
+ClusteringSummary
+CoarseGrainedClusterMessages
+CoarseGrainedClusterMessages.AddWebUIFilter
+CoarseGrainedClusterMessages.AddWebUIFilter$
+CoarseGrainedClusterMessages.GetExecutorLossReason
+CoarseGrainedClusterMessages.GetExecutorLossReason$
+CoarseGrainedClusterMessages.KillExecutors
+CoarseGrainedClusterMessages.KillExecutors$
+CoarseGrainedClusterMessages.KillTask
+CoarseGrainedClusterMessages.KillTask$
+CoarseGrainedClusterMessages.LaunchTask
+CoarseGrainedClusterMessages.LaunchTask$
+CoarseGrainedClusterMessages.RegisterClusterManager
+CoarseGrainedClusterMessages.RegisterClusterManager$
+CoarseGrainedClusterMessages.RegisteredExecutor$
+CoarseGrainedClusterMessages.RegisterExecutor

[41/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/ncol.html
--
diff --git a/site/docs/2.1.2/api/R/ncol.html b/site/docs/2.1.2/api/R/ncol.html
new file mode 100644
index 000..8677389
--- /dev/null
+++ b/site/docs/2.1.2/api/R/ncol.html
@@ -0,0 +1,97 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Returns the number of 
columns in a SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+ncol 
{SparkR}R Documentation
+
+Returns the number of columns in a SparkDataFrame
+
+Description
+
+Returns the number of columns in a SparkDataFrame
+
+
+
+Usage
+
+
+## S4 method for signature 'SparkDataFrame'
+ncol(x)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame
+
+
+
+
+Note
+
+ncol since 1.5.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+limit, merge,
+mutate, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D ncol(df)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/negate.html
--
diff --git a/site/docs/2.1.2/api/R/negate.html 
b/site/docs/2.1.2/api/R/negate.html
new file mode 100644
index 000..cc0e06d
--- /dev/null
+++ b/site/docs/2.1.2/api/R/negate.html
@@ -0,0 +1,67 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: negate
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+negate 
{SparkR}R Documentation
+
+negate
+
+Description
+
+Unary minus, i.e. negate the expression.
+
+
+
+Usage
+
+
+negate(x)
+
+## S4 method for signature 'Column'
+negate(x)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on.
+
+
+
+
+Note
+
+negate since 1.5.0
+
+
+
+See Also
+
+Other normal_funcs: abs,
+bitwiseNOT, coalesce,
+column, expr,
+greatest, ifelse,
+isnan, least,
+lit, nanvl,
+randn, rand,
+struct, when
+
+
+
+Examples
+
+## Not run: negate(df$c)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/next_day.html
--
diff --git a/site/docs/2.1.2/api/R/next_day.html 
b/site/docs/2.1.2/api/R/next_day.html
new file mode 100644
index 000..736d7e4
--- /dev/null
+++ b/site/docs/2.1.2/api/R/next_day.html
@@ -0,0 +1,89 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: next_day
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+next_day 
{SparkR}R Documentation
+
+next_day
+
+Description
+
+Given a date column, returns the first date which is later than the value 
of the date column
+that is on the specified day of the week.
+
+
+
+Usage
+
+
+next_day(y, x)
+
+## S4 method for signature 'Column,character'
+next_day(y, x)
+
+
+
+Arguments
+
+
+y
+
+Column to compute on.
+
+x
+
+Day of the week string.
+
+
+
+
+Details
+
+For example, next_day('2015-07-27', "Sunday") returns 
2015-08-02 because that is the first
+Sunday after 2015-07-27.
+
+The day-of-week parameter is case insensitive, and accepts the first three or two
+characters of the day name: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
+
+
+
+Note
+
+next_day since 1.5.0
+
+
+
+See Also
+
+Other datetime_funcs: add_months,
+date_add, date_format,
+date_sub, datediff,
+dayofmonth, dayofyear,
+from_unixtime,
+from_utc_timestamp, 
hour,
+last_day, minute,
+months_between, month,
+quarter, second,

[46/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/columns.html
--
diff --git a/site/docs/2.1.2/api/R/columns.html 
b/site/docs/2.1.2/api/R/columns.html
new file mode 100644
index 000..3e7242a
--- /dev/null
+++ b/site/docs/2.1.2/api/R/columns.html
@@ -0,0 +1,137 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Column Names of 
SparkDataFrame
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+colnames 
{SparkR}R Documentation
+
+Column Names of SparkDataFrame
+
+Description
+
+Return all column names as a list.
+
+
+
+Usage
+
+
+colnames(x, do.NULL = TRUE, prefix = "col")
+
+colnames(x) - value
+
+columns(x)
+
+## S4 method for signature 'SparkDataFrame'
+columns(x)
+
+## S4 method for signature 'SparkDataFrame'
+names(x)
+
+## S4 replacement method for signature 'SparkDataFrame'
+names(x) - value
+
+## S4 method for signature 'SparkDataFrame'
+colnames(x)
+
+## S4 replacement method for signature 'SparkDataFrame'
+colnames(x) - value
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame.
+
+do.NULL
+
+currently not used.
+
+prefix
+
+currently not used.
+
+value
+
+a character vector. Must have the same length as the number
+of columns in the SparkDataFrame.
+
+
+
+
+Note
+
+columns since 1.4.0
+
+names since 1.5.0
+
+names- since 1.5.0
+
+colnames since 1.6.0
+
+colnames- since 1.6.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, describe,
+dim, distinct,
+dropDuplicates, dropna,
+drop, dtypes,
+except, explain,
+filter, first,
+gapplyCollect, gapply,
+getNumPartitions, group_by,
+head, histogram,
+insertInto, intersect,
+isLocal, join,
+limit, merge,
+mutate, ncol,
+nrow, persist,
+printSchema, randomSplit,
+rbind, registerTempTable,
+rename, repartition,
+sample, saveAsTable,
+schema, selectExpr,
+select, showDF,
+show, storageLevel,
+str, subset,
+take, union,
+unpersist, withColumn,
+with, write.df,
+write.jdbc, write.json,
+write.orc, write.parquet,
+write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D columns(df)
+##D colnames(df)
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/concat.html
--
diff --git a/site/docs/2.1.2/api/R/concat.html 
b/site/docs/2.1.2/api/R/concat.html
new file mode 100644
index 000..6d226c8
--- /dev/null
+++ b/site/docs/2.1.2/api/R/concat.html
@@ -0,0 +1,77 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: concat
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+concat 
{SparkR}R Documentation
+
+concat
+
+Description
+
+Concatenates multiple input string columns together into a single string 
column.
+
+
+
+Usage
+
+
+concat(x, ...)
+
+## S4 method for signature 'Column'
+concat(x, ...)
+
+
+
+Arguments
+
+
+x
+
+Column to compute on
+
+...
+
+other columns
+
+
+
+
+Note
+
+concat since 1.5.0
+
+
+
+See Also
+
+Other string_funcs: ascii,
+base64, concat_ws,
+decode, encode,
+format_number, format_string,
+initcap, instr,
+length, levenshtein,
+locate, lower,
+lpad, ltrim,
+regexp_extract,
+regexp_replace, reverse,
+rpad, rtrim,
+soundex, substring_index,
+translate, trim,
+unbase64, upper
+
+
+
+Examples
+
+## Not run: concat(df$strings, df$strings2)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/concat_ws.html
--
diff --git a/site/docs/2.1.2/api/R/concat_ws.html 
b/site/docs/2.1.2/api/R/concat_ws.html
new file mode 100644
index 000..e30c492
--- /dev/null
+++ b/site/docs/2.1.2/api/R/concat_ws.html
@@ -0,0 +1,82 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: concat_ws
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+concat_ws 
{SparkR}R Documentation
+
+concat_ws
+

[21/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html 
b/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
new file mode 100644
index 000..75e4293
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/RangePartitioner.html
@@ -0,0 +1,394 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+RangePartitioner (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
RangePartitionerK,V
+
+
+
+Object
+
+
+org.apache.spark.Partitioner
+
+
+org.apache.spark.RangePartitionerK,V
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+
+public class RangePartitioner<K,V>
+extends Partitioner
+A Partitioner that partitions sortable records by range into roughly
+ equal ranges. The ranges are determined by sampling the content of the RDD passed in.
+ 
+See Also: Serialized Form
+Note:
+  The actual number of partitions created by the RangePartitioner might not be the same
+ as the partitions parameter, in the case where the number of sampled records is less than
+ the value of partitions.
+
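Constructing a RangePartitioner directly from Java is awkward because of the implicit Ordering and ClassTag parameters, so here is a hedged sketch of the more common route, sortByKey on a pair RDD, which builds a RangePartitioner internally (`sc` is an assumed JavaSparkContext):

    import java.util.Arrays;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class RangePartitionExample {
      static void sortByRange(JavaSparkContext sc) {
        JavaPairRDD<Integer, String> pairs = sc.parallelizePairs(Arrays.asList(
            new Tuple2<>(3, "c"), new Tuple2<>(1, "a"), new Tuple2<>(2, "b")));
        // sortByKey samples the keys and range-partitions them into 2 partitions,
        // each holding a contiguous, roughly equal key range.
        JavaPairRDD<Integer, String> sorted = pairs.sortByKey(true, 2);
        System.out.println(sorted.collect()); // [(1,a), (2,b), (3,c)]
      }
    }

As the note above says, the partition count actually created can be lower than requested when few records are sampled.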
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+RangePartitioner(int partitions,
+RDD<? extends scala.Product2<K,V>> rdd,
+boolean ascending,
+scala.math.Ordering<K> evidence$1,
+scala.reflect.ClassTag<K> evidence$2)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static KObject
+determineBounds(scala.collection.mutable.ArrayBufferscala.Tuple2K,Objectcandidates,
+   intpartitions,
+   scala.math.OrderingKevidence$4,
+   scala.reflect.ClassTagKevidence$5)
+Determines the bounds for range partitioning from 
candidates with weights indicating how many
+ items each represents.
+
+
+
+boolean
+equals(Objectother)
+
+
+int
+getPartition(Objectkey)
+
+
+int
+hashCode()
+
+
+int
+numPartitions()
+
+
+static 
Kscala.Tuple2Object,scala.Tuple3Object,Object,Object[]
+sketch(RDDKrdd,
+  intsampleSizePerPartition,
+  scala.reflect.ClassTagKevidence$3)
+Sketches the input RDD via reservoir sampling on each 
partition.
+
+
+
+
+
+
+
+Methods inherited from classorg.apache.spark.Partitioner
+defaultPartitioner
+
+
+
+
+
+Methods inherited from classObject
+getClass, notify, notifyAll, toString, wait, wait, wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+RangePartitioner
+public RangePartitioner(int partitions,
+RDD<? extends scala.Product2<K,V>> rdd,
+boolean ascending,
+scala.math.Ordering<K> evidence$1,
+scala.reflect.ClassTag<K> evidence$2)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+sketch
+public static <K> scala.Tuple2<Object,scala.Tuple3<Object,Object,Object>[]> sketch(RDD<K> rdd,
+   int sampleSizePerPartition,
+   scala.reflect.ClassTag<K> evidence$3)
+Sketches the input RDD via reservoir sampling on each partition.
+ 
+Parameters:
+rdd - the input RDD to sketch
+sampleSizePerPartition - max sample size per partition
+evidence$3 - (undocumented)
+Returns: (total number of items, an array of (partitionId, number of items, sample))
+
+
+
+
+
+
+
+determineBounds
+public static <K> Object determineBounds(scala.collection.mutable.ArrayBuffer<scala.Tuple2<K,Object>> candidates,
+ int partitions,
+ scala.math.Ordering<K> evidence$4,
+ scala.reflect.ClassTag<K> evidence$5)
+Determines the bounds for range partitioning from candidates with weights indicating how many
+ items each represents. Usually this is 1 over the probability used to sample this candidate.
+ 
+Parameters:
+candidates - unordered candidates with weights
+partitions - number of partitions
+evidence$4 - (undocumented)
+evidence$5 - (undocumented)
+Returns: selected bounds
+
+
+
+
+
+
+
+numPartitions
+public int numPartitions()
+
+Specified by:
+numPartitions in class Partitioner
+
+
+
+
+
+
+
+

[16/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html 
b/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
new file mode 100644
index 000..ffb5849
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/StopMapOutputTracker.html
@@ -0,0 +1,323 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+StopMapOutputTracker (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class 
StopMapOutputTracker
+
+
+
+Object
+
+
+org.apache.spark.StopMapOutputTracker
+
+
+
+
+
+
+
+
+public class StopMapOutputTracker
+extends Object
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+StopMapOutputTracker()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Objectthat)
+
+
+abstract static boolean
+equals(Objectthat)
+
+
+abstract static int
+productArity()
+
+
+abstract static Object
+productElement(intn)
+
+
+static 
scala.collection.IteratorObject
+productIterator()
+
+
+static String
+productPrefix()
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+StopMapOutputTracker
+publicStopMapOutputTracker()
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+canEqual
+public abstract staticbooleancanEqual(Objectthat)
+
+
+
+
+
+
+
+equals
+public abstract staticbooleanequals(Objectthat)
+
+
+
+
+
+
+
+productElement
+public abstract staticObjectproductElement(intn)
+
+
+
+
+
+
+
+productArity
+public abstract staticintproductArity()
+
+
+
+
+
+
+
+productIterator
+public 
staticscala.collection.IteratorObjectproductIterator()
+
+
+
+
+
+
+
+productPrefix
+public staticStringproductPrefix()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/Success.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/Success.html 
b/site/docs/2.1.2/api/java/org/apache/spark/Success.html
new file mode 100644
index 000..830b4d8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/Success.html
@@ -0,0 +1,325 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+Success (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark
+Class Success
+
+
+
+Object
+
+
+org.apache.spark.Success
+
+
+
+
+
+
+
+
+public class Success
+extends Object
+:: DeveloperApi ::
+ Task succeeded.
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+Success()
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+abstract static boolean
+canEqual(Objectthat)
+
+
+abstract static boolean
+equals(Objectthat)
+
+
+abstract static int
+productArity()
+
+
+abstract static 

[11/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
new file mode 100644
index 000..6aefe3e
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaNewHadoopRDD.html
@@ -0,0 +1,325 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaNewHadoopRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class 
JavaNewHadoopRDDK,V
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaPairRDDK,V
+
+
+org.apache.spark.api.java.JavaNewHadoopRDDK,V
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLikescala.Tuple2K,V,JavaPairRDDK,V
+
+
+
+public class JavaNewHadoopRDD<K,V>
+extends JavaPairRDD<K,V>
+See Also: Serialized Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaNewHadoopRDD(NewHadoopRDDK,Vrdd,
+scala.reflect.ClassTagKkClassTag,
+scala.reflect.ClassTagVvClassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.reflect.ClassTagK
+kClassTag()
+
+
+RJavaRDDR
+mapPartitionsWithInputSplit(Function2org.apache.hadoop.mapreduce.InputSplit,java.util.Iteratorscala.Tuple2K,V,java.util.IteratorRf,
+   booleanpreservesPartitioning)
+Maps over a partition, providing the InputSplit that was 
used as the base of the partition.
+
+
+
+scala.reflect.ClassTagV
+vClassTag()
+
+
+
+
+
+
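A sketch of reaching mapPartitionsWithInputSplit from Java. It assumes, as the Java context's implementation suggests, that JavaSparkContext.newAPIHadoopFile actually returns a JavaNewHadoopRDD so the cast below is safe; the input path is a placeholder:

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.lib.input.FileSplit;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.spark.api.java.JavaNewHadoopRDD;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class InputSplitExample {
      // Tag every line of the input with the file (split) it came from.
      static JavaRDD<String> linesWithSource(JavaSparkContext sc) {
        JavaPairRDD<LongWritable, Text> pairs = sc.newAPIHadoopFile(
            "hdfs:///tmp/input",                          // placeholder path
            TextInputFormat.class, LongWritable.class, Text.class,
            sc.hadoopConfiguration());
        JavaNewHadoopRDD<LongWritable, Text> hadoopRdd =
            (JavaNewHadoopRDD<LongWritable, Text>) pairs; // assumed-safe cast, see note above
        return hadoopRdd.mapPartitionsWithInputSplit(
            (InputSplit split, Iterator<Tuple2<LongWritable, Text>> records) -> {
              String file = ((FileSplit) split).getPath().toString();
              List<String> out = new ArrayList<>();
              while (records.hasNext()) {
                out.add(file + ": " + records.next()._2());
              }
              return out.iterator();
            },
            true); // preservesPartitioning
      }
    }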
+Methods inherited from classorg.apache.spark.api.java.JavaPairRDD
+aggregate,
 aggregateByKey,
 aggregateByKey,
 aggregateByKey,
 cache,
 cartesian, checkpoint,
 classTag,
 coalesce,
 coalesce,
 cogroup,
 cogroup,
 cogroup,
 cogroup, cogroup,
 cogroup,
 cogroup,
 cogroup,
 cogroup,
 collect,
 collectAsMap,
 collectAsync,
 collectPartitions,
 combineByKey,
 combineByKey,
  combineByKey,
 combineByKey,
 context,
 count,
 countApprox,
 countApprox,
 countApproxDistinct,
 countApproxDistinctByKey,
 countApproxDistinctByKey,
 countApproxDistinctByKey,
 countAsync,
 countByKey,
 countByKeyApprox,
 countByKeyApprox,
 countByValue,
 countByValueApprox,
 countByValueApprox,
 distinct,
 distinct,
 filter,
 first,
 flatMap,
 flatMapToDouble, flatMapToPair,
 flatMapValues,
 fold,
 foldByKey,
 foldByKey,
 foldByKey,
 foreach,
 foreachAsync,
 foreachPartition,
 foreachPartitionAsync,
 fromJavaRDD,
 fromRDD,
 fullOuterJoin
 , fullOuterJoin,
 fullOuterJoin,
 getCheckpointFile,
 getNumPartitions,
 getStorageLevel,
 glom,
 groupBy,
 gr
 oupBy, groupByKey,
 groupByKey,
 groupByKey,
 groupWith,
 groupWith,
 groupWith,
 id, 
intersection, isCheckpointed,
 isEmpty,
 iterator,
 join,
 join,
 join,
 keyBy,
 keys, leftOuterJoin,
 leftOuterJoin,
 leftOuterJoin,
 lookup,
 map,
 mapPartitions,
 mapPartitions, mapPartitionsToDouble,
 mapPartitionsToDouble,
 mapPartitionsToPair,
 mapPartitionsToPair,
 mapPartitionsWithIndex,
 mapPartitionsWithIndex$default$2, mapToDouble,
 mapToPair,
 mapValues,
 max,
 min,
 name,
 partitionBy,
 
 partitioner, partitions,
 persist,
 pipe,
 pipe,
 pipe,
 pipe,
 pipe,
 rdd, 
reduce, reduceByKey,
 reduceByKey,
 reduceByKey,
 reduceByKeyLocally,
 repartition,
 repartitionAndSortWithinPartitions,
 repartitionAndSortWithinPartitions,
 rightOuterJoin,
 rightOuterJoin,
 rightOuterJoin,
 sample,
 sample,
 sampleByKey,
 sampleByKey,
 sampleByKeyExact,
 sampleByKeyExact,
 saveAsHadoopDataset,
 saveAsHadoopFile,
 saveAsHadoopFile,
 saveAsHadoopFile,
 saveAsNewAPIHadoopDataset,
 saveAsNewAPIHadoopFile,
 saveAsNewAPIHadoopFile,
 saveAsObjectFile,
 saveAsTextFile,
 saveAsTextFile,
 setName,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 sortByKey,
 subtract, subtract,
 subtract,
 subtractByKey,
 subtractByKey,
 subtractByKey,
 take,
 takeAsync,
 takeOrdered,
 takeOrdered,
 takeSample,
 takeSample,
 toDebugString,
 toLocalIterator,
 top,
 top,
 toRDD,
 treeAggregate,
 treeAggregate,
 treeReduce,
 treeReduce,
 union,
 unpersist,
 unpersist,
 values, 
wrapRDD,
 zip,
 zipPartitions,
 zipWithIndex,
 

[29/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/lib/jquery.js
--
diff --git a/site/docs/2.1.2/api/java/lib/jquery.js 
b/site/docs/2.1.2/api/java/lib/jquery.js
new file mode 100644
index 000..bc3fbc8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/lib/jquery.js
@@ -0,0 +1,2 @@
+/*! jQuery v1.8.2 jquery.com | jquery.org/license */

[48/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/00frame_toc.html
--
diff --git a/site/docs/2.1.2/api/R/00frame_toc.html 
b/site/docs/2.1.2/api/R/00frame_toc.html
new file mode 100644
index 000..1c62835
--- /dev/null
+++ b/site/docs/2.1.2/api/R/00frame_toc.html
@@ -0,0 +1,406 @@
+
+
+
+
+
+R Documentation of SparkR
+
+
+window.onload = function() {
+  var imgs = document.getElementsByTagName('img'), i, img;
+  for (i = 0; i < imgs.length; i++) {
+img = imgs[i];
+// center an image if it is the only element of its parent
+if (img.parentElement.childElementCount === 1)
+  img.parentElement.style.textAlign = 'center';
+  }
+};
+
+
+
+
+
+
+
+* {
+   font-family: "Trebuchet MS", "Lucida Grande", "Lucida Sans Unicode", 
"Lucida Sans", Arial, sans-serif;
+   font-size: 14px;
+}
+body {
+  padding: 0 5px; 
+  margin: 0 auto; 
+  width: 80%;
+  max-width: 60em; /* 960px */
+}
+
+h1, h2, h3, h4, h5, h6 {
+   color: #666;
+}
+h1, h2 {
+   text-align: center;
+}
+h1 {
+   font-size: x-large;
+}
+h2, h3 {
+   font-size: large;
+}
+h4, h6 {
+   font-style: italic;
+}
+h3 {
+   border-left: solid 5px #ddd;
+   padding-left: 5px;
+   font-variant: small-caps;
+}
+
+p img {
+   display: block;
+   margin: auto;
+}
+
+span, code, pre {
+   font-family: Monaco, "Lucida Console", "Courier New", Courier, 
monospace;
+}
+span.acronym {}
+span.env {
+   font-style: italic;
+}
+span.file {}
+span.option {}
+span.pkg {
+   font-weight: bold;
+}
+span.samp{}
+
+dt, p code {
+   background-color: #F7F7F7;
+}
+
+
+
+
+
+
+
+
+SparkR
+
+
+AFTSurvivalRegressionModel-class
+ALSModel-class
+GBTClassificationModel-class
+GBTRegressionModel-class
+GaussianMixtureModel-class
+GeneralizedLinearRegressionModel-class
+GroupedData
+IsotonicRegressionModel-class
+KMeansModel-class
+KSTest-class
+LDAModel-class
+LogisticRegressionModel-class
+MultilayerPerceptronClassificationModel-class
+NaiveBayesModel-class
+RandomForestClassificationModel-class
+RandomForestRegressionModel-class
+SparkDataFrame
+WindowSpec
+abs
+acos
+add_months
+alias
+approxCountDistinct
+approxQuantile
+arrange
+array_contains
+as.data.frame
+ascii
+asin
+atan
+atan2
+attach
+avg
+base64
+between
+bin
+bitwiseNOT
+bround
+cache
+cacheTable
+cancelJobGroup
+cast
+cbrt
+ceil
+clearCache
+clearJobGroup
+coalesce
+collect
+coltypes
+column
+columnfunctions
+columns
+concat
+concat_ws
+conv
+corr
+cos
+cosh
+count
+countDistinct
+cov
+covar_pop
+crc32
+createDataFrame
+createExternalTable
+createOrReplaceTempView
+crossJoin
+crosstab
+cume_dist
+dapply
+dapplyCollect
+date_add
+date_format
+date_sub
+datediff
+dayofmonth
+dayofyear
+decode
+dense_rank
+dim
+distinct
+drop
+dropDuplicates
+dropTempTable-deprecated
+dropTempView
+dtypes
+encode
+endsWith
+except
+exp
+explain
+explode
+expm1
+expr
+factorial
+filter
+first
+fitted
+floor
+format_number
+format_string
+freqItems
+from_unixtime
+fromutctimestamp
+gapply
+gapplyCollect
+generateAliasesForIntersectedCols
+getNumPartitions
+glm
+greatest
+groupBy
+hash
+hashCode
+head
+hex
+histogram
+hour
+hypot
+ifelse
+initcap
+insertInto
+install.spark
+instr
+intersect
+is.nan
+isLocal
+join
+kurtosis
+lag
+last
+last_day
+lead
+least
+length
+levenshtein
+limit
+lit
+locate
+log
+log10
+log1p
+log2
+lower
+lpad
+ltrim
+match
+max
+md5
+mean
+merge
+min
+minute
+monotonicallyincreasingid
+month
+months_between
+mutate
+nafunctions
+nanvl
+ncol
+negate
+next_day
+nrow
+ntile
+orderBy
+otherwise
+over
+partitionBy
+percent_rank
+persist
+pivot
+pmod
+posexplode
+predict
+print.jobj
+print.structField
+print.structType
+printSchema
+quarter
+rand
+randn
+randomSplit
+rangeBetween
+rank
+rbind
+read.df
+read.jdbc
+read.json
+read.ml
+read.orc
+read.parquet
+read.text
+regexp_extract
+regexp_replace
+registerTempTable-deprecated
+rename
+repartition
+reverse
+rint
+round
+row_number
+rowsBetween
+rpad
+rtrim
+sample
+sampleBy
+saveAsTable
+schema
+sd
+second
+select
+selectExpr
+setJobGroup
+setLogLevel
+sha1
+sha2
+shiftLeft
+shiftRight
+shiftRightUnsigned
+show
+showDF
+sign
+sin
+sinh
+size
+skewness
+sort_array
+soundex
+spark.addFile
+spark.als
+spark.gaussianMixture
+spark.gbt
+spark.getSparkFiles
+spark.getSparkFilesRootDirectory
+spark.glm
+spark.isoreg
+spark.kmeans
+spark.kstest
+spark.lapply
+spark.lda
+spark.logit
+spark.mlp
+spark.naiveBayes
+spark.randomForest
+spark.survreg
+sparkR.callJMethod
+sparkR.callJStatic
+sparkR.conf
+sparkR.init-deprecated
+sparkR.newJObject
+sparkR.session
+sparkR.session.stop
+sparkR.uiWebUrl
+sparkR.version
+sparkRHive.init-deprecated
+sparkRSQL.init-deprecated
+sparkpartitionid
+sql
+sqrt
+startsWith
+stddev_pop
+stddev_samp
+storageLevel
+str
+struct
+structField
+structType
+subset
+substr
+substring_index
+sum
+sumDistinct
+summarize
+summary
+tableNames
+tableToDF
+tables
+take
+tan
+tanh

[09/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
new file mode 100644
index 000..a7004a5
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaRDD.html
@@ -0,0 +1,1854 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class JavaRDDT
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaRDDT
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLikeT,JavaRDDT
+
+
+
+public class JavaRDDT
+extends Object
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaRDD(RDDTrdd,
+   scala.reflect.ClassTagTclassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+static UU
+aggregate(UzeroValue,
+ Function2U,T,UseqOp,
+ Function2U,U,UcombOp)
+
+
+JavaRDDT
+cache()
+Persist this RDD with the default storage level 
(MEMORY_ONLY).
+
+
+
+static UJavaPairRDDT,U
+cartesian(JavaRDDLikeU,?other)
+
+
+static void
+checkpoint()
+
+
+scala.reflect.ClassTagT
+classTag()
+
+
+JavaRDDT
+coalesce(intnumPartitions)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+JavaRDDT
+coalesce(intnumPartitions,
+booleanshuffle)
+Return a new RDD that is reduced into 
numPartitions partitions.
+
+
+
+static java.util.ListT
+collect()
+
+
+static JavaFutureActionjava.util.ListT
+collectAsync()
+
+
+static java.util.ListT[]
+collectPartitions(int[]partitionIds)
+
+
+static SparkContext
+context()
+
+
+static long
+count()
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout)
+
+
+static PartialResultBoundedDouble
+countApprox(longtimeout,
+   doubleconfidence)
+
+
+static long
+countApproxDistinct(doublerelativeSD)
+
+
+static JavaFutureActionLong
+countAsync()
+
+
+static java.util.MapT,Long
+countByValue()
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout)
+
+
+static PartialResultjava.util.MapT,BoundedDouble
+countByValueApprox(longtimeout,
+  doubleconfidence)
+
+
+JavaRDDT
+distinct()
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaRDDT
+distinct(intnumPartitions)
+Return a new RDD containing the distinct elements in this 
RDD.
+
+
+
+JavaRDDT
+filter(FunctionT,Booleanf)
+Return a new RDD containing only the elements that satisfy 
a predicate.
+
+
+
+static T
+first()
+
+
+static UJavaRDDU
+flatMap(FlatMapFunctionT,Uf)
+
+
+static JavaDoubleRDD
+flatMapToDouble(DoubleFlatMapFunctionTf)
+
+
+static K2,V2JavaPairRDDK2,V2
+flatMapToPair(PairFlatMapFunctionT,K2,V2f)
+
+
+static T
+fold(TzeroValue,
+Function2T,T,Tf)
+
+
+static void
+foreach(VoidFunctionTf)
+
+
+static JavaFutureActionVoid
+foreachAsync(VoidFunctionTf)
+
+
+static void
+foreachPartition(VoidFunctionjava.util.IteratorTf)
+
+
+static JavaFutureActionVoid
+foreachPartitionAsync(VoidFunctionjava.util.IteratorTf)
+
+
+static TJavaRDDT
+fromRDD(RDDTrdd,
+   scala.reflect.ClassTagTevidence$1)
+
+
+static OptionalString
+getCheckpointFile()
+
+
+static int
+getNumPartitions()
+
+
+static StorageLevel
+getStorageLevel()
+
+
+static JavaRDDjava.util.ListT
+glom()
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf)
+
+
+static UJavaPairRDDU,IterableT
+groupBy(FunctionT,Uf,
+   intnumPartitions)
+
+
+static int
+id()
+
+
+JavaRDDT
+intersection(JavaRDDTother)
+Return the intersection of this RDD and another one.
+
+
+
+static boolean
+isCheckpointed()
+
+
+static boolean
+isEmpty()
+
+
+static java.util.IteratorT
+iterator(Partitionsplit,
+TaskContexttaskContext)
+
+
+static UJavaPairRDDU,T
+keyBy(FunctionT,Uf)
+
+
+static RJavaRDDR
+map(FunctionT,Rf)
+
+
+static UJavaRDDU
+mapPartitions(FlatMapFunctionjava.util.IteratorT,Uf)
+
+
+static UJavaRDDU
+mapPartitions(FlatMapFunctionjava.util.IteratorT,Uf,
+ booleanpreservesPartitioning)
+
+
+static JavaDoubleRDD

[06/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
new file mode 100644
index 000..a54f037
--- /dev/null
+++ 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaSparkStatusTracker.html
@@ -0,0 +1,323 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+JavaSparkStatusTracker (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class 
JavaSparkStatusTracker
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaSparkStatusTracker
+
+
+
+
+
+
+
+
+public class JavaSparkStatusTracker
+extends Object
+Low-level status reporting APIs for monitoring job and 
stage progress.
+ 
+ These APIs intentionally provide very weak consistency semantics; consumers 
of these APIs should
+ be prepared to handle empty / missing information.  For example, a job's 
stage ids may be known
+ but the status API may not have any information about the details of those 
stages, so
+ getStageInfo could potentially return null for a 
valid stage id.
+ 
+ To limit memory usage, these APIs only provide information on recent jobs / 
stages.  These APIs
+ will provide information for the last spark.ui.retainedStages 
stages and
+ spark.ui.retainedJobs jobs.
+ 
+Note:
+  This class's constructor should be considered private and may be subject 
to change.
+
+
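A hedged Java sketch of the intended usage: run work under a job group on one thread and poll this tracker from another, treating every lookup as optional because of the weak-consistency caveats above (`sc`, the group name, and the poll interval are illustrative only):

    import org.apache.spark.SparkJobInfo;
    import org.apache.spark.SparkStageInfo;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.JavaSparkStatusTracker;

    public class StatusPollingExample {
      static void pollGroup(JavaSparkContext sc, String group) throws InterruptedException {
        // The job-submitting thread would have called sc.setJobGroup(group, "...") first.
        JavaSparkStatusTracker tracker = sc.statusTracker();
        for (int i = 0; i < 10; i++) {
          for (int jobId : tracker.getJobIdsForGroup(group)) {
            SparkJobInfo job = tracker.getJobInfo(jobId);
            if (job == null) continue;        // info may be missing or already evicted
            for (int stageId : job.stageIds()) {
              SparkStageInfo stage = tracker.getStageInfo(stageId);
              if (stage == null) continue;    // same weak-consistency caveat
              System.out.println("job " + jobId + " stage " + stageId + ": "
                  + stage.numCompletedTasks() + "/" + stage.numTasks() + " tasks done");
            }
          }
          Thread.sleep(1000);                 // poll once per second
        }
      }
    }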
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+int[]
+getActiveJobIds()
+Returns an array containing the ids of all active 
jobs.
+
+
+
+int[]
+getActiveStageIds()
+Returns an array containing the ids of all active 
stages.
+
+
+
+int[]
+getJobIdsForGroup(StringjobGroup)
+Return a list of all known jobs in a particular job 
group.
+
+
+
+SparkJobInfo
+getJobInfo(intjobId)
+Returns job information, or null if the job 
info could not be found or was garbage collected.
+
+
+
+SparkStageInfo
+getStageInfo(intstageId)
+Returns stage information, or null if the 
stage info could not be found or was
+ garbage collected.
+
+
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+getJobIdsForGroup
+public int[] getJobIdsForGroup(String jobGroup)
+Return a list of all known jobs in a particular job group.  
If jobGroup is null, then
+ returns all known jobs that are not associated with a job group.
+ 
+ The returned list may contain running, failed, and completed jobs, and may 
vary across
+ invocations of this method.  This method does not guarantee the order of the 
elements in
+ its result.
+Parameters:jobGroup 
- (undocumented)
+Returns:(undocumented)
+
+
+
+
+
+
+
+getActiveStageIds
+public int[] getActiveStageIds()
+Returns an array containing the ids of all active stages.
+ 
+ This method does not guarantee the order of the elements in its result.
+Returns:(undocumented)
+
+
+
+
+
+
+
+getActiveJobIds
+public int[] getActiveJobIds()
+Returns an array containing the ids of all active jobs.
+ 
+ This method does not guarantee the order of the elements in its result.
+Returns:(undocumented)
+
+
+
+
+
+
+
+getJobInfo
+public SparkJobInfo getJobInfo(int jobId)
+Returns job information, or null if the job 
info could not be found or was garbage collected.
+Parameters:jobId - 
(undocumented)
+Returns:(undocumented)
+
+
+
+
+
+
+
+getStageInfo
+public SparkStageInfo getStageInfo(int stageId)
+Returns stage information, or null if the 
stage info could not be found or was
+ garbage collected.
+Parameters:stageId - 
(undocumented)
+Returns:(undocumented)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev 
Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+

[37/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/sparkR.conf.html
--
diff --git a/site/docs/2.1.2/api/R/sparkR.conf.html 
b/site/docs/2.1.2/api/R/sparkR.conf.html
new file mode 100644
index 000..322fbfd
--- /dev/null
+++ b/site/docs/2.1.2/api/R/sparkR.conf.html
@@ -0,0 +1,68 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Get Runtime Config from 
the current active SparkSession
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+sparkR.conf {SparkR}R 
Documentation
+
+Get Runtime Config from the current active SparkSession
+
+Description
+
+Get Runtime Config from the current active SparkSession.
+To change SparkSession Runtime Config, please see 
sparkR.session().
+
+
+
+Usage
+
+
+sparkR.conf(key, defaultValue)
+
+
+
+Arguments
+
+
+key
+
+(optional) The key of the config to get, if omitted, all config is 
returned
+
+defaultValue
+
+(optional) The default value of the config to return if they config is not
+set, if omitted, the call fails if the config key is not set
+
+
+
+
+Value
+
+a list of config values with keys as their names
+
+
+
+Note
+
+sparkR.conf since 2.0.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D allConfigs <- sparkR.conf()
+##D masterValue <- unlist(sparkR.conf("spark.master"))
+##D namedConfig <- sparkR.conf("spark.executor.memory", "0g")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/sparkR.init-deprecated.html
--
diff --git a/site/docs/2.1.2/api/R/sparkR.init-deprecated.html 
b/site/docs/2.1.2/api/R/sparkR.init-deprecated.html
new file mode 100644
index 000..d86da83
--- /dev/null
+++ b/site/docs/2.1.2/api/R/sparkR.init-deprecated.html
@@ -0,0 +1,92 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: (Deprecated) Initialize a 
new Spark Context
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+sparkR.init {SparkR}R 
Documentation
+
+(Deprecated) Initialize a new Spark Context
+
+Description
+
+This function initializes a new SparkContext.
+
+
+
+Usage
+
+
+sparkR.init(master = "", appName = "SparkR",
+  sparkHome = Sys.getenv("SPARK_HOME"), sparkEnvir = list(),
+  sparkExecutorEnv = list(), sparkJars = "", sparkPackages = "")
+
+
+
+Arguments
+
+
+master
+
+The Spark master URL
+
+appName
+
+Application name to register with cluster manager
+
+sparkHome
+
+Spark Home directory
+
+sparkEnvir
+
+Named list of environment variables to set on worker nodes
+
+sparkExecutorEnv
+
+Named list of environment variables to be used when launching executors
+
+sparkJars
+
+Character vector of jar files to pass to the worker nodes
+
+sparkPackages
+
+Character vector of package coordinates
+
+
+
+
+Note
+
+sparkR.init since 1.4.0
+
+
+
+See Also
+
+sparkR.session
+
+
+
+Examples
+
+## Not run: 
+##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark")
+##D sc <- sparkR.init("local[2]", "SparkR", "/home/spark",
+##D  list(spark.executor.memory="1g"))
+##D sc <- sparkR.init("yarn-client", "SparkR", "/home/spark",
+##D  list(spark.executor.memory="4g"),
+##D  list(LD_LIBRARY_PATH="/directory of JVM libraries (libjvm.so) on workers/"),
+##D  c("one.jar", "two.jar", "three.jar"),
+##D  c("com.databricks:spark-avro_2.10:2.0.1"))
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/sparkR.newJObject.html
--
diff --git a/site/docs/2.1.2/api/R/sparkR.newJObject.html 
b/site/docs/2.1.2/api/R/sparkR.newJObject.html
new file mode 100644
index 000..8ecda40
--- /dev/null
+++ b/site/docs/2.1.2/api/R/sparkR.newJObject.html
@@ -0,0 +1,86 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Create Java Objects
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+sparkR.newJObject {SparkR}R Documentation
+
+Create Java Objects
+
+Description
+
+Create a new Java object in 

[02/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
--
diff --git a/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
new file mode 100644
index 000..29034e8
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/r/BaseRRDD.html
@@ -0,0 +1,330 @@
+http://www.w3.org/TR/html4/loose.dtd;>
+
+
+
+
+BaseRRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+JavaScript is disabled on your browser.
+
+
+
+
+
+
+
+
+Overview
+Package
+Class
+Tree
+Deprecated
+Index
+Help
+
+
+
+
+Prev Class
+Next 
Class
+
+
+Frames
+No Frames
+
+
+All Classes
+
+
+
+
+
+
+
+Summary:
+Nested|
+Field|
+Constr|
+Method
+
+
+Detail:
+Field|
+Constr|
+Method
+
+
+
+
+
+
+
+
+org.apache.spark.api.r
+Class BaseRRDDT,U
+
+
+
+Object
+
+
+org.apache.spark.rdd.RDDU
+
+
+org.apache.spark.api.r.BaseRRDDT,U
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable
+
+
+Direct Known Subclasses:
+PairwiseRRDD, RRDD, StringRRDD
+
+
+
+public abstract class BaseRRDDT,U
+extends RDDU
+See Also:Serialized
 Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+BaseRRDD(RDDTparent,
+intnumPartitions,
+byte[]func,
+Stringdeserializer,
+Stringserializer,
+byte[]packageNames,
+BroadcastObject[]broadcastVars,
+scala.reflect.ClassTagTevidence$1,
+scala.reflect.ClassTagUevidence$2)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.collection.IteratorU
+compute(Partitionpartition,
+   TaskContextcontext)
+:: DeveloperApi ::
+ Implemented by subclasses to compute a given partition.
+
+
+
+Partition[]
+getPartitions()
+Implemented by subclasses to return the set of partitions 
in this RDD.
+
+
+
+
+
+
+
+Methods inherited from classorg.apache.spark.rdd.RDD
+aggregate,
 cache, cartesian,
 checkpoint,
 coalesce,
 collect, 
collect,
 context, 
count, countApprox, countApproxDistinct,
 countApproxDistinct,
 countByValue,
 countByValueApprox,
 dependencies,
 distinct, distinct,
 doubleRDDToDoubleRDDFunctions,
 
 filter, first, flatMap,
 fold,
 foreach,
 foreachPartition,
 getCheckpointFile,
 getNumPartitions,
 getStorageLevel,
 glom, groupBy,
 groupBy,
 groupBy,
 id, intersection,
 intersection,
 intersection,
 isCheckpointed,
 isEmpty, 
iterator, keyBy,
 localCheckpoint,
 map,
 mapPartitions,
 mapPartitionsWithIndex,
 max,
 min,
 name, numericRDDToDoubleRDDFunctions, 
partitioner,
 partitions,
 persist, 
persist,
 pipe,
 pipe,
 pipe,
 preferredLocations,
 randomSplit, rddToAsyncRDDActions,
 rddToOrderedRDDFunctions,
 rddToPairRDDFunctions,
 rddToSequenceFileRDDFunctions,
 reduce,
 repartition, sample,
 saveAsObjectFile,
 saveAsTextFile,
 saveAsTextFile,
 setName,
 sortBy,
 sparkContext,
 subtract,
 subtract, subtract,
 take, takeOrdered,
 takeSample,
 toDebugString,
 toJavaRDD, 
toLocalIterator,
 top,
 toString, treeAggregate,
 treeReduce,
 union,
 unpersist,
 zip,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipPartitions,
 zipWithIndex,
 zipWithUniqueId
+
+
+
+
+
+Methods inherited from classObject
+equals, getClass, hashCode, notify, notifyAll, wait, wait, 
wait
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Constructor Detail
+
+
+
+
+
+BaseRRDD
+publicBaseRRDD(RDDTparent,
+intnumPartitions,
+byte[]func,
+Stringdeserializer,
+Stringserializer,
+byte[]packageNames,
+BroadcastObject[]broadcastVars,
+scala.reflect.ClassTagTevidence$1,
+scala.reflect.ClassTagUevidence$2)
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+getPartitions
+publicPartition[]getPartitions()
+Description copied from class:RDD
+Implemented by subclasses to return the set of partitions 
in this RDD. This method will only
+ be called once, so it is safe to implement a time-consuming computation in it.
+ 
+ The partitions in this array must satisfy the following property:
+   rdd.partitions.zipWithIndex.forall { case (partition, index) => partition.index == index }
+Returns:(undocumented)
+
+
+
+
+
+
+
+compute
+publicscala.collection.IteratorUcompute(Partitionpartition,
+   TaskContextcontext)
+Description copied from class:RDD
+:: DeveloperApi ::
+ Implemented by subclasses to compute a given partition.
+
+Specified by:
+computein
 classRDDU
+Parameters:partition - 
(undocumented)context 

[36/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/summary.html
--
diff --git a/site/docs/2.1.2/api/R/summary.html 
b/site/docs/2.1.2/api/R/summary.html
new file mode 100644
index 000..d074b00
--- /dev/null
+++ b/site/docs/2.1.2/api/R/summary.html
@@ -0,0 +1,132 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: summary
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+describe 
{SparkR}R Documentation
+
+summary
+
+Description
+
+Computes statistics for numeric and string columns.
+If no columns are given, this function computes statistics for all numerical 
or string columns.
+
+
+
+Usage
+
+
+describe(x, col, ...)
+
+summary(object, ...)
+
+## S4 method for signature 'SparkDataFrame,character'
+describe(x, col, ...)
+
+## S4 method for signature 'SparkDataFrame,ANY'
+describe(x)
+
+## S4 method for signature 'SparkDataFrame'
+summary(object, ...)
+
+
+
+Arguments
+
+
+x
+
+a SparkDataFrame to be computed.
+
+col
+
+a string of name.
+
+...
+
+additional expressions.
+
+object
+
+a SparkDataFrame to be summarized.
+
+
+
+
+Value
+
+A SparkDataFrame.
+
+
+
+Note
+
+describe(SparkDataFrame, character) since 1.4.0
+
+describe(SparkDataFrame) since 1.4.0
+
+summary(SparkDataFrame) since 1.5.0
+
+
+
+See Also
+
+Other SparkDataFrame functions: SparkDataFrame-class,
+agg, arrange,
+as.data.frame, attach,
+cache, coalesce,
+collect, colnames,
+coltypes,
+createOrReplaceTempView,
+crossJoin, dapplyCollect,
+dapply, dim,
+distinct, dropDuplicates,
+dropna, drop,
+dtypes, except,
+explain, filter,
+first, gapplyCollect,
+gapply, getNumPartitions,
+group_by, head,
+histogram, insertInto,
+intersect, isLocal,
+join, limit,
+merge, mutate,
+ncol, nrow,
+persist, printSchema,
+randomSplit, rbind,
+registerTempTable, rename,
+repartition, sample,
+saveAsTable, schema,
+selectExpr, select,
+showDF, show,
+storageLevel, str,
+subset, take,
+union, unpersist,
+withColumn, with,
+write.df, write.jdbc,
+write.json, write.orc,
+write.parquet, write.text
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D path <- "path/to/file.json"
+##D df <- read.json(path)
+##D describe(df)
+##D describe(df, "col1")
+##D describe(df, "col1", "col2")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/tableNames.html
--
diff --git a/site/docs/2.1.2/api/R/tableNames.html 
b/site/docs/2.1.2/api/R/tableNames.html
new file mode 100644
index 000..cf05272
--- /dev/null
+++ b/site/docs/2.1.2/api/R/tableNames.html
@@ -0,0 +1,61 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Table Names
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+tableNames 
{SparkR}R Documentation
+
+Table Names
+
+Description
+
+Returns the names of tables in the given database as an array.
+
+
+
+Usage
+
+
+## Default S3 method:
+tableNames(databaseName = NULL)
+
+
+
+Arguments
+
+
+databaseName
+
+name of the database
+
+
+
+
+Value
+
+a list of table names
+
+
+
+Note
+
+tableNames since 1.4.0
+
+
+
+Examples
+
+## Not run: 
+##D sparkR.session()
+##D tableNames("hive")
+## End(Not run)
+
+
+
+[Package SparkR version 2.1.2 
Index]
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/R/tableToDF.html
--
diff --git a/site/docs/2.1.2/api/R/tableToDF.html 
b/site/docs/2.1.2/api/R/tableToDF.html
new file mode 100644
index 000..3164dae
--- /dev/null
+++ b/site/docs/2.1.2/api/R/tableToDF.html
@@ -0,0 +1,64 @@
+http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd;>http://www.w3.org/1999/xhtml;>R: Create a SparkDataFrame 
from a SparkSQL Table
+
+
+
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/styles/github.min.css;>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/highlight.min.js";>
+https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.3/languages/r.min.js";>
+hljs.initHighlightingOnLoad();
+
+
+tableToDF 
{SparkR}R Documentation
+
+Create a SparkDataFrame from a SparkSQL Table
+
+Description
+
+Returns the specified Table as a SparkDataFrame.  The Table must have 
already been registered
+in the SparkSession.
+
+
+
+Usage
+
+
+tableToDF(tableName)
+
+
+
+Arguments
+
+
+tableName
+
+The SparkSQL Table to 

[04/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
new file mode 100644
index 000..35cf9d7
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function0.html
@@ -0,0 +1,217 @@
+
+
+
+
+Function0 (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface Function0<R>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface Function0<R>
+extends java.io.Serializable
+A zero-argument function that returns an R.
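
(Editorial note, not part of the generated page: a minimal usage sketch of Function0. It is the Java-friendly factory interface accepted by, for example, JavaStreamingContext.getOrCreate; the checkpoint path, batch interval, and local master below are illustrative assumptions only.)

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.Function0;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class Function0Example {
  public static void main(String[] args) {
    // Zero-argument factory, invoked only if no checkpoint data exists at the path.
    Function0<JavaStreamingContext> creatingFunc = new Function0<JavaStreamingContext>() {
      @Override
      public JavaStreamingContext call() throws Exception {
        SparkConf conf = new SparkConf().setAppName("Function0Example").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));
        jssc.checkpoint("/tmp/checkpoint");   // illustrative checkpoint directory
        return jssc;
      }
    };

    // Recreates the context from the checkpoint, or else calls creatingFunc.call().
    JavaStreamingContext jssc = JavaStreamingContext.getOrCreate("/tmp/checkpoint", creatingFunc);
    jssc.stop();
  }
}
```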
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R
+call()
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+call
+R call()
+   throws Exception
+Throws:
+Exception
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
new file mode 100644
index 000..ef10f45
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function2.html
@@ -0,0 +1,221 @@
+
+
+
+
+Function2 (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark.api.java.function
+Interface Function2<T1,T2,R>
+
+
+
+
+
+
+All Superinterfaces:
+java.io.Serializable
+
+
+
+public interface Function2<T1,T2,R>
+extends java.io.Serializable
+A two-argument function that takes arguments of type T1 and 
T2 and returns an R.
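
(Editorial note, not part of the generated page: a minimal sketch of passing a Function2 to JavaRDD.reduce, where T1 = T2 = R = Integer. The app name, local master, and sample data are assumptions for illustration only.)

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function2;

public class Function2Example {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("Function2Example").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<Integer> nums = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

    // Combine two partial sums into one; used as the reduce function.
    Function2<Integer, Integer, Integer> sum = new Function2<Integer, Integer, Integer>() {
      @Override
      public Integer call(Integer a, Integer b) throws Exception {
        return a + b;
      }
    };

    System.out.println(nums.reduce(sum));  // prints 15
    sc.stop();
  }
}
```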
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+R
+call(T1 v1, T2 v2)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+
+
+call
+R call(T1 v1,
+       T2 v2)
+   throws Exception
+Throws:
+Exception
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function3.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/function/Function3.html 

[50/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/README.md
--
diff --git a/site/docs/2.1.2/README.md b/site/docs/2.1.2/README.md
new file mode 100644
index 000..ffd3b57
--- /dev/null
+++ b/site/docs/2.1.2/README.md
@@ -0,0 +1,72 @@
+Welcome to the Spark documentation!
+
+This readme will walk you through navigating and building the Spark 
documentation, which is included
+here with the Spark source code. You can also find documentation specific to 
release versions of
+Spark at http://spark.apache.org/documentation.html.
+
+Read on to learn more about viewing documentation in plain text (i.e., 
markdown) or building the
+documentation yourself. Why build it yourself? So that you have the docs that 
correspond to
+whichever version of Spark you currently have checked out of revision control.
+
+## Prerequisites
+The Spark documentation build uses a number of tools to build HTML docs and 
API docs in Scala,
+Python and R.
+
+You need to have 
[Ruby](https://www.ruby-lang.org/en/documentation/installation/) and
+[Python](https://docs.python.org/2/using/unix.html#getting-and-installing-the-latest-version-of-python)
+installed. Also install the following libraries:
+```sh
+$ sudo gem install jekyll jekyll-redirect-from pygments.rb
+$ sudo pip install Pygments
+# Following is needed only for generating API docs
+$ sudo pip install sphinx pypandoc
+$ sudo Rscript -e 'install.packages(c("knitr", "devtools", "roxygen2", 
"testthat", "rmarkdown"), repos="http://cran.stat.ucla.edu/;)'
+```
+(Note: If you are on a system with both Ruby 1.9 and Ruby 2.0 you may need to 
replace gem with gem2.0)
+
+## Generating the Documentation HTML
+
+We include the Spark documentation as part of the source (as opposed to using 
a hosted wiki, such as
+the github wiki, as the definitive documentation) to enable the documentation 
to evolve along with
+the source code and be captured by revision control (currently git). This way 
the code automatically
+includes the version of the documentation that is relevant regardless of which 
version or release
+you have checked out or downloaded.
+
+In this directory you will find text files formatted using Markdown, with an 
".md" suffix. You can
+read those text files directly if you want. Start with index.md.
+
+Execute `jekyll build` from the `docs/` directory to compile the site. 
Compiling the site with
+Jekyll will create a directory called `_site` containing index.html as well as 
the rest of the
+compiled files.
+
+$ cd docs
+$ jekyll build
+
+You can modify the default Jekyll build as follows:
+```sh
+# Skip generating API docs (which takes a while)
+$ SKIP_API=1 jekyll build
+
+# Serve content locally on port 4000
+$ jekyll serve --watch
+
+# Build the site with extra features used on the live page
+$ PRODUCTION=1 jekyll build
+```
+
+## API Docs (Scaladoc, Sphinx, roxygen2)
+
+You can build just the Spark scaladoc by running `build/sbt unidoc` from the 
SPARK_PROJECT_ROOT directory.
+
+Similarly, you can build just the PySpark docs by running `make html` from the
+SPARK_PROJECT_ROOT/python/docs directory. Documentation is only generated for 
classes that are listed as
+public in `__init__.py`. The SparkR docs can be built by running 
SPARK_PROJECT_ROOT/R/create-docs.sh.
+
+When you run `jekyll` in the `docs` directory, it will also copy over the 
scaladoc for the various
+Spark subprojects into the `docs` directory (and then also into the `_site` 
directory). We use a
+jekyll plugin to run `build/sbt unidoc` before building the site so if you 
haven't run it (recently) it
+may take some time as it generates all of the scaladoc.  The jekyll plugin 
also generates the
+PySpark docs using [Sphinx](http://sphinx-doc.org/).
+
+NOTE: To skip the step of building and copying over the Scala, Python, R API 
docs, run `SKIP_API=1
+jekyll`.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api.html
--
diff --git a/site/docs/2.1.2/api.html b/site/docs/2.1.2/api.html
new file mode 100644
index 000..2496122
--- /dev/null
+++ b/site/docs/2.1.2/api.html
@@ -0,0 +1,178 @@
+
+
+
+
+
+  
+
+
+
+Spark API Documentation - Spark 2.1.2 Documentation
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

[12/51] [partial] spark-website git commit: Add 2.1.2 docs

2017-10-17 Thread holden
http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
new file mode 100644
index 000..073fb95
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaFutureAction.html
@@ -0,0 +1,226 @@
+
+
+
+
+JavaFutureAction (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Interface JavaFutureAction<T>
+
+
+
+
+
+
+All Superinterfaces:
+java.util.concurrent.Future<T>
+
+
+
+public interface JavaFutureAction<T>
+extends java.util.concurrent.Future<T>
+
+
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+java.util.List<Integer>
+jobIds()
+Returns the job IDs run by the underlying async 
operation.
+
+
+
+
+
+
+
+Methods inherited from interface java.util.concurrent.Future
+cancel, get, get, isCancelled, isDone
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+Method Detail
+
+
+
+
+
+jobIds
+java.util.List<Integer> jobIds()
+Returns the job IDs run by the underlying async operation.
+
+ This returns the current snapshot of the job list. Certain operations may run 
multiple
+ jobs, so multiple calls to this method may return different lists.
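
(Editorial note, not part of the generated page: a minimal sketch of obtaining a JavaFutureAction from countAsync() and inspecting jobIds(); the app name, local master, and sample data are assumptions for illustration only.)

```java
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaFutureAction;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class JavaFutureActionExample {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("JavaFutureActionExample").setMaster("local[2]");
    JavaSparkContext sc = new JavaSparkContext(conf);

    JavaRDD<Integer> nums = sc.parallelize(Arrays.asList(1, 2, 3, 4));

    // countAsync() returns a JavaFutureAction<Long>: a java.util.concurrent.Future
    // plus jobIds() for the Spark jobs backing the asynchronous computation.
    JavaFutureAction<Long> future = nums.countAsync();

    System.out.println("count   = " + future.get());     // blocks until the job finishes
    System.out.println("job ids = " + future.jobIds());  // snapshot of the backing job IDs

    sc.stop();
  }
}
```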
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+

http://git-wip-us.apache.org/repos/asf/spark-website/blob/a6d9cbde/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
--
diff --git 
a/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html 
b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
new file mode 100644
index 000..60fc727
--- /dev/null
+++ b/site/docs/2.1.2/api/java/org/apache/spark/api/java/JavaHadoopRDD.html
@@ -0,0 +1,325 @@
+
+
+
+
+JavaHadoopRDD (Spark 2.1.2 JavaDoc)
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+org.apache.spark.api.java
+Class JavaHadoopRDD<K,V>
+
+
+
+Object
+
+
+org.apache.spark.api.java.JavaPairRDD<K,V>
+
+
+org.apache.spark.api.java.JavaHadoopRDD<K,V>
+
+
+
+
+
+
+
+
+
+All Implemented Interfaces:
+java.io.Serializable, JavaRDDLike<scala.Tuple2<K,V>,JavaPairRDD<K,V>>
+
+
+
+public class JavaHadoopRDD<K,V>
+extends JavaPairRDD<K,V>
+See Also: Serialized Form
+
+
+
+
+
+
+
+
+
+
+
+Constructor Summary
+
+Constructors
+
+Constructor and Description
+
+
+JavaHadoopRDD(HadoopRDD<K,V> rdd,
+ scala.reflect.ClassTag<K> kClassTag,
+ scala.reflect.ClassTag<V> vClassTag)
+
+
+
+
+
+
+
+
+
+Method Summary
+
+Methods
+
+Modifier and Type
+Method and Description
+
+
+scala.reflect.ClassTag<K>
+kClassTag()
+
+
+<R> JavaRDD<R>
+mapPartitionsWithInputSplit(Function2<org.apache.hadoop.mapred.InputSplit,java.util.Iterator<scala.Tuple2<K,V>>,java.util.Iterator<R>> f,
+   boolean preservesPartitioning)
+Maps over a partition, providing the InputSplit that was 
used as the base of the 
