Repository: spark-website
Updated Branches:
  refs/heads/asf-site fe49ab1ef -> ae58782ba


http://git-wip-us.apache.org/repos/asf/spark-website/blob/ae58782b/site/releases/spark-release-2-1-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-2-1-0.html b/site/releases/spark-release-2-1-0.html
index 26e65a2..7c7df63 100644
--- a/site/releases/spark-release-2-1-0.html
+++ b/site/releases/spark-release-2-1-0.html
@@ -200,15 +200,15 @@
 <p>To download Apache Spark 2.1.0, visit the <a href="/downloads.html">downloads</a> page. You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&amp;version=12335644";>detailed changes</a>. We have curated a list of high level changes here, grouped by major modules.</p>
 
 <ul id="markdown-toc">
-  <li><a href="#core-and-spark-sql">Core and Spark SQL</a></li>
-  <li><a href="#structured-streaming">Structured Streaming</a></li>
-  <li><a href="#mllib">MLlib</a></li>
-  <li><a href="#sparkr">SparkR</a></li>
-  <li><a href="#graphx">GraphX</a></li>
-  <li><a href="#deprecations">Deprecations</a></li>
-  <li><a href="#changes-of-behavior">Changes of behavior</a></li>
-  <li><a href="#known-issues">Known Issues</a></li>
-  <li><a href="#credits">Credits</a></li>
+  <li><a href="#core-and-spark-sql" id="markdown-toc-core-and-spark-sql">Core 
and Spark SQL</a></li>
+  <li><a href="#structured-streaming" 
id="markdown-toc-structured-streaming">Structured Streaming</a></li>
+  <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a></li>
+  <li><a href="#sparkr" id="markdown-toc-sparkr">SparkR</a></li>
+  <li><a href="#graphx" id="markdown-toc-graphx">GraphX</a></li>
+  <li><a href="#deprecations" 
id="markdown-toc-deprecations">Deprecations</a></li>
+  <li><a href="#changes-of-behavior" 
id="markdown-toc-changes-of-behavior">Changes of behavior</a></li>
+  <li><a href="#known-issues" id="markdown-toc-known-issues">Known 
Issues</a></li>
+  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
 </ul>
 
 <h3 id="core-and-spark-sql">Core and Spark SQL</h3>
@@ -216,7 +216,7 @@
 <ul>
   <li><strong>API updates</strong>
     <ul>
-      <li>SPARK-17864: Data type APIs are stable APIs. </li>
+      <li>SPARK-17864: Data type APIs are stable APIs.</li>
       <li>SPARK-18351: from_json and to_json for parsing JSON for string columns</li>
       <li>SPARK-16700: When creating a DataFrame in PySpark, Python dictionaries can be used as values of a StructType.</li>
     </ul>
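
(Aside, not part of the diff: a minimal PySpark sketch of the two API updates in the hunk above, SPARK-18351 and SPARK-16700. The session name, schemas, and sample values here are assumptions for illustration, not code taken from the release.)

    # SPARK-16700 / SPARK-18351 illustration; all names and values are made up.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, to_json, col
    from pyspark.sql.types import StructType, StructField, StringType, IntegerType

    spark = SparkSession.builder.appName("release-notes-sketch").getOrCreate()

    # SPARK-16700: a plain Python dict can supply the value of a StructType column.
    schema = StructType([
        StructField("name", StringType()),
        StructField("info", StructType([StructField("age", IntegerType())])),
    ])
    people = spark.createDataFrame([("alice", {"age": 23})], schema)

    # SPARK-18351: parse a JSON string column with from_json, render it back with to_json.
    payload_schema = StructType([StructField("age", IntegerType())])
    raw = spark.createDataFrame([('{"age": 23}',)], ["payload"])
    parsed = raw.select(from_json(col("payload"), payload_schema).alias("parsed"))
    parsed.select(to_json(col("parsed")).alias("payload_again")).show(truncate=False)
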
@@ -318,9 +318,9 @@
       <li>SPARK-18377: spark.sql.warehouse.dir is a static configuration now. Users need to set it before the start of the first SparkSession and its value is shared by sessions in the same application.</li>
       <li>SPARK-14393: Values generated by non-deterministic functions will not change after coalesce or union.</li>
       <li>SPARK-18076: Fix default Locale used in DateFormat, NumberFormat to Locale.US</li>
-      <li>SPARK-16216: CSV and JSON data sources write timestamp and date values in <a href="https://www.w3.org/TR/NOTE-datetime";>ISO 8601 formatted string</a>. Two options, timestampFormat and dateFormat, are added to these two data sources to let users control the format of timestamp and date value in string representation, respectively. Please refer to the API doc of <a href="/docs/2.1.0/api/scala/index.html#org.apache.spark.sql.DataFrameReader">DataFrameReader</a> and <a href="/docs/2.1.0/api/scala/index.html#org.apache.spark.sql.DataFrameWriter">DataFrameWriter</a> for more details about these two configurations. </li>
+      <li>SPARK-16216: CSV and JSON data sources write timestamp and date values in <a href="https://www.w3.org/TR/NOTE-datetime";>ISO 8601 formatted string</a>. Two options, timestampFormat and dateFormat, are added to these two data sources to let users control the format of timestamp and date value in string representation, respectively. Please refer to the API doc of <a href="/docs/2.1.0/api/scala/index.html#org.apache.spark.sql.DataFrameReader">DataFrameReader</a> and <a href="/docs/2.1.0/api/scala/index.html#org.apache.spark.sql.DataFrameWriter">DataFrameWriter</a> for more details about these two configurations.</li>
       <li>SPARK-17427: Function SIZE returns -1 when its input parameter is null.</li>
-      <li>SPARK-16498: LazyBinaryColumnarSerDe is fixed as the SerDe for RCFile. </li>
+      <li>SPARK-16498: LazyBinaryColumnarSerDe is fixed as the SerDe for RCFile.</li>
       <li>SPARK-16552: If a user does not specify the schema for a table and relies on schema inference, the inferred schema will be stored in the metastore. The schema will not be inferred again when this table is used.</li>
     </ul>
   </li>
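
(Aside, not part of the diff: a hedged PySpark sketch of three of the behavior changes in the hunk above (SPARK-18377, SPARK-16216, and SPARK-17427). The warehouse directory, output path, and format strings below are made-up examples, not values from the release.)

    # SPARK-18377 / SPARK-16216 / SPARK-17427 illustration; paths and formats are made up.
    from pyspark.sql import SparkSession

    # SPARK-18377: spark.sql.warehouse.dir is static, so set it before the first
    # SparkSession is created; later sessions in the same application share it.
    spark = (SparkSession.builder
             .config("spark.sql.warehouse.dir", "/tmp/example-warehouse")
             .getOrCreate())

    # SPARK-16216: timestamp and date values are written as ISO 8601 strings;
    # timestampFormat and dateFormat override the rendering per data source.
    df = spark.range(1).selectExpr("current_timestamp() AS ts", "current_date() AS d")
    (df.write
       .option("timestampFormat", "yyyy-MM-dd'T'HH:mm:ss.SSSXXX")
       .option("dateFormat", "yyyy-MM-dd")
       .mode("overwrite")
       .csv("/tmp/example-csv-output"))

    # SPARK-17427: size() returns -1 for a null input instead of failing.
    spark.sql("SELECT size(CAST(NULL AS ARRAY<INT>)) AS n").show()  # n is -1
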

http://git-wip-us.apache.org/repos/asf/spark-website/blob/ae58782b/site/sitemap.xml
----------------------------------------------------------------------
diff --git a/site/sitemap.xml b/site/sitemap.xml
index 1ed4c74..67a171b 100644
--- a/site/sitemap.xml
+++ b/site/sitemap.xml
@@ -632,27 +632,27 @@
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/sql/</loc>
+  <loc>http://spark.apache.org/mllib/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/screencasts/</loc>
+  <loc>http://spark.apache.org/news/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/streaming/</loc>
+  <loc>http://spark.apache.org/screencasts/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/</loc>
+  <loc>http://spark.apache.org/sql/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/mllib/</loc>
+  <loc>http://spark.apache.org/streaming/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>
-  <loc>http://spark.apache.org/news/</loc>
+  <loc>http://spark.apache.org/</loc>
   <changefreq>weekly</changefreq>
 </url>
 <url>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/ae58782b/site/third-party-projects.html
----------------------------------------------------------------------
diff --git a/site/third-party-projects.html b/site/third-party-projects.html
index 74db810..8cb21d1 100644
--- a/site/third-party-projects.html
+++ b/site/third-party-projects.html
@@ -212,7 +212,7 @@ for details)</li>
   <li><a href="http://mesos.apache.org/";>Apache Mesos</a> - Cluster management 
system that supports 
 running Spark</li>
   <li><a href="http://alluxio.org/";>Alluxio</a> (née Tachyon) - Memory speed 
virtual distributed 
-storage system that supports running Spark    </li>
+storage system that supports running Spark</li>
   <li><a href="https://github.com/datastax/spark-cassandra-connector";>Spark 
Cassandra Connector</a> - 
 Easily load your Cassandra data into Spark and Spark SQL; from Datastax</li>
   <li><a href="http://github.com/tuplejump/FiloDB";>FiloDB</a> - a Spark 
integrated analytical/columnar 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/ae58782b/site/versioning-policy.html
----------------------------------------------------------------------
diff --git a/site/versioning-policy.html b/site/versioning-policy.html
index 4a9115a..01bbb37 100644
--- a/site/versioning-policy.html
+++ b/site/versioning-policy.html
@@ -231,7 +231,7 @@ try to. Once they are marked &#8220;stable&#8221; they have to follow these guid
 &#8220;experimental&#8221;. Release A is API compatible with release B if code compiled against release A 
 <em>compiles cleanly</em> against B. Currently, this does not guarantee that a compiled application that is 
 linked against version A will link cleanly against version B without re-compiling. Link-level 
-compatibility is something we&#8217;ll try to guarantee in future releases. </p>
+compatibility is something we&#8217;ll try to guarantee in future releases.</p>
 
 <p>Note, however, that even for features &#8220;developer API&#8221; and &#8220;experimental&#8221;, we strive to maintain 
 maximum compatibility. Code should not be merged into the project as &#8220;experimental&#8221; if there is 

