minor, remove beta for spark cubing

Project: http://git-wip-us.apache.org/repos/asf/kylin/repo
Commit: http://git-wip-us.apache.org/repos/asf/kylin/commit/52932cf0
Tree: http://git-wip-us.apache.org/repos/asf/kylin/tree/52932cf0
Diff: http://git-wip-us.apache.org/repos/asf/kylin/diff/52932cf0

Branch: refs/heads/document
Commit: 52932cf0efd395c256a2cdecdfeb12b9320eaf88
Parents: 4283077
Author: lidongsjtu <lid...@apache.org>
Authored: Sun Oct 29 15:44:57 2017 +0800
Committer: lidongsjtu <lid...@apache.org>
Committed: Sun Oct 29 15:44:57 2017 +0800

----------------------------------------------------------------------
 website/_docs21/index.md               | 8 ++++++--
 website/_docs21/tutorial/cube_spark.md | 8 ++++----
 2 files changed, 10 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kylin/blob/52932cf0/website/_docs21/index.md
----------------------------------------------------------------------
diff --git a/website/_docs21/index.md b/website/_docs21/index.md
index 12bf1b1..5515cac 100644
--- a/website/_docs21/index.md
+++ b/website/_docs21/index.md
@@ -5,6 +5,7 @@ categories: docs
 permalink: /docs21/index.html
 ---
 
+
 Welcome to Apache Kylin™: Extreme OLAP Engine for Big Data
 ------------  
 
@@ -16,6 +17,7 @@ Document of prior versions:
 * [v1.6.x document](/docs16/)
 * [v1.5.x document](/docs15/)
 
+
 Installation & Setup
 ------------  
 1. [Hadoop Env](install/hadoop_env.html)
@@ -25,15 +27,16 @@ Installation & Setup
 5. [Run Kylin with Docker](install/kylin_docker.html)
 
 
+
 Tutorial
 ------------  
 1. [Quick Start with Sample Cube](tutorial/kylin_sample.html)
 2. [Web Interface](tutorial/web.html)
 3. [Cube Wizard](tutorial/create_cube.html)
-3. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
+4. [Cube Build and Job Monitoring](tutorial/cube_build_job.html)
 5. [SQL reference: by Apache Calcite](http://calcite.apache.org/docs/reference.html)
 6. [Build Cube with Streaming Data](tutorial/cube_streaming.html)
-7. [Build Cube with Spark Engine (beta)](tutorial/cube_spark.html)
+7. [Build Cube with Spark Engine](tutorial/cube_spark.html)
 8. [Cube Build Tuning](tutorial/cube_build_performance.html)
 9. [Enable Query Pushdown](tutorial/query_pushdown.html)
 
@@ -53,6 +56,7 @@ Connectivity and APIs
 10. [Connect from Apache Flink](tutorial/flink.html)
 11. [Connect from Apache Hue](tutorial/hue.html)
 
+
 Operations
 ------------  
 1. [Backup/restore Kylin metadata](howto/howto_backup_metadata.html)

http://git-wip-us.apache.org/repos/asf/kylin/blob/52932cf0/website/_docs21/tutorial/cube_spark.md
----------------------------------------------------------------------
diff --git a/website/_docs21/tutorial/cube_spark.md b/website/_docs21/tutorial/cube_spark.md
index 9af44cd..5400309 100644
--- a/website/_docs21/tutorial/cube_spark.md
+++ b/website/_docs21/tutorial/cube_spark.md
@@ -1,6 +1,6 @@
 ---
 layout: docs21
-title:  Build Cube with Spark (beta)
+title:  Build Cube with Spark
 categories: tutorial
 permalink: /docs21/tutorial/cube_spark.html
 ---
@@ -10,7 +10,7 @@ Kylin v2.0 introduces the Spark cube engine, it uses Apache Spark to replace Map
 ## Preparation
 To finish this tutorial, you need a Hadoop environment which has Kylin v2.1.0 or above installed. Here we will use the Hortonworks HDP 2.4 Sandbox VM, in which the Hadoop components as well as Hive/HBase have already been started.
 
-## Install Kylin v2.1.0
+## Install Kylin v2.1.0 or above
 
 Download the Kylin v2.1.0 for HBase 1.x from Kylin's download page, and then uncompress the tar ball into */usr/local/* folder:
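
For reference, a minimal shell sketch of that install step, assuming the 2.1.0 binary package for HBase 1.x from the Apache archive (the exact mirror URL and file name are assumptions and may differ for your version):

    # Download a Kylin binary package built for HBase 1.x (URL is illustrative)
    wget https://archive.apache.org/dist/kylin/apache-kylin-2.1.0/apache-kylin-2.1.0-bin-hbase1x.tar.gz -P /tmp

    # Uncompress into /usr/local/ and point KYLIN_HOME at the extracted folder
    tar -zxvf /tmp/apache-kylin-2.1.0-bin-hbase1x.tar.gz -C /usr/local/
    export KYLIN_HOME=/usr/local/apache-kylin-2.1.0-bin-hbase1x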
 
@@ -104,7 +104,7 @@ $KYLIN_HOME/bin/kylin.sh start
 
 {% endhighlight %}
 
-After Kylin is started, access Kylin web, edit the "kylin_sales" cube, in the "Advanced Setting" page, change the "Cube Engine" from "MapReduce" to "Spark (Beta)":
+After Kylin is started, access Kylin web, edit the "kylin_sales" cube, in the "Advanced Setting" page, change the "Cube Engine" from "MapReduce" to "Spark":
 
 
    ![](/images/tutorial/2.0/Spark-Cubing-Tutorial/1_cube_engine.png)
@@ -166,4 +166,4 @@ Click a specific job, there you will see the detail runtime information, that is
 
 If you're a Kylin administrator but new to Spark, we suggest you go through the [Spark documents](https://spark.apache.org/docs/2.1.0/), and don't forget to update the configurations accordingly. You can enable Spark [Dynamic Resource Allocation](https://spark.apache.org/docs/2.1.0/job-scheduling.html#dynamic-resource-allocation) so that it can auto scale/shrink for different workloads. Spark's performance relies on the cluster's memory and CPU resources, while Kylin's Cube build is a heavy task when a complex data model and a huge dataset are built at one time. If your cluster resources can't fulfill the demand, errors like "OutOfMemory" will be thrown in Spark executors, so please use it properly. For a Cube which has UHC dimensions, many combinations (e.g., a full cube with more than 12 dimensions), or memory-hungry measures (Count Distinct, Top-N), we suggest using the MapReduce engine. If your Cube model is simple, all measures are SUM/MIN/MAX/COUNT, and the source data is of small to medium scale, the Spark engine would be a good choice. Besides, Streaming build isn't supported in this engine so far (KYLIN-2484).
 
-Now the Spark engine is in public beta; If you have any question, comment, or bug fix, welcome to discuss in d...@kylin.apache.org.
+If you have any question, comment, or bug fix, welcome to discuss in d...@kylin.apache.org.
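
As a minimal sketch of the Dynamic Resource Allocation tuning mentioned in the paragraph above, assuming Spark settings are passed to the Spark engine through kylin.properties entries prefixed with kylin.engine.spark-conf. and that the YARN external shuffle service is already set up on the cluster, the relevant switches would look roughly like this:

    # Append Spark dynamic allocation settings to Kylin's Spark engine config
    # (properties under kylin.engine.spark-conf.* are handed to spark-submit;
    #  the executor bounds below are illustrative values, not recommendations)
    cat >> $KYLIN_HOME/conf/kylin.properties <<'EOF'
    kylin.engine.spark-conf.spark.dynamicAllocation.enabled=true
    kylin.engine.spark-conf.spark.dynamicAllocation.minExecutors=1
    kylin.engine.spark-conf.spark.dynamicAllocation.maxExecutors=1000
    kylin.engine.spark-conf.spark.shuffle.service.enabled=true
    EOF

After editing kylin.properties, restart Kylin ($KYLIN_HOME/bin/kylin.sh stop, then start) so the new Spark settings take effect.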
