http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesAPI.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesAPI.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesAPI.md
deleted file mode 100644
index f56139a..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesAPI.md
+++ /dev/null
@@ -1,606 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-# YARN Simplified API layer for services
-
-## Overview
-Bringing a new service on YARN today is not a simple experience. The APIs of
-existing frameworks are either too low level (native YARN), require writing
-new code (for frameworks with programmatic APIs) or writing a complex spec
-(for declarative frameworks). In addition to building critical building blocks
-inside YARN (as part of other efforts at
-[YARN-4692](https://issues.apache.org/jira/browse/YARN-4692)), there is a need for
-simplifying the user facing story for building services. Experience of projects
-like Apache Slider running real-life services like HBase, Storm, Accumulo,
-Solr, etc., gives us very good insights into what simplified APIs for
-services should look like.
-
-To this end, we should look at a new simple-services API layer backed by REST
-interfaces. This API can be used to create and manage the lifecycle of YARN
-services. Services here can range from simple single-component service to
-complex multi-component assemblies needing orchestration.
-[YARN-4793](https://issues.apache.org/jira/browse/YARN-4793) tracks this
-effort.
-
-This document focuses on this specification. In most cases, the
-application owner will not be forced to make any changes to their applications.
-This is primarily true if the application is packaged with containerization
-technologies like Docker. Irrespective of how complex the application is,
-there will be hooks provided at appropriate layers to allow pluggable and
-customizable application behavior.
-
-
-### Version information
-Version: 1.0.0
-
-### License information
-License: Apache 2.0
-License URL: http://www.apache.org/licenses/LICENSE-2.0.html
-
-### URI scheme
-Host: host.mycompany.com
-
-BasePath: /ws/v1/
-
-Schemes: HTTP
-
-### Consumes
-
-* application/json
-
-
-### Produces
-
-* application/json
-
-
-## Paths
-### Create a service
-```
-POST /services
-```
-
-#### Description
-
-Create a service. The request JSON is a service object with details required for creation. If the request is successful it returns 202 Accepted. A success of this API only confirms success in submission of the service creation request. There is no guarantee that the service will actually reach a RUNNING state. Resource availability and several other factors determine if the service will be deployed in the cluster. It is expected that clients would subsequently call the GET API to get details of the service and determine its state.
-
-#### Parameters
-|Type|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|----|
-|BodyParameter|Service|Service request object|true|Service||
-
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|202|The request to create a service is accepted|No Content|
-|400|Invalid service definition provided in the request body|No Content|
-|500|Failed to create a service|No Content|
-|default|Unexpected error|ServiceStatus|
-
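-A service creation request can be submitted with any standard HTTP client. As a minimal sketch (assuming an unsecured cluster, the API server listening on localhost:9191 as in the examples below, and a spec saved locally as a hypothetical file service.json):
-
-```
-curl -X POST -H "Content-Type: application/json" \
-  http://localhost:9191/ws/v1/services -d @service.json
-```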
-
-### (TBD) List of services running in the cluster.
-```
-GET /services
-```
-
-#### Description
-
-Get a list of all currently running services (response includes a minimal projection of the service info). For more details do a GET on a specific service name.
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|200|An array of services|Service array|
-|default|Unexpected error|ServiceStatus|
-
-
-### Get current version of the API server.
-```
-GET /services/version
-```
-
-#### Description
-
-Get current version of the API server.
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|200|Successful request|No Content|
-
-
-### Update a service or upgrade the binary version of the components of a running service
-```
-PUT /services/{service_name}
-```
-
-#### Description
-
-Update the runtime properties of a service. Currently the following operations are supported - update lifetime, stop/start a service. The PUT operation is also used to orchestrate an upgrade of the service containers to a newer version of their artifacts (TBD).
-
-#### Parameters
-|Type|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|----|
-|PathParameter|service_name|Service name|true|string||
-|BodyParameter|Service|The updated service definition. It can contain the updated lifetime of a service or the desired state (STOPPED/STARTED) of a service to initiate a start/stop operation against the specified service|true|Service||
-
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|204|Update or upgrade was successful|No Content|
-|404|Service does not exist|No Content|
-|default|Unexpected error|ServiceStatus|
-
-
-### Destroy a service
-```
-DELETE /services/{service_name}
-```
-
-#### Description
-
-Destroy a service and release all resources. This API might have to return JSON data providing location of logs (TBD), etc.
-
-#### Parameters
-|Type|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|----|
-|PathParameter|service_name|Service name|true|string||
-
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|204|Destroy was successful|No Content|
-|404|Service does not exist|No Content|
-|default|Unexpected error|ServiceStatus|
-
-
-### Get details of a service.
-```
-GET /services/{service_name}
-```
-
-#### Description
-
-Return the details (including containers) of a running service
-
-#### Parameters
-|Type|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|----|
-|PathParameter|service_name|Service name|true|string||
-
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|200|a service object|object|
-|404|Service does not exist|No Content|
-|default|Unexpected error|ServiceStatus|
-
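-For instance, the details of a running service can be fetched with a plain GET (the service name and host/port below are illustrative):
-
-```
-curl http://localhost:9191/ws/v1/services/hello-world
-```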
-
-### Flex a component's number of instances.
-```
-PUT /services/{service_name}/components/{component_name}
-```
-
-#### Description
-
-Set a component's desired number of instances
-
-#### Parameters
-|Type|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|----|
-|PathParameter|service_name|Service name|true|string||
-|PathParameter|component_name|Component name|true|string||
-|BodyParameter|Component|The definition of a component which contains the updated number of instances.|true|Component||
-
-
-#### Responses
-|HTTP Code|Description|Schema|
-|----|----|----|
-|200|Flex was successful|No Content|
-|404|Service does not exist|No Content|
-|default|Unexpected error|ServiceStatus|
-
-
-## Definitions
-### Artifact
-
-Artifact of a service component. If not specified, component will just run the bare launch command and no artifact will be localized.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|id|Artifact id. Examples are package location uri for tarball based services, image name for docker, name of service, etc.|true|string||
-|type|Artifact type, like docker, tarball, etc. (optional). For TARBALL type, the specified tarball will be localized to the container local working directory under a folder named lib. For SERVICE type, the service specified will be read and its components will be added into this service. The original component with artifact type SERVICE will be removed (any properties specified in the original component will be ignored).|false|enum (DOCKER, TARBALL, SERVICE)|DOCKER|
-|uri|Artifact location to support multiple artifact stores (optional).|false|string||
-
-
-### Component
-
-One or more components of the service. If the service is HBase, say, then the component can be a simple role like master or regionserver. If the service is a complex business webapp, then a component can be another service, say Kafka or Storm. This opens up support for complex and nested services.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|name|Name of the service component (mandatory). If Registry DNS is enabled, the max length is 63 characters. If unique component support is enabled, the max length is lowered to 44 characters.|true|string||
-|dependencies|An array of service components which should be in READY state (as defined by readiness check), before this component can be started. The dependencies across all components of a service should be represented as a DAG.|false|string array||
-|readiness_check|Readiness check for this component.|false|ReadinessCheck||
-|artifact|Artifact of the component (optional). If not specified, the service level global artifact takes effect.|false|Artifact||
-|launch_command|The custom launch command of this component (optional for DOCKER component, required otherwise). When specified at the component level, it overrides the value specified at the global level (if any).|false|string||
-|resource|Resource of this component (optional). If not specified, the service level global resource takes effect.|false|Resource||
-|number_of_containers|Number of containers for this component (optional). If not specified, the service level global number_of_containers takes effect.|false|integer (int64)||
-|run_privileged_container|Run all containers of this component in privileged mode (YARN-4262).|false|boolean||
-|placement_policy|Advanced scheduling and placement policies for all containers of this component (optional). If not specified, the service level placement_policy takes effect. Refer to the description at the global level for more details.|false|PlacementPolicy||
-|configuration|Config properties for this component.|false|Configuration||
-|quicklinks|A list of quicklink keys defined at the service level, and to be resolved by this component.|false|string array||
-
-
-### ConfigFile
-
-A config file that needs to be created and made available as a volume in a service component container.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|type|Config file in the standard format like xml, properties, json, yaml, template.|false|enum (XML, PROPERTIES, JSON, YAML, TEMPLATE, ENV, HADOOP_XML)||
-|dest_file|The path that this configuration file should be created as. If it is an absolute path, it will be mounted into the DOCKER container. Absolute paths are only allowed for DOCKER containers. If it is a relative path, only the file name should be provided, and the file will be created in the container local working directory under a folder named conf.|false|string||
-|src_file|This provides the source location of the configuration file, the content of which is dumped to dest_file post property substitutions, in the format as specified in type. Typically the src_file would point to a source controlled network accessible file maintained by tools like puppet, chef, or hdfs etc. Currently, only hdfs is supported.|false|string||
-|props|A blob of key value pairs that will be dumped in the dest_file in the format as specified in type. If src_file is specified, src_file contents are dumped in the dest_file and these properties will overwrite, if any, existing properties in src_file or be added as new properties in src_file.|false|object||
-
-
-### Configuration
-
-Set of configuration properties that can be injected into the service components via envs, files and custom pluggable helper docker containers. Files of several standard formats like xml, properties, json, yaml and templates will be supported.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|properties|A blob of key-value pairs of common service properties.|false|object||
-|env|A blob of key-value pairs which will be appended to the default system properties and handed off to the service at start time. All placeholder references to properties will be substituted before injection.|false|object||
-|files|Array of files that need to be created and made available as volumes in the service component containers.|false|ConfigFile array||
-
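-For illustration, a configuration blob combining all three mechanisms might look like the following (all property names and values here are examples only):
-
-```json
-{
-  "properties": {"service.description": "example"},
-  "env": {"SERVICE_LOG_LEVEL": "INFO"},
-  "files": [
-    {
-      "type": "PROPERTIES",
-      "dest_file": "app.properties",
-      "props": {"server.port": "8080"}
-    }
-  ]
-}
-```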
-
-### Container
-
-An instance of a running service container.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|id|Unique container id of a running service, e.g. container_e3751_1458061340047_0008_01_000002.|false|string||
-|launch_time|The time when the container was created, e.g. 2016-03-16T01:01:49.000Z. This will most likely be different from cluster launch time.|false|string (date)||
-|ip|IP address of a running container, e.g. 172.31.42.141. The IP address and hostname attribute values are dependent on the cluster/docker network setup as per YARN-4007.|false|string||
-|hostname|Fully qualified hostname of a running container, e.g. ctr-e3751-1458061340047-0008-01-000002.examplestg.site. The IP address and hostname attribute values are dependent on the cluster/docker network setup as per YARN-4007.|false|string||
-|bare_host|The bare node or host in which the container is running, e.g. cn008.example.com.|false|string||
-|state|State of the container of a service.|false|ContainerState||
-|component_name|Name of the component that this container instance belongs to.|false|string||
-|resource|Resource used for this container.|false|Resource||
-|artifact|Artifact used for this container.|false|Artifact||
-|privileged_container|Container running in privileged mode or not.|false|boolean||
-
-
-### ContainerState
-
-The current state of the container of a service.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|state|enum of the state of the container|false|enum (INIT, STARTED, READY)||
-
-
-### PlacementPolicy
-
-Placement policy of an instance of a service. This feature is in the works in YARN-6592.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|label|Assigns a service to a named partition of the cluster where the service desires to run (optional). If not specified, all services are submitted to a default label of the service owner. One or more labels can be set up for each service owner account with required constraints like no-preemption, sla-99999, preemption-ok, etc.|false|string||
-
-
-### ReadinessCheck
-
-A custom command or a pluggable helper container to determine the readiness of a container of a component. Readiness for every service is different. Hence the need for a simple interface, with scope to support advanced use cases.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|type|E.g. HTTP (YARN will perform a simple REST call at a regular interval and expect a 204 No content).|true|enum (HTTP, PORT)||
-|props|A blob of key value pairs that will be used to configure the check.|false|object||
-|artifact|Artifact of the pluggable readiness check helper container (optional). If specified, this helper container typically hosts the http uri and encapsulates the complex scripts required to perform the actual container readiness check. At the end it is expected to respond with a 204 No content, just like the simplified use case. This pluggable framework benefits service owners who can run services without any packaging modifications. Note, only artifacts of type docker are supported for now. NOT IMPLEMENTED YET|false|Artifact||
-
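-As a sketch, an HTTP readiness check might be declared as follows (the props key and URL shown are illustrative, not a fixed schema):
-
-```json
-{
-  "type": "HTTP",
-  "props": {
-    "url": "http://localhost:8080/healthz"
-  }
-}
-```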
-
-### Resource
-
-Resource determines the amount of resources (vcores, memory, network, etc.) usable by a container. This field determines the resource to be applied for all the containers of a component or service. The resource specified at the service (or global) level can be overridden at the component level. Only one of profile OR cpus & memory is expected. It raises a validation exception otherwise.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|profile|Each resource profile has a unique id which is associated with a cluster-level predefined memory, cpus, etc.|false|string||
-|cpus|Amount of vcores allocated to each container (optional but overrides cpus in profile if specified).|false|integer (int32)||
-|memory|Amount of memory allocated to each container (optional but overrides memory in profile if specified). Currently accepts only an integer value and default unit is in MB.|false|string||
-
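-For example, a resource block using explicit cpus and memory (profile omitted, per the one-or-the-other rule above):
-
-```json
-{
-  "cpus": 2,
-  "memory": "1024"
-}
-```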
-
-### Service
-
-A service resource has the following attributes.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|name|A unique service name. If Registry DNS is enabled, the max length is 63 characters.|true|string||
-|id|A unique service id.|false|string||
-|artifact|Artifact of single-component service.|false|Artifact||
-|resource|Resource of single-component service or the global default for multi-component services. Mandatory if it is a single-component service and if cpus and memory are not specified at the Service level.|false|Resource||
-|launch_command|The custom launch command of a service component (optional). If not specified for services with docker images, say, it will default to the default start command of the image. If there is a single component in this service, you can specify this without the need to have a 'components' section.|false|string||
-|launch_time|The time when the service was created, e.g. 2016-03-16T01:01:49.000Z.|false|string (date)||
-|number_of_containers|Number of containers for each component in the service. Each component can further override this service-level global default.|false|integer (int64)||
-|number_of_running_containers|In a GET response this provides the total number of running containers for this service (across all components) at the time of request. Note, a subsequent request can return a different number as and when more containers get allocated until it reaches the total number of containers or if a flex request has been made between the two requests.|false|integer (int64)||
-|lifetime|Life time (in seconds) of the service from the time it reaches the STARTED state (after which it is automatically destroyed by YARN). For unlimited lifetime do not set a lifetime value.|false|integer (int64)||
-|placement_policy|(TBD) Advanced scheduling and placement policies. If not specified, it defaults to the default placement policy of the service owner. The design of placement policies is in the works. It is not very clear at this point how policies in conjunction with labels will be exposed to service owners. This is a placeholder for now. The advanced structure of this attribute will be determined by YARN-4902.|false|PlacementPolicy||
-|components|Components of a service.|false|Component array||
-|configuration|Config properties of a service. Configurations provided at the service/global level are available to all the components. Specific properties can be overridden at the component level.|false|Configuration||
-|containers|Containers of a started service. Specifying a value for this attribute for the POST payload raises a validation error. This blob is available only in the GET response of a started service.|false|Container array||
-|state|State of the service. Specifying a value for this attribute for the POST payload raises a validation error. This attribute is available only in the GET response of a started service.|false|ServiceState||
-|quicklinks|A blob of key-value pairs of quicklinks to be exported for a service.|false|object||
-|queue|The YARN queue that this service should be submitted to.|false|string||
-
-
-### ServiceState
-
-The current state of a service.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|state|enum of the state of the service|false|enum (ACCEPTED, STARTED, READY, STOPPED, FAILED)||
-
-
-### ServiceStatus
-
-The current status of a submitted service, returned as a response to the GET API.
-
-|Name|Description|Required|Schema|Default|
-|----|----|----|----|----|
-|diagnostics|Diagnostic information (if any) for the reason of the current state of the service. It typically has a non-null value, if the service is in a non-running state.|false|string||
-|state|Service state.|false|ServiceState||
-|code|An error code specific to a scenario which service owners should be able to use to understand the failure in addition to the diagnostic information.|false|integer (int32)||
-
-
-
-## Examples
-
-### Create a simple single-component service with most attribute values as defaults
-POST URL - http://localhost:9191/ws/v1/services
-
-##### POST Request JSON
-```json
-{
-  "name": "hello-world",
-  "components" :
-    [
-      {
-        "name": "hello",
-        "number_of_containers": 1,
-        "artifact": {
-          "id": "nginx:latest",
-          "type": "DOCKER"
-        },
-        "launch_command": "./start_nginx.sh",
-        "resource": {
-          "cpus": 1,
-          "memory": "256"
-        }
-      }
-    ]
-}
-```
-
-##### GET Response JSON
-GET URL - http://localhost:9191/ws/v1/services/hello-world
-
-Note, lifetime value of -1 means unlimited lifetime.
-
-```json
-{
-    "name": "hello-world",
-    "id": "application_1503963985568_0002",
-    "lifetime": -1,
-    "components": [
-        {
-            "name": "hello",
-            "dependencies": [],
-            "resource": {
-                "cpus": 1,
-                "memory": "256"
-            },
-            "configuration": {
-                "properties": {},
-                "env": {},
-                "files": []
-            },
-            "quicklinks": [],
-            "containers": [
-                {
-                    "id": "container_e03_1503963985568_0002_01_000001",
-                    "ip": "10.22.8.143",
-                    "hostname": "myhost.local",
-                    "state": "READY",
-                    "launch_time": 1504051512412,
-                    "bare_host": "10.22.8.143",
-                    "component_name": "hello-0"
-                },
-                {
-                    "id": "container_e03_1503963985568_0002_01_000002",
-                    "ip": "10.22.8.143",
-                    "hostname": "myhost.local",
-                    "state": "READY",
-                    "launch_time": 1504051536450,
-                    "bare_host": "10.22.8.143",
-                    "component_name": "hello-1"
-                }
-            ],
-            "launch_command": "./start_nginx.sh",
-            "number_of_containers": 1,
-            "run_privileged_container": false
-        }
-    ],
-    "configuration": {
-        "properties": {},
-        "env": {},
-        "files": []
-    },
-    "quicklinks": {}
-}
-
-```
-### Update to modify the lifetime of a service
-PUT URL - http://localhost:9191/ws/v1/services/hello-world
-
-##### PUT Request JSON
-
-Note, irrespective of what the current lifetime value is, this update request will set the lifetime of the service to be 3600 seconds (1 hour) from the time the request is submitted. Hence, if a service has a remaining lifetime of 5 mins (say) and would like to extend it to an hour, OR if an application has a remaining lifetime of 5 hours (say) and would like to reduce it down to an hour, then for both scenarios you need to submit the same request below.
-
-```json
-{
-  "lifetime": 3600
-}
-```
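-
-With curl, this update might be issued as follows (host/port assumed as in the earlier examples):
-
-```
-curl -X PUT -H "Content-Type: application/json" \
-  http://localhost:9191/ws/v1/services/hello-world -d '{"lifetime": 3600}'
-```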
-### Stop a service
-PUT URL - http://localhost:9191/ws/v1/services/hello-world
-
-##### PUT Request JSON
-```json
-{
-    "state": "STOPPED"
-}
-```
-
-### Start a service
-PUT URL - http://localhost:9191/ws/v1/services/hello-world
-
-##### PUT Request JSON
-```json
-{
-    "state": "STARTED"
-}
-```
-
-### Update to flex up/down the number of containers (instances) of a component of a service
-PUT URL - http://localhost:9191/ws/v1/services/hello-world/components/hello
-
-##### PUT Request JSON
-```json
-{
-    "name": "hello",
-    "number_of_containers": 3
-}
-```
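-
-As with the other examples, this can be submitted with curl against the component endpoint (host/port assumed as above):
-
-```
-curl -X PUT -H "Content-Type: application/json" \
-  http://localhost:9191/ws/v1/services/hello-world/components/hello \
-  -d '{"name": "hello", "number_of_containers": 3}'
-```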
-
-### Destroy a service
-DELETE URL - http://localhost:9191/ws/v1/services/hello-world
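-
-With curl (same assumptions as above):
-
-```
-curl -X DELETE http://localhost:9191/ws/v1/services/hello-world
-```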
-
-***
-
-### Create a complicated service - HBase
-POST URL - http://localhost:9191/ws/v1/services
-
-##### POST Request JSON
-
-```json
-{
-  "name": "hbase-app-1",
-  "lifetime": "3600",
-  "components": [
-    {
-      "name": "hbasemaster",
-      "number_of_containers": 1,
-      "artifact": {
-        "id": "hbase:latest",
-        "type": "DOCKER"
-      },
-      "launch_command": "/usr/hdp/current/hbase-master/bin/hbase master start",
-      "resource": {
-        "cpus": 1,
-        "memory": "2048"
-      },
-      "configuration": {
-        "env": {
-          "HBASE_LOG_DIR": "<LOG_DIR>"
-        },
-        "files": [
-          {
-            "type": "XML",
-            "dest_file": "/etc/hadoop/conf/core-site.xml",
-            "props": {
-              "fs.defaultFS": "${CLUSTER_FS_URI}"
-            }
-          },
-          {
-            "type": "XML",
-            "dest_file": "/etc/hbase/conf/hbase-site.xml",
-            "props": {
-              "hbase.cluster.distributed": "true",
-              "hbase.zookeeper.quorum": "${CLUSTER_ZK_QUORUM}",
-              "hbase.rootdir": "${SERVICE_HDFS_DIR}/hbase",
-              "zookeeper.znode.parent": "${SERVICE_ZK_PATH}",
-              "hbase.master.hostname": 
"hbasemaster.${SERVICE_NAME}.${USER}.${DOMAIN}",
-              "hbase.master.info.port": "16010"
-            }
-          }
-        ]
-      }
-    },
-    {
-      "name": "regionserver",
-      "number_of_containers": 3,
-      "unique_component_support": "true",
-      "artifact": {
-        "id": "hbase:latest",
-        "type": "DOCKER"
-      },
-      "launch_command": "/usr/hdp/current/hbase-regionserver/bin/hbase 
regionserver start",
-      "resource": {
-        "cpus": 1,
-        "memory": "2048"
-      },
-      "configuration": {
-        "env": {
-          "HBASE_LOG_DIR": "<LOG_DIR>"
-        },
-        "files": [
-          {
-            "type": "XML",
-            "dest_file": "/etc/hadoop/conf/core-site.xml",
-            "props": {
-              "fs.defaultFS": "${CLUSTER_FS_URI}"
-            }
-          },
-          {
-            "type": "XML",
-            "dest_file": "/etc/hbase/conf/hbase-site.xml",
-            "props": {
-              "hbase.cluster.distributed": "true",
-              "hbase.zookeeper.quorum": "${CLUSTER_ZK_QUORUM}",
-              "hbase.rootdir": "${SERVICE_HDFS_DIR}/hbase",
-              "zookeeper.znode.parent": "${SERVICE_ZK_PATH}",
-              "hbase.master.hostname": 
"hbasemaster.${SERVICE_NAME}.${USER}.${DOMAIN}",
-              "hbase.master.info.port": "16010",
-              "hbase.regionserver.hostname": 
"${COMPONENT_INSTANCE_NAME}.${SERVICE_NAME}.${USER}.${DOMAIN}"
-            }
-          }
-        ]
-      }
-    }
-  ],
-  "quicklinks": {
-    "HBase Master Status UI": 
"http://hbasemaster0.${SERVICE_NAME}.${USER}.${DOMAIN}:16010/master-status";,
-    "Proxied HBase Master Status UI": 
"http://app-proxy/${DOMAIN}/${USER}/${SERVICE_NAME}/hbasemaster/16010/";
-  }
-}
-```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesDiscovery.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesDiscovery.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesDiscovery.md
deleted file mode 100644
index a927118..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesDiscovery.md
+++ /dev/null
@@ -1,144 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-# YARN DNS Server
-
-<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
-
-## Introduction
-
-The YARN DNS Server provides a standard DNS interface to the information posted into the YARN Registry by deployed applications. The DNS service serves the following functions:
-
-1. **Exposing existing service-discovery information via DNS** - Information provided in
-the current YARN service registry’s records will be converted into DNS entries, thus
-allowing users to discover information about YARN applications using standard DNS
-client mechanisms (e.g. a DNS SRV Record specifying the hostname and port
-number for services).
-2. **Enabling Container to IP mappings** - Enables discovery of the IPs of containers via
-standard DNS lookups. Given the availability of the records via DNS, container
-name-based communication will be facilitated (e.g. ‘curl
-http://myContainer.myDomain.com/endpoint’).
-
-## Service Properties
-
-The existing YARN Service Registry is leveraged as the source of information for the DNS Service.
-
-The following core functions are supported by the DNS-Server:
-### Functional properties
-
-1. Supports creation of DNS records for end-points of the deployed YARN applications
-2. Record names remain unchanged during restart of containers and/or applications
-3. Supports reverse lookups (name based on IP).
-4. Supports security using the standards defined by The Domain Name System Security Extensions (DNSSEC)
-5. Highly available
-6. Scalable - The service provides the responsiveness (e.g. low-latency) required to
-respond to DNS queries (timeouts yield attempts to invoke other configured name
-servers).
-
-### Deployment properties
-
-1. Supports integration with existing DNS assets (e.g. a corporate DNS server) by acting as
-a DNS server for a Hadoop cluster zone/domain. The server is not intended to act as a
-primary DNS server and does not forward requests to other servers.
-2. The DNS Server exposes a port that can receive both TCP and UDP requests per
-DNS standards. The default port for DNS protocols is in a restricted, administrative port
-range (53), so the port is configurable for deployments in which the service may
-not be managed via an administrative account.
-
-## DNS Record Name Structure
-
-The DNS names of generated records are composed from the following elements (labels). Note that these elements must be compatible with DNS conventions (see “Preferred Name Syntax” in RFC 1035):
-
-* **domain** - the name of the cluster DNS domain. This name is provided as a
-configuration property. In addition, it is this name that is configured at a parent DNS
-server as the zone name for the defined yDNS zone (the zone for which the parent DNS
-server will forward requests to yDNS). E.g. yarncluster.com
-* **username** - the name of the application deployer. This name is the simple short-name
-(e.g. the primary component of the Kerberos principal) associated with the user launching
-the application. As the username is one of the elements of DNS names, it is expected
-that this also conforms to DNS name conventions (RFC 1035 linked above), so
-special translation is performed for names with special characters like hyphens and spaces.
-* **application name** - the name of the deployed YARN application. This name is inferred
-from the YARN registry path to the application's node. Application name, rather than
-application id, was chosen as a way of making it easy for users to refer to human-readable DNS
-names. This obviously mandates certain uniqueness properties on application names.
-* **container id** - the YARN assigned ID to a container (e.g.
-container_e3741_1454001598828_01_000004)
-* **component name** - the name assigned to the deployed component (e.g. a master
-component). A component is a distributed element of an application or service that is
-launched in a YARN container (e.g. an HBase master). One can imagine multiple
-components within an application. A component name is not yet a first class concept in
-YARN, but is a very useful one that we are introducing here for the sake of yDNS
-entries. Many frameworks like MapReduce, Slider already have component names
-(though, as mentioned, they are not yet supported in YARN in a first class fashion).
-* **api** - the api designation for the exposed endpoint
-
-### Notes about DNS Names
-
-* In most instances, the DNS names can be easily distinguished by the number of
-elements/labels that compose the name. The cluster’s domain name is always the last
-element. After that element is parsed out, reading from right to left, the first element
-maps to the application user and so on. Wherever it is not easily distinguishable,
-naming conventions are used to disambiguate the name using a prefix such as
-“container” or a suffix such as “api”. For example, an endpoint published as a
-management endpoint will be referenced with the name *management-api.griduser.yarncluster.com*.
-* Unique application name (per user) is not currently supported/guaranteed by YARN, but
-it is supported by frameworks such as Apache Slider. The yDNS service currently
-leverages the last element of the ZK path entry for the application as an
-application name. These application names have to be unique for a given user.
-
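-Given the name structure above, these records can be resolved with any standard DNS client. As an illustrative sketch (the server host and the record name below are examples only):
-
-```
-# SRV lookup for a service endpoint via the cluster's YARN DNS server
-dig @dns-server.example.com management-api.griduser.yarncluster.com SRV
-```
-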
-## DNS Server Functionality
-
-The primary functions of the DNS service are illustrated in the following diagram:
-
-
-![DNS Functional Overview](../images/dns_overview.png "DNS Functional Overview")
-
-### DNS record creation
-The following figure illustrates in slightly greater detail the DNS record creation and registration sequence (NOTE: service record updates would follow a similar sequence of steps,
-distinguished only by the different event type):
-
-![DNS Functional Overview](../images/dns_record_creation.jpeg "DNS Functional Overview")
-
-### DNS record removal
-Record removal follows a similar sequence:
-
-![DNS Functional Overview](../images/dns_record_removal.jpeg "DNS Functional Overview")
-
-(NOTE: The DNS Zone requires a record as an argument for the deletion method, thus requiring similar parsing logic to identify the specific records that should be removed).
-
-### DNS Service initialization
-* The DNS service initializes both UDP and TCP listeners on a configured port. As
-noted above, the default port of 53 is in a restricted range that is only accessible to an
-account with administrative privileges.
-* Subsequently, the DNS service listens for inbound DNS requests. Those requests are
-standard DNS requests from users or other DNS servers (for example, DNS servers that have the
-YARN DNS service configured as a forwarder).
-
-## Configuration
-The YARN DNS server reads its configuration properties from the yarn-site.xml file. The following are the DNS associated configuration properties:
-
-| Name | Description |
-| ------------ | ------------- |
-| hadoop.registry.dns.enabled | The DNS functionality is enabled for the cluster. Default is false. |
-| hadoop.registry.dns.domain-name | The domain name for Hadoop cluster associated records. |
-| hadoop.registry.dns.bind-address | Address associated with the network interface to which the DNS listener should bind. |
-| hadoop.registry.dns.bind-port | The port number for the DNS listener. The default port is 53. However, since that port falls in an administrator-only range, typical deployments may need to specify an alternate port. |
-| hadoop.registry.dns.dnssec.enabled | Indicates whether the DNSSEC support is enabled. Default is false. |
-| hadoop.registry.dns.public-key | The base64 representation of the server’s public key. Leveraged for creating the DNSKEY Record provided for DNSSEC client requests. |
-| hadoop.registry.dns.private-key-file | The path to the standard DNSSEC private key file. Must only be readable by the DNS launching identity. See [dnssec-keygen](https://ftp.isc.org/isc/bind/cur/9.9/doc/arm/man.dnssec-keygen.html) documentation. |
-| hadoop.registry.dns-ttl | The default TTL value to associate with DNS records. The default value is set to 1 (a value of 0 has undefined behavior). A typical value should be approximate to the time it takes YARN to restart a failed container. |
-| hadoop.registry.dns.zone-subnet | An indicator of the IP range associated with the cluster containers. The setting is utilized for the generation of the reverse zone name. |
-| hadoop.registry.dns.zone-mask | The network mask associated with the zone IP range. If specified, it is utilized to ascertain the IP range possible and come up with an appropriate reverse zone name. |
-| hadoop.registry.dns.zones-dir | A directory containing zone configuration files to read during zone initialization. This directory can contain zone master files named *zone-name.zone*. See [here](http://www.zytrax.com/books/dns/ch6/mydomain.html) for zone master file documentation. |
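-
-For example, a minimal yarn-site.xml fragment enabling the DNS server might look like the following (the domain name and the alternate port are illustrative values):
-
-```xml
-<property>
-  <name>hadoop.registry.dns.enabled</name>
-  <value>true</value>
-</property>
-<property>
-  <name>hadoop.registry.dns.domain-name</name>
-  <value>yarncluster.com</value>
-</property>
-<property>
-  <name>hadoop.registry.dns.bind-port</name>
-  <value>5353</value>
-</property>
-```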

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesIntro.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesIntro.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesIntro.md
deleted file mode 100644
index e6a4e91..0000000
--- a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/native-services/NativeServicesIntro.md
+++ /dev/null
@@ -1,107 +0,0 @@
-<!---
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License. See accompanying LICENSE file.
--->
-
-# Introduction: YARN Native Services
-
-## Overview
-YARN Native Services provides first class framework support and APIs to host long running services natively in YARN. In addition to launching services, the new APIs support lifecycle management operations, such as flexing service components up/down, managing lifetime, upgrading the service to a newer version, and stopping/restarting/deleting the service.
-
-The native services capabilities are built on the existing low-level resource management API provided by YARN that can support any type of application. Other application frameworks like Hadoop MapReduce already expose higher level APIs that users can leverage to run applications on top of YARN. With the advent of containerization technologies like Docker, providing first class support and APIs for long running services at the framework level made sense.
-
-Relying on a framework has the advantage of exposing a simpler usage model to the user by enabling service configuration and launch through specification (without writing new code), as well as hiding complex low-level details including state management and fault-tolerance etc. Users/operators of existing services typically like to avoid modifying an existing service to be aware of YARN. With first class support capable of running a single Docker image as well as complex assemblies comprised of multiple Docker images, there is no need for service owners to be aware of YARN. Developers of new services do not have to worry about YARN internals and only need to focus on containerization of their service(s).
-
-## First class support for services
-In order to natively provide first class support for long running services, several new features and improvements have been made at the framework level.
-
-### Incorporate Apache Slider into Apache YARN
-Apache Slider, which existed as a separate incubator project, has been merged into YARN to kick start the first class support. Apache Slider is a universal Application Master (AM) which had several key features built in - fault tolerance of service containers and AM, work-preserving AM restarts, service logs management, service management like flex up/down, stop/start, and rolling upgrade to newer service versions, etc. Of course, a lot more work has been done on top of what Apache Slider brought in, details of which follow.
-
-### Native Services API
-A significant effort has gone into simplifying the user facing story for building services. In the past, bringing a new service to YARN was not a pleasant experience. The APIs of existing frameworks are either too low-level (native YARN), require writing new code (for frameworks with programmatic APIs) or require writing a complex spec (for declarative frameworks).
-
-The new REST APIs are very simple to use. The REST layer acts as a single point of entry for creation and lifecycle management of YARN services. Services here can range from simple single-component apps to the most complex, multi-component applications with special orchestration needs.
-
-The plan is to make this a unified REST based entry point for other important features like resource-profile management ([YARN-3926](https://issues.apache.org/jira/browse/YARN-3926)), package-definitions' lifecycle-management and service-discovery ([YARN-913](https://issues.apache.org/jira/browse/YARN-913)/[YARN-4757](https://issues.apache.org/jira/browse/YARN-4757)).
-
-### Native Services Discovery
-The new discovery solution exposes the registry information through a more generic and widely used mechanism: DNS. Service Discovery via DNS uses the well-known DNS interfaces to browse the network for services. Having the registry information exposed via DNS simplifies the life of services.
-
-The previous read mechanisms of YARN Service Registry were limited to a registry specific (java) API and a REST interface. In practice, this made it very difficult to wire up existing clients and services. E.g., dynamic configuration of dependent endpoints of a service was not easy to implement using the registry-read mechanisms, **without** code-changes to existing services. These are solved by the DNS based service discovery.
-
-### Scheduling
-[YARN-6592](https://issues.apache.org/jira/browse/YARN-6592) covers a host of scheduling features that are useful for short-running applications and services alike. Below are a few very important YARN core features that help schedule services better. Without these, running services on YARN is a hassle.
-
-* Affinity (TBD)
-* Anti-affinity (TBD)
-* Gang scheduling (TBD)
-* Malleable container sizes ([YARN-1197](https://issues.apache.org/jira/browse/YARN-1197))
-
-### Resource Profiles
-YARN always had support for memory as a resource, inheriting it from Hadoop-(1.x)’s MapReduce platform. Later support for CPU as a resource ([YARN-2](https://issues.apache.org/jira/browse/YARN-2)/[YARN-3](https://issues.apache.org/jira/browse/YARN-3)) was added. Multiple efforts added support for various other resource-types in YARN such as disk ([YARN-2139](https://issues.apache.org/jira/browse/YARN-2139)), and network ([YARN-2140](https://issues.apache.org/jira/browse/YARN-2140)), specifically benefiting long running services.
-
-In many systems outside of YARN, users are already accustomed to specifying their desired ‘box’ of requirements where each box comes with a predefined amount of each resource. Admins would define various available box-sizes (small, medium, large, etc.) and users would pick the ones they desire and everybody is happy. In [YARN-3926](https://issues.apache.org/jira/browse/YARN-3926), YARN introduces Resource Profiles which extends the YARN resource model for easier resource-type management and profiles. This helps in two ways - the system can schedule applications better and it can perform intelligent over-subscription of resources where applicable.
-
-Resource profiles are all the more important for services since -
-* Similar to short running apps, you don’t have to fiddle with varying resource-requirements for each container type
-* Services usually end up planning for peak usages, leaving a lot of resources under-utilized
-
-### Special handling of preemption and container reservations
-TBD.
-
-Preemption and reservation of long running containers have different implications from regular ones. Preemption of resources in YARN today works by killing containers. For long-lived services this is unacceptable. Also, the scheduler should avoid allocating long running containers on borrowed resources. [YARN-4724](https://issues.apache.org/jira/browse/YARN-4724) will address some of this through special recognition of service containers.
-
-### Container auto-restarts
-If a service container dies, expiring the container's allocation and releasing it is undesirable in many cases. Long running containers may exit for various reasons, crash and need to restart, but forcing them to go through the complete scheduling cycle, resource localization, etc. is both unnecessary and expensive.
-
-Services can enable app-specific policies to let NodeManagers automatically restart containers. [YARN-3998](https://issues.apache.org/jira/browse/YARN-3998) implements a retry-policy to let the NM re-launch a service container when it fails.
-
-### Container allocation re-use for application upgrades
-TBD.
-
-Auto-restart of containers will support upgrade of service containers without reclaiming the resources first. During an upgrade, with a multitude of other applications running in the system, giving up and getting back resources allocated to the service is hard to manage. Node-Labels help this cause but are not straight-forward to use for the app-specific use-cases. The umbrella [YARN-4726](https://issues.apache.org/jira/browse/YARN-4726) along with [YARN-5620](https://issues.apache.org/jira/browse/YARN-5620) and [YARN-4470](https://issues.apache.org/jira/browse/YARN-4470) will take care of this.
-
-### Dynamic Configurations
-Most production-level services require dynamic configurations to manage and simplify their lifecycle. Container’s resource size, local/work dirs and log-dirs are the most basic information services need. Service's endpoint details (host/port), their inter-component dependencies, health-check endpoints, etc. are all critical to the success of today's real-life services.
-
-### Resource re-localization for reconfiguration/upgrades
-TBD
-
-### Service Registry
-TBD
-
-### Service persistent storage and volume support
-TBD
-
-### Packaging
-TBD
-
-### Container image registry (private, public and hybrid)
-TBD
-
-### Container image management and APIs
-TBD
-
-### Container image storage
-TBD
-
-### Monitoring
-TBD
-
-### Metrics
-TBD
-
-### Service Logs
-TBD
-
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Concepts.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Concepts.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Concepts.md
new file mode 100644
index 0000000..7b62c36
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Concepts.md
@@ -0,0 +1,77 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# Concepts
+This document describes key concepts and features that make YARN a first-class platform for natively supporting long running services on YARN.
+
+### Service Framework (ApplicationMaster) on YARN
+A container orchestration framework is implemented to help deploy services on YARN. In a nutshell, the framework is an ApplicationMaster that
+requests containers from the ResourceManager based on the service definition provided by the user and launches the containers across the cluster adhering to placement policies.
+It also does all the heavy lifting such as resolving the service definition and configurations, managing component life cycles (e.g. automatically restarting
+failed containers), monitoring components' health and readiness, ensuring dependency start order across components, flexing components up/down,
+upgrading components, etc. The end goal of the framework is to make sure the service is up and running in the state that the user desired.
+
+
+### A Restful API-Server for deploying/managing services on YARN
+A restful API server is developed to allow users to deploy/manage their services on YARN via a simple JSON spec. This saves users
+from dealing with the low-level APIs and writing complex code to bring their services onto YARN. The REST layer acts as a unified REST based entry point for
+creation and lifecycle management of YARN services. Services here can range from simple single-component apps to the most complex,
+multi-component applications with special orchestration needs. Please refer to this [API doc](YarnServiceAPI.md) for detailed API documentation.
+
+The API-server is stateless, which means users can simply spin up multiple instances, and have a load balancer fronting them to
+support HA, distribute the load, etc.
+
+### Service Discovery
+A DNS server is implemented to enable discovering services on YARN via the standard mechanism: DNS lookup.
+The DNS server essentially exposes the information in the YARN service registry by translating it into DNS records such as A and SRV records.
+Clients can then discover the IPs of containers via standard DNS lookup.
+The previous read mechanisms of YARN Service Registry were limited to a registry specific (java) API and a REST interface, making it difficult
+to wire up existing clients and services. The DNS based service discovery eliminates this gap. Please refer to this [DNS doc](ServiceDiscovery.md)
+for more details.
+
+### Scheduling
+
+A host of scheduling features are being developed to support long running services.
+
+* Affinity and anti-affinity scheduling across containers ([YARN-6592](https://issues.apache.org/jira/browse/YARN-6592)).
+* Container resizing ([YARN-1197](https://issues.apache.org/jira/browse/YARN-1197))
+* Special handling of container preemption/reservation for services 
+
+### Container auto-restarts
+
+[YARN-3998](https://issues.apache.org/jira/browse/YARN-3998) implements a retry-policy to let the NM re-launch a service container when it fails.
+The service REST API provides users a way to enable the NodeManager to automatically restart the container if it fails.
+The advantage is that it avoids the entire cycle of releasing the failed containers, re-asking for new containers, re-doing resource localization and so on, which
+greatly minimizes container downtime.
+
+
+### Container in-place upgrade
+
+[YARN-4726](https://issues.apache.org/jira/browse/YARN-4726) aims to support upgrading containers in-place, that is, without losing the container allocations.
+It opens up a few APIs in the NodeManager to allow ApplicationMasters to upgrade their containers via a simple API call.
+Under the hood, the NodeManager performs the following steps:
+* Download the new resources such as jars, docker container images, new configurations.
+* Stop the old container.
+* Start the new container with the newly downloaded resources.
+
+At the time of writing this document, the core changes are done but the feature is not usable end-to-end.
+
+### Resource Profiles
+
+In [YARN-3926](https://issues.apache.org/jira/browse/YARN-3926), YARN introduces Resource Profiles which extends the YARN resource model for easier
+resource-type management and profiles.
+It primarily solves two problems:
+* Make it easy to support new resource types such as network bandwidth ([YARN-2140](https://issues.apache.org/jira/browse/YARN-2140)) and disks ([YARN-2139](https://issues.apache.org/jira/browse/YARN-2139)).
+ Under the hood, it unifies the scheduler codebase to essentially parameterize the resource types.
+* Users can specify the container resource requirement by a profile name, rather than fiddling with varying resource-requirements for each resource type.
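+
+For example, a component's resource block could reference a predefined profile instead of explicit values (a sketch only; profile names such as "small" are cluster-defined and illustrative here):
+
+```json
+"resource": {
+  "profile": "small"
+}
+```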

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Overview.md
----------------------------------------------------------------------
diff --git a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Overview.md b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Overview.md
new file mode 100644
index 0000000..3dd4d8c
--- /dev/null
+++ b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/Overview.md
@@ -0,0 +1,58 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# YARN Service
+## Overview
+The YARN Service framework provides first class support and APIs to host long running services natively in YARN.
+In a nutshell, it serves as a container orchestration platform for managing containerized services on YARN. It supports both Docker containers
+and traditional process based containers in YARN.
+
+The responsibilities of this framework include performing configuration resolutions and mounts,
+lifecycle management such as stopping/starting/deleting the service, flexing service components up/down, rolling upgrades of services on YARN,
+monitoring services' health and readiness, and more.
+
+The yarn-service framework primarily includes the following components:
+
+* A core framework (ApplicationMaster) running on YARN to serve as a container orchestrator, responsible for all service lifecycle management.
+* A restful API-server for users to interact with YARN to deploy/manage their services via a simple JSON spec.
+* A DNS server backed by the YARN service registry to enable discovering services on YARN via standard DNS lookup.
+
+## Why should I try YARN Service framework?
+
+The YARN Service framework makes it easy to bring existing services onto YARN.
+It hides all the complex low-level details of application management and relieves
+users from having to write new code. Developers of new services do not have
+to worry about YARN internals and only need to focus on containerization of their
+service(s).
+
+Another huge win of this feature is that you can now run both
+traditional batch processing jobs and long running services in a single platform!
+The benefits of combining these workloads are two-fold:
+
+* It greatly simplifies cluster operations, as you have only a single cluster to deal with.
+* Making both batch jobs and services share a cluster can greatly improve resource utilization.
+
+## How do I get started?
+
+*`This feature is in alpha state`*, so APIs and command lines are subject to
+change. We will continue to update the documentation over time.
+
+[QuickStart](QuickStart.md) is a quick tutorial that walks you through simple
+steps to deploy a service on YARN.
+
+## How do I get my hands dirty?
+
+* [Concepts](Concepts.md): Describes the internals of the framework and some 
features in YARN core to support running services on YARN.
+* [Service REST API](YarnServiceAPI.md): The API doc for deploying/managing 
services on YARN.
+* [Service Discovery](ServiceDiscovery.md): A deep dive into the YARN DNS
+internals.
+
+
+ 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
----------------------------------------------------------------------
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
new file mode 100644
index 0000000..327566b
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md
@@ -0,0 +1,218 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# Quick Start
+
+This document describes how to deploy services on YARN using the YARN Service 
framework.
+
+<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
+
+## Start HDFS and YARN components
+
+Start all the Hadoop components (HDFS, YARN) as usual.
+
+
+## Example service
+Below is a simple service definition that launches sleep containers on YARN
+using a simple spec file, without writing any code.
+
+```
+{
+  "name": "sleeper-service",
+  "components" : 
+    [
+      {
+        "name": "sleeper",
+        "number_of_containers": 1,
+        "launch_command": "sleep 900000",
+        "resource": {
+          "cpus": 1, 
+          "memory": "256"
+       }
+      }
+    ]
+}
+```
+
+For launching Docker-based services using the YARN Service framework, please
+refer to the [API doc](YarnServiceAPI.md).
+
+## Manage services on YARN via CLI
+The below steps walk you through deploying a service on YARN using the CLI.
+Refer to [Yarn Commands](../YarnCommands.md) for the full list of commands and
+options.
+### Deploy a service
+```
+yarn service create --file ${PATH_TO_SERVICE_DEF_FILE}
+```
+Params:
+- PATH_TO_SERVICE_DEF_FILE: The path to the service definition file in JSON
+format. Note that the service name given in the definition needs to be unique
+across all running services.
+
+For example:
+```
+yarn service create --file /path/to/local/sleeper.json
+```
+
+### Flex a component of a service
+Increase or decrease the number of containers for a component.
+```
+yarn service flex ${SERVICE_NAME} --component ${COMPONENT_NAME} 
${NUMBER_OF_CONTAINERS}
+```
+For example, for a service named `sleeper-service`:
+
+Set the `sleeper` component to `2` containers (absolute number).
+
+```
+yarn service flex sleeper-service --component sleeper 2
+```
+
+### Stop a service
+Stopping a service will stop all containers of the service and the
+ApplicationMaster, but does not delete the service's state, such as the service
+root folder on HDFS.
+```
+yarn service stop ${SERVICE_NAME}
+```
+
+### Restart a stopped service
+Restarting a stopped service is easy - just call start!
+```
+yarn service start ${SERVICE_NAME}
+```
+
+### Destroy a service
+Destroying a service stops it and, in addition, deletes the service root folder
+on HDFS and the records in the YARN Service Registry.
+```
+yarn service destroy ${SERVICE_NAME}
+```
+
+## Manage services on YARN via REST API
+The below steps walk you through deploying services on YARN via the REST API.
+Refer to the [API doc](YarnServiceAPI.md) for the detailed API specification.
+### Start API-Server for deploying services on YARN
+The API server is the service that sits in front of the YARN ResourceManager
+and lets users submit their API specs via HTTP.
+```
+yarn --daemon start apiserver
+```
+The above command starts the API server on localhost at port 9191 by default.
+
+### Deploy a service
+POST the aforementioned example service definition to the api-server endpoint: 
+```
+POST  http://localhost:9191/ws/v1/services
+```
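+
+For example, assuming the example spec above is saved locally as `sleeper.json`
+(a hypothetical file name), the request could be issued with `curl` as a quick
+sketch:
+```
+curl -X POST -H "Content-Type: application/json" \
+  -d @sleeper.json http://localhost:9191/ws/v1/services
+```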
+
+### Get a service status
+```
+GET  http://localhost:9191/ws/v1/services/${SERVICE_NAME}
+```
+
+### Flex a component of a service
+```
+PUT  
http://localhost:9191/ws/v1/services/${SERVICE_NAME}/components/${COMPONENT_NAME}
+```
+`PUT` Request body:
+```
+{
+    "name": "${COMPONENT_NAME}",
+    "number_of_containers": ${COUNT}
+}
+```
+For example:
+```
+{
+    "name": "sleeper",
+    "number_of_containers": 2
+}
+```
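+
+As a sketch, the same flex request for the example `sleeper-service` could be
+sent with `curl`:
+```
+curl -X PUT -H "Content-Type: application/json" \
+  -d '{"name": "sleeper", "number_of_containers": 2}' \
+  http://localhost:9191/ws/v1/services/sleeper-service/components/sleeper
+```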
+
+### Stop a service
+Stopping a service will stop all containers of the service and the
+ApplicationMaster, but does not delete the service's state, such as the service
+root folder on HDFS.
+
+```
+PUT  http://localhost:9191/ws/v1/services/${SERVICE_NAME}
+```
+
+`PUT` Request body:
+```
+{
+  "name": "${SERVICE_NAME}",
+  "state": "STOPPED"
+}
+```
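+
+As a sketch, the stop request for the example `sleeper-service` could be issued
+with `curl`:
+```
+curl -X PUT -H "Content-Type: application/json" \
+  -d '{"name": "sleeper-service", "state": "STOPPED"}' \
+  http://localhost:9191/ws/v1/services/sleeper-service
+```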
+
+### Restart a stopped service
+Restarting a stopped service is easy.
+
+```
+PUT  http://localhost:9191/ws/v1/services/${SERVICE_NAME}
+```
+
+`PUT` Request body:
+```
+{
+  "name": "${SERVICE_NAME}",
+  "state": "STARTED"
+}
+```
+### Destroy a service
+Destroying a service stops it and, in addition, deletes the service root folder
+on HDFS and the records in the YARN Service Registry.
+```
+DELETE  http://localhost:9191/ws/v1/services/${SERVICE_NAME}
+```
+
+## Services UI with YARN UI2 and Timeline Service v2
+A new `service` tab has been added to YARN UI2 specifically to show YARN
+services in a first-class manner.
+The services framework posts its data to the Timeline Service, and the
+`service` UI reads data from the Timeline Service to render its content.
+
+### Enable Timeline Service v2
+Please refer to the [Timeline Service v2 doc](../TimelineServiceV2.md) for how
+to enable Timeline Service v2.
+
+### Enable new YARN UI
+
+Set the below config in `yarn-site.xml` and start the ResourceManager.
+If you are building from source code, make sure you use `-Pyarn-ui` in the
+`mvn` command - this will generate the war file for the new YARN UI (a sample
+build invocation follows the config below).
+```
+  <property>
+    <description>To enable RM web ui2 application.</description>
+    <name>yarn.webapp.ui2.enable</name>
+    <value>true</value>
+  </property>
+```
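+
+As a sketch, a source build that includes the new UI might look like the
+following (the exact profiles and flags depend on your build environment):
+```
+mvn package -Pdist,yarn-ui -DskipTests
+```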
+
+## Service Discovery with YARN DNS
+The YARN Service framework comes with a DNS server (backed by the YARN Service
+Registry) which enables DNS-based discovery of services deployed on YARN.
+That is, users can simply access their services in a well-defined naming format
+as below:
+
+```
+${COMPONENT_INSTANCE_NAME}.${SERVICE_NAME}.${USER}.${DOMAIN}
+```
+For example, in a cluster whose domain name is `yarncluster` (as defined by
+`hadoop.registry.dns.domain-name` in `yarn-site.xml`), a service named `hbase`
+deployed by user `dev` with two components `hbasemaster` and `regionserver` can
+be accessed as below.
+
+This URL points to the usual HBase master UI:
+```
+http://hbasemaster-0.hbase.dev.yarncluster:16010/master-status
+```
+
+
+Note that the YARN Service framework assigns a COMPONENT_INSTANCE_NAME to each
+container using a sequence of monotonically increasing integers. For example,
+`hbasemaster-0` is assigned `0` since it is the first and only instance of the
+`hbasemaster` component. The `regionserver` component can have multiple
+containers, which are named accordingly: `regionserver-0`, `regionserver-1`,
+`regionserver-2`, etc.
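+
+As a quick check, such a record can be queried with a standard DNS client,
+assuming the YARN DNS server runs on `localhost` at its default port `5353`
+(see the next section):
+```
+dig @localhost -p 5353 hbasemaster-0.hbase.dev.yarncluster
+```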
+ 
+`Disclaimer`: The DNS implementation is still experimental. It should not be 
used as a fully-functional corporate DNS. 
+
+### Start the DNS server 
+By default, the DNS runs on non-privileged port `5353`.
+If it is configured to use the standard privileged port `53`, the DNS server 
needs to be run as root:
+```
+sudo su - -c "yarn org.apache.hadoop.registry.server.dns.RegistryDNSServer > 
/${HADOOP_LOG_FOLDER}/registryDNS.log 2>&1 &" root
+```
+Please refer to the [YARN DNS doc](ServiceDiscovery.md) for the full list of
+configurations.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/803eb069/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceDiscovery.md
----------------------------------------------------------------------
diff --git 
a/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceDiscovery.md
 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceDiscovery.md
new file mode 100644
index 0000000..6318a07
--- /dev/null
+++ 
b/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/ServiceDiscovery.md
@@ -0,0 +1,150 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+# YARN DNS Server
+
+<!-- MACRO{toc|fromDepth=0|toDepth=3} -->
+
+## Introduction
+
+The YARN DNS Server provides a standard DNS interface to the information 
posted into the YARN Registry by deployed applications. The DNS service serves 
the following functions:
+
+1. **Exposing existing service-discovery information via DNS** - Information
+provided in the current YARN Service Registry's records is converted into DNS
+entries, thus allowing users to discover information about YARN applications
+using standard DNS client mechanisms (e.g. a DNS SRV record specifying the
+hostname and port number for a service).
+2. **Enabling container-to-IP mappings** - Enables discovery of the IPs of
+containers via standard DNS lookups. Given the availability of the records via
+DNS, container name-based communication is facilitated (e.g. `curl
+http://myContainer.myDomain.com/endpoint`).
+
+## Service Properties
+
+The existing YARN Service Registry is leveraged as the source of information 
for the DNS Service.
+
+The following core functions are supported by the DNS server:
+
+### Functional properties
+
+1. Supports creation of DNS records for endpoints of deployed YARN
+applications.
+2. Record names remain unchanged during restarts of containers and/or
+applications.
+3. Supports reverse lookups (name based on IP). Note that this works only for
+Docker containers.
+4. Supports security using the standards defined by the Domain Name System
+Security Extensions (DNSSEC).
+5. Highly available.
+6. Scalable - the service provides the responsiveness (e.g. low latency)
+required to respond to DNS queries (timeouts yield attempts to invoke other
+configured name servers).
+
+### Deployment properties
+
+1. Supports integration with existing DNS assets (e.g. a corporate DNS server)
+by acting as a DNS server for a Hadoop cluster zone/domain. The server is not
+intended to act as a primary DNS server and does not forward requests to other
+servers.
+2. The DNS server exposes a port that can receive both TCP and UDP requests per
+DNS standards. The standard DNS port (53) is in a restricted, administrative
+port range, so the server defaults to the non-privileged port 5353, and the
+port is configurable for deployments in which the service may not be managed
+via an administrative account.
+
+## DNS Record Name Structure
+
+The DNS names of generated records are composed from the following elements
+(labels); a worked example follows the list. Note that these elements must be
+compatible with DNS conventions (see "Preferred Name Syntax" in RFC 1035):
+
+* **domain** - the name of the cluster DNS domain. This name is provided as a
+configuration property. In addition, it is this name that is configured at a 
parent DNS
+server as the zone name for the defined yDNS zone (the zone for which the 
parent DNS
+server will forward requests to yDNS). E.g. yarncluster.com
+* **username** - the name of the application deployer. This name is the simple
+short-name (e.g. the primary component of the Kerberos principal) associated
+with the user launching the application. As the username is one of the elements
+of DNS names, it is expected that it also conforms to DNS name conventions (RFC
+1035 linked above), so special translation is performed for names with special
+characters like hyphens and spaces.
+* **application name** - the name of the deployed YARN application. This name
+is inferred from the YARN registry path to the application's node. Application
+name, rather than application id, was chosen as a way of making it easy for
+users to refer to human-readable DNS names. This obviously mandates certain
+uniqueness properties on application names.
+* **container id** - the YARN assigned ID to a container (e.g.
+container_e3741_1454001598828_01_000004)
+* **component name** - the name assigned to the deployed component (e.g. a
+master component). A component is a distributed element of an application or
+service that is launched in a YARN container (e.g. an HBase master). One can
+imagine multiple components within an application. A component name is not yet
+a first-class concept in YARN, but is a very useful one that we are introducing
+here for the sake of yDNS entries. Many frameworks like MapReduce and Slider
+already have component names (though, as mentioned, they are not yet supported
+in YARN in a first-class fashion).
+* **api** - the API designation for the exposed endpoint.
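+
+As a worked example (the names are hypothetical), a component-instance record
+might decompose as follows:
+```
+hbasemaster-0.hbase.dev.yarncluster.com
+\___________/ \___/ \_/ \_____________/
+  component    app  user  cluster
+  instance     name        domain
+```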
+
+### Notes about DNS Names
+
+* In most instances, the DNS names can be easily distinguished by the number of
+elements/labels that compose the name. The cluster's domain name is always the
+last element. After that element is parsed out, reading from right to left, the
+first element maps to the application user and so on. Wherever it is not easily
+distinguishable, naming conventions are used to disambiguate the name using a
+prefix such as "container" or a suffix such as "api". For example, an endpoint
+published as a management endpoint will be referenced with the name
+*management-api.griduser.yarncluster.com*.
+* Unique application name (per user) is not currently supported/guaranteed by 
YARN, but
+it is supported by frameworks such as Apache Slider. The yDNS service currently
+leverages the last element of the ZK path entry for the application as an
+application name. These application names have to be unique for a given user.
+
+## DNS Server Functionality
+
+The primary functions of the DNS service are illustrated in the following 
diagram:
+
+![DNS Functional Overview](../images/dns_overview.png "DNS Functional 
Overview")
+
+### DNS record creation
+The following figure illustrates in slightly greater detail the DNS record
+creation and registration sequence (NOTE: service record updates would follow a
+similar sequence of steps, distinguished only by the different event type):
+
+![DNS Record Creation](../images/dns_record_creation.jpeg "DNS Record Creation")
+
+### DNS record removal
+Record removal follows a similar sequence:
+
+![DNS Record Removal](../images/dns_record_removal.jpeg "DNS Record Removal")
+
+(NOTE: The DNS Zone requires a record as an argument for the deletion method, 
thus
+requiring similar parsing logic to identify the specific records that should 
be removed).
+
+### DNS Service initialization
+* The DNS service initializes both UDP and TCP listeners on a configured port.
+As noted above, the standard DNS port (53) is in a restricted range that is
+only accessible to an account with administrative privileges, so the service
+defaults to port 5353.
+* Subsequently, the DNS service listens for inbound DNS requests. Those 
requests are
+standard DNS requests from users or other DNS servers (for example, DNS 
servers that have the
+YARN DNS service configured as a forwarder).
+
+## Start the DNS Server
+By default, the DNS runs on non-privileged port `5353`.
+If it is configured to use the standard privileged port `53`, the DNS server 
needs to be run as root:
+```
+sudo su - -c "yarn org.apache.hadoop.registry.server.dns.RegistryDNSServer > 
/${HADOOP_LOG_FOLDER}/registryDNS.log 2>&1 &" root
+```
+
+## Configuration
+The YARN DNS server reads its configuration properties from the `yarn-site.xml`
+file. The following are the DNS-associated configuration properties:
+
+| Name | Description |
+| ------------ | ------------- |
+| hadoop.registry.dns.enabled | The DNS functionality is enabled for the 
cluster. Default is false. |
+| hadoop.registry.dns.domain-name  | The domain name for Hadoop cluster 
associated records.  |
+| hadoop.registry.dns.bind-address | Address associated with the network 
interface to which the DNS listener should bind.  |
+| hadoop.registry.dns.bind-port | The port number for the DNS listener. The default port is 5353. The standard DNS port (53) falls in an administrator-only range, so deployments that use it may need to run the server with elevated privileges or specify an alternate port. |
+| hadoop.registry.dns.dnssec.enabled | Indicates whether the DNSSEC support is 
enabled. Default is false.  |
+| hadoop.registry.dns.public-key  | The base64 representation of the 
server’s public key. Leveraged for creating the DNSKEY Record provided for 
DNSSEC client requests.  |
+| hadoop.registry.dns.private-key-file  | The path to the standard DNSSEC 
private key file. Must only be readable by the DNS launching identity. See 
[dnssec-keygen](https://ftp.isc.org/isc/bind/cur/9.9/doc/arm/man.dnssec-keygen.html)
 documentation.  |
+| hadoop.registry.dns-ttl | The default TTL value to associate with DNS records. The default value is set to 1 (a value of 0 has undefined behavior). A typical value should approximate the time it takes YARN to restart a failed container. |
+| hadoop.registry.dns.zone-subnet  | An indicator of the IP range associated 
with the cluster containers. The setting is utilized for the generation of the 
reverse zone name.  |
+| hadoop.registry.dns.zone-mask | The network mask associated with the zone IP 
range.  If specified, it is utilized to ascertain the IP range possible and 
come up with an appropriate reverse zone name. |
+| hadoop.registry.dns.zones-dir | A directory containing zone configuration 
files to read during zone initialization.  This directory can contain zone 
master files named *zone-name.zone*.  See 
[here](http://www.zytrax.com/books/dns/ch6/mydomain.html) for zone master file 
documentation.|
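+
+As a minimal sketch using the properties above (the values shown are
+illustrative), a `yarn-site.xml` fragment enabling the DNS server might look
+like:
+
+```
+<property>
+  <name>hadoop.registry.dns.enabled</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hadoop.registry.dns.domain-name</name>
+  <value>yarncluster</value>
+</property>
+<property>
+  <name>hadoop.registry.dns.bind-port</name>
+  <value>5353</value>
+</property>
+```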

