Author: lmccay
Date: Tue Nov 11 17:03:59 2014
New Revision: 1638218

URL: http://svn.apache.org/r1638218
Log:
added 0.5.0 dev-guide properly and moved samples to after quick start in TOC

Added:
    knox/trunk/books/0.5.0/dev-guide/
    knox/trunk/books/0.5.0/dev-guide/book.md
    knox/trunk/books/0.5.0/dev-guide/deployment-overview.puml
    knox/trunk/books/0.5.0/dev-guide/deployment-provider-simple.puml
    knox/trunk/books/0.5.0/dev-guide/deployment-provider.puml
    knox/trunk/books/0.5.0/dev-guide/deployment-service-simple.puml
    knox/trunk/books/0.5.0/dev-guide/deployment-service.puml
    knox/trunk/books/0.5.0/dev-guide/runtime-overview.puml
    knox/trunk/books/0.5.0/dev-guide/runtime-request-processing.puml
Modified:
    knox/site/books/knox-0-5-0/dev-guide.html
    knox/site/books/knox-0-5-0/knox-0-5-0.html
    knox/trunk/books/0.5.0/book.md

Modified: knox/site/books/knox-0-5-0/dev-guide.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/dev-guide.html?rev=1638218&r1=1638217&r2=1638218&view=diff
==============================================================================
--- knox/site/books/knox-0-5-0/dev-guide.html (original)
+++ knox/site/books/knox-0-5-0/dev-guide.html Tue Nov 11 17:03:59 2014
@@ -13,7 +13,7 @@
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
---><p><link href="book.css" rel="stylesheet"/></p><p><img src="knox-logo.gif" 
alt="Knox"/> <img src="apache-logo.gif" align="right" alt="Apache"/></p><h1><a 
id="Apache+Knox+Gateway+0.5.x+Developer's+Guide"></a>Apache Knox Gateway 0.5.x 
Developer&rsquo;s Guide</h1><h2><a id="Table+Of+Contents"></a>Table Of 
Contents</h2>
+--><p><link href="book.css" rel="stylesheet"/></p><p><img src="knox-logo.gif" 
alt="Knox"/> <img src="apache-logo.gif" align="right" alt="Apache"/></p><h1><a 
id="Apache+Knox+Gateway+0.5.x+Developer's+Guide"></a>Apache Knox Gateway 0.5.x 
Developer&rsquo;s Guide</h1><h2><a id="Table+Of+Contents"></a>Table Of 
Contents</h2>
 <ul>
   <li><a href="#Overview">Overview</a></li>
   <li><a href="#Architecture+Overview">Architecture Overview</a></li>
@@ -476,14 +476,14 @@ public interface ServiceDeploymentContri
     context.contributeFilter( service, resource, &quot;rewrite&quot;, null, 
null );
   }
 </code></pre>
-<dl><dt>UrlRewriteRulesDescriptor allRules = context.getDescriptor( 
&ldquo;rewrite&rdquo; );</dt><dd>Here the rewrite provider runtime descriptor 
is obtained by name from the deployment context. This does represent a tight 
coupling in this case between this service and the default rewrite provider. 
The rewrite provider however is unlikely to be related with alternate 
implementations.</dd><dt>UrlRewriteRulesDescriptor newRules = 
loadRulesFromClassPath();</dt><dd>This is convenience method for loading 
partial rewrite descriptor information from the classpath. Developing and 
maintaining these rewrite rules is far easier as an external resource. The 
rewrite descriptor API could however have been used to achieve the same 
result.</dd><dt>allRules.addRules( newRules );</dt><dd>Here the rewrite rules 
for the weather service are merged into the largest set of rewrite rules.</dd>
+<dl><dt>UrlRewriteRulesDescriptor allRules = context.getDescriptor( 
&ldquo;rewrite&rdquo; );</dt><dd>Here the rewrite provider runtime descriptor 
is obtained by name from the deployment context. This does represent a tight 
coupling in this case between this service and the default rewrite provider. 
The rewrite provider, however, is unlikely to be replaced with an alternate 
implementation.</dd><dt>UrlRewriteRulesDescriptor newRules = 
loadRulesFromClassPath();</dt><dd>This is a convenience method for loading 
partial rewrite descriptor information from the classpath. Developing and 
maintaining these rewrite rules is far easier as an external resource. The 
rewrite descriptor API could, however, have been used to achieve the same 
result.</dd><dt>allRules.addRules( newRules );</dt><dd>Here the rewrite rules 
for the weather service are merged into the largest set of rewrite rules.</dd>
 </dl>
 <pre><code class="xml">&lt;project&gt;
     &lt;modelVersion&gt;4.0.0&lt;/modelVersion&gt;
     &lt;parent&gt;
         &lt;groupId&gt;org.apache.hadoop&lt;/groupId&gt;
         &lt;artifactId&gt;gateway&lt;/artifactId&gt;
-        &lt;version&gt;0.6.0-SNAPSHOT&lt;/version&gt;
+        &lt;version&gt;0.5.0-SNAPSHOT&lt;/version&gt;
     &lt;/parent&gt;
 
     &lt;artifactId&gt;gateway-service-weather&lt;/artifactId&gt;
@@ -535,7 +535,7 @@ public interface ServiceDeploymentContri
     &lt;parent&gt;
         &lt;groupId&gt;org.apache.hadoop&lt;/groupId&gt;
         &lt;artifactId&gt;gateway&lt;/artifactId&gt;
-        &lt;version&gt;0.6.0-SNAPSHOT&lt;/version&gt;
+        &lt;version&gt;0.5.0-SNAPSHOT&lt;/version&gt;
     &lt;/parent&gt;
 
     &lt;artifactId&gt;gateway-provider-security-authn-sample&lt;/artifactId&gt;

Modified: knox/site/books/knox-0-5-0/knox-0-5-0.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/knox-0-5-0.html?rev=1638218&r1=1638217&r2=1638218&view=diff
==============================================================================
--- knox/site/books/knox-0-5-0/knox-0-5-0.html (original)
+++ knox/site/books/knox-0-5-0/knox-0-5-0.html Tue Nov 11 17:03:59 2014
@@ -17,12 +17,12 @@
 <ul>
   <li><a href="#Introduction">Introduction</a></li>
   <li><a href="#Quick+Start">Quick Start</a></li>
+  <li><a href="#Gateway+Samples">Gateway Samples</a></li>
   <li><a href="#Apache+Knox+Details">Apache Knox Details</a>
   <ul>
     <li><a href="#Apache+Knox+Directory+Layout">Apache Knox Directory 
Layout</a></li>
     <li><a href="#Supported+Services">Supported Services</a></li>
   </ul></li>
-  <li><a href="#Gateway+Samples">Gateway Samples</a></li>
   <li><a href="#Gateway+Details">Gateway Details</a>
   <ul>
     <li><a href="#URL+Mapping">URL Mapping</a></li>
@@ -277,7 +277,30 @@ Server: Jetty(6.1.26)
   <li><a href="#HBase+Examples">HBase Examples</a></li>
   <li><a href="#Hive+Examples">Hive Examples</a></li>
   <li><a href="#Yarn+Examples">Yarn Examples</a></li>
-</ul><h2><a id="Gateway+Details"></a>Gateway Details</h2><p>This section 
describes the details of the Knox Gateway itself. Including: </p>
+</ul><h3><a id="Gateway+Samples"></a>Gateway Samples</h3><p>The purpose of the 
samples within the {GATEWAY_HOME}/samples directory is to demonstrate the 
capabilities of the Apache Knox Gateway to provide access to the numerous APIs 
that are available from the service components of a Hadoop 
cluster.</p><p>Depending on exactly how your Knox installation was done, some 
number of steps will be required in order to fully install and configure the 
samples for use.</p><p>This section describes the assumptions of the samples 
and the steps to get them working in a couple of different deployment 
scenarios.</p><h4><a id="Assumptions+of+the+Samples"></a>Assumptions of the 
Samples</h4><p>The samples were initially written with the intent of working 
out of the box for the various Hadoop demo environments that are deployed as a 
single-node cluster inside of a VM. The following assumptions were made from 
that context and should be understood in order to get the samples to work in 
other deployment scenarios:</p>
+<ul>
+  <li>That there is a valid Java JDK on the PATH for executing the samples</li>
+  <li>That the Knox Demo LDAP server is running on localhost and port 33389, 
which is the default port for the ApacheDS LDAP server.</li>
+  <li>That the LDAP directory in use has a set of demo users provisioned with 
the convention of username and username&ldquo;-password&rdquo; as the password. 
Most of the samples have some variation of this pattern with 
&ldquo;guest&rdquo; and &ldquo;guest-password&rdquo;.</li>
+  <li>That the Knox Gateway instance is running on the same machine from which 
you will be running the samples - therefore &ldquo;localhost&rdquo; - and that 
the default port of &ldquo;8443&rdquo; is being used.</li>
+  <li>Finally, that there is a properly provisioned sandbox.xml topology in 
the {GATEWAY_HOME}/conf/topologies directory that is configured to point to the 
actual hosts and ports of running service components.</li>
+</ul><h4><a id="Steps+for+Demo+Single+Node+Clusters"></a>Steps for Demo Single 
Node Clusters</h4><p>There should be little, if anything, to do in a demo 
environment that has been provisioned to illustrate the use of Apache 
Knox.</p><p>However, the following items are worth verifying before you 
start:</p>
+<ol>
+  <li>The sandbox.xml topology is configured properly for the deployed 
services</li>
+  <li>That there is an LDAP server running with a guest/guest-password user 
available in the directory</li>
+</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway"></a>Steps for Ambari 
Deployed Knox Gateway</h4><p>Apache Knox instances that are under the 
management of Ambari are generally assumed not to be demo instances. These 
instances are in place to facilitate development, testing or production Hadoop 
clusters.</p><p>The Knox samples can however be made to work with Ambari 
managed Knox instances with a few steps:</p>
+<ol>
+  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
+  <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
+  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</p></li>
+</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a 
Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, 
there is really no way for the installer to know how to configure the topology 
file for you.</p><p>Essentially, these steps are identical to the Ambari 
deployed instance except that #3 should be replaced with the configuration of 
the out-of-the-box sandbox.xml to point the configuration at the proper hosts 
and ports.</p>
+<ol>
+  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
+  <li>Change the hosts and ports within the 
{GATEWAY_HOME}/conf/topologies/sandbox.xml to reflect your actual cluster 
service locations.</li>
+  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</p></li>
+</ol><h2><a id="Gateway+Details"></a>Gateway Details</h2><p>This section 
describes the details of the Knox Gateway itself. Including: </p>
 <ul>
   <li>How URLs are mapped between a gateway that services multiple Hadoop 
clusters and the clusters themselves</li>
   <li>How the gateway is configured through gateway-site.xml and cluster 
specific topology files</li>
@@ -1489,30 +1512,7 @@ APACHE_HOME/bin/apachectl -k stop
       <td>Logging message. Contains additional tracking information.</td>
     </tr>
   </tbody>
-</table><h4><a id="Audit+log+rotation"></a>Audit log rotation</h4><p>Audit 
logging is preconfigured with 
<code>org.apache.log4j.DailyRollingFileAppender</code>. <a 
href="http://logging.apache.org/log4j/1.2/";>Apache log4j</a> contains 
information about other Appenders.</p><h4><a 
id="How+to+change+audit+level+or+disable+it"></a>How to change audit level or 
disable it</h4><p>Audit configuration is stored in the 
<code>conf/gateway-log4j.properties</code> file.</p><p>All audit messages are 
logged at <code>INFO</code> level and this behavior can&rsquo;t be 
changed.</p><p>To change audit configuration <code>log4j.logger.audit*</code> 
and <code>log4j.appender.auditfile*</code> properties in 
<code>conf/gateway-log4j.properties</code> file should be modified.</p><p>Their 
meaning can be found in <a href="http://logging.apache.org/log4j/1.2/";>Apache 
log4j</a>.</p><p>Disabling auditing can be done by decreasing log level for 
appender.</p><h3><a id="Gateway+Samples"></a>Gateway Samples</h3><p>The
  purpose of the samples within the {GATEWAY_HOME}/samples directory is to 
demonstrate the capabilities of the Apache Knox Gateway to provide access to 
the numerous APIs that are available from the service components of a Hadoop 
cluster.</p><p>Depending on exactly how your Knox installation was done, there 
will be some number of steps required in order fully install and configure the 
samples for use.</p><p>This section will help describe the assumptions of the 
samples and the steps to get them to work in a couple of different deployment 
scenarios.</p><h4><a id="Assumptions+of+the+Samples"></a>Assumptions of the 
Samples</h4><p>The samples were initially written with the intent of working 
out of the box for the various Hadoop demo environments that are deployed as a 
single node cluster inside of a VM. The following assumptions were made from 
that context and should be understood in order to get the samples to work in 
other deployment scenarios:</p>
-<ul>
-  <li>That there is a valid java JDK on the PATH for executing the samples</li>
-  <li>The Knox Demo LDAP server is running on localhost and port 33389 which 
is the default port for the ApacheDS LDAP server.</li>
-  <li>That the LDAP directory in use has a set of demo users provisioned with 
the convention of username and username&ldquo;-password&rdquo; as the password. 
Most of the samples have some variation of this pattern with 
&ldquo;guest&rdquo; and &ldquo;guest-password&rdquo;.</li>
-  <li>That the Knox Gateway instance is running on the same machine which you 
will be running the samples from - therefore &ldquo;localhost&rdquo; and that 
the default port of &ldquo;8443&rdquo; is being used.</li>
-  <li>Finally, that there is a properly provisioned sandbox.xml topology in 
the {GATEWAY_HOME}/conf/topologies directory that is configured to point to the 
actual host and ports of running service components.</li>
-</ul><h4><a id="Steps+for+Demo+Single+Node+Clusters"></a>Steps for Demo Single 
Node Clusters</h4><p>There should be little to do if anything in a demo 
environment that has been provisioned with illustrating the use of Apache 
Knox.</p><p>However, the following items will be worth ensuring before you 
start:</p>
-<ol>
-  <li>The sandbox.xml topology is configured properly for the deployed 
services</li>
-  <li>That there is an LDAP server running with guest/guest-password user 
available in the directory</li>
-</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway"></a>Steps for Ambari 
Deployed Knox Gateway</h4><p>Apache Knox instances that are under the 
management of Ambari are generally assumed not to be demo instances. These 
instances are in place to facilitate development, testing or production Hadoop 
clusters.</p><p>The Knox samples can however be made to work with Ambari 
managed Knox instances with a few steps:</p>
-<ol>
-  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
-  <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
-  <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
-  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</p></li>
-</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a 
Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, 
there is really no way for the installer to know how to configure the topology 
file for you.</p><p>Essentially, these steps are identical to the Amabari 
deployed instance except that #3 should be replaced with the configuration of 
the ootb sandbox.xml to point the configuration at the proper hosts and 
ports.</p>
-<ol>
-  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
-  <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
-  <li>Change the hosts and ports within the 
{GATEWAY_HOME}/conf/topologies/sandbox.xml to reflect your actual cluster 
service locations.</li>
-  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</p></li>
-</ol><h2><a id="Client+Details"></a>Client Details</h2><p>Hadoop requires a 
client that can be used to interact remotely with the services provided by 
Hadoop cluster. This will also be true when using the Apache Knox Gateway to 
provide perimeter security and centralized access for these services. The two 
primary existing clients for Hadoop are the CLI (i.e. Command Line Interface, 
hadoop) and HUE (i.e. Hadoop User Environment). For several reasons however, 
neither of these clients can <em>currently</em> be used to access Hadoop 
services via the Apache Knox Gateway.</p><p>This led to thinking about a very 
simple client that could help people use and evaluate the gateway. The list 
below outlines the general requirements for such a client.</p>
+</table><h4><a id="Audit+log+rotation"></a>Audit log rotation</h4><p>Audit 
logging is preconfigured with 
<code>org.apache.log4j.DailyRollingFileAppender</code>. <a 
href="http://logging.apache.org/log4j/1.2/">Apache log4j</a> contains 
information about other appenders.</p><h4><a 
id="How+to+change+audit+level+or+disable+it"></a>How to change audit level or 
disable it</h4><p>Audit configuration is stored in the 
<code>conf/gateway-log4j.properties</code> file.</p><p>All audit messages are 
logged at <code>INFO</code> level and this behavior can&rsquo;t be 
changed.</p><p>To change the audit configuration, modify the 
<code>log4j.logger.audit*</code> and <code>log4j.appender.auditfile*</code> 
properties in the <code>conf/gateway-log4j.properties</code> 
file.</p><p>Their meaning can be found in <a 
href="http://logging.apache.org/log4j/1.2/">Apache log4j</a>.</p><p>Auditing 
can be disabled by decreasing the log level for the appender.</p><h2><a 
id="Client+Details"></a>Client Details</h2><p>Hadoop requires a client that can 
be used to interact remotely with the services provided by a Hadoop cluster. 
This will also be true when using the Apache Knox 
Gateway to provide perimeter security and centralized access for these 
services. The two primary existing clients for Hadoop are the CLI (i.e. Command 
Line Interface, hadoop) and HUE (i.e. Hadoop User Environment). For several 
reasons however, neither of these clients can <em>currently</em> be used to 
access Hadoop services via the Apache Knox Gateway.</p><p>This led to thinking 
about a very simple client that could help people use and evaluate the gateway. 
The list below outlines the general requirements for such a client.</p>
 <ul>
   <li>Promote the evaluation and adoption of the Apache Knox Gateway</li>
   <li>Simple to deploy and use on data worker desktops to access to remote 
Hadoop clusters</li>

Modified: knox/trunk/books/0.5.0/book.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/book.md?rev=1638218&r1=1638217&r2=1638218&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/book.md (original)
+++ knox/trunk/books/0.5.0/book.md Tue Nov 11 17:03:59 2014
@@ -27,10 +27,10 @@
 
 * #[Introduction]
 * #[Quick Start]
+* #[Gateway Samples]
 * #[Apache Knox Details]
     * #[Apache Knox Directory Layout]
     * #[Supported Services]
-* #[Gateway Samples]
 * #[Gateway Details]
     * #[URL Mapping]
     * #[Configuration]
@@ -76,8 +76,8 @@ In general the goals of the gateway are 
 
 <<quick_start.md>>
 <<book_getting-started.md>>
-<<book_gateway-details.md>>
 <<book_knox-samples.md>>
+<<book_gateway-details.md>>
 <<book_client-details.md>>
 <<book_service-details.md>>
 <<book_limitations.md>>

Added: knox/trunk/books/0.5.0/dev-guide/book.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/book.md?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/book.md (added)
+++ knox/trunk/books/0.5.0/dev-guide/book.md Tue Nov 11 17:03:59 2014
@@ -0,0 +1,1168 @@
+<!--
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+-->
+
+<<../../common/header.md>>
+
+<img src="knox-logo.gif" alt="Knox"/>
+<img src="apache-logo.gif" align="right" alt="Apache"/>
+
+# Apache Knox Gateway 0.5.x Developer's Guide #
+
+## Table Of Contents ##
+* #[Overview]
+  * #[Architecture Overview]
+  * #[Project Overview]
+* #[Behavior]
+  * #[Runtime Behavior]
+  * #[Deployment Behavior]
+* #[Extension Points]
+  * #[Providers]
+  * #[Services]
+* #[Standard Providers]
+  * #[Rewrite Provider]
+* #[Gateway Services]
+* #[Auditing]
+* #[Logging]
+* #[Internationalization]
+
+## Overview ##
+
+Apache Knox gateway is a specialized reverse proxy gateway for various Hadoop 
REST APIs.
+However, the gateway is built entirely upon a fairly generic framework.
+This framework is used to "plug-in" all of the behavior that makes it specific 
to Hadoop in general and any particular Hadoop REST API.
+It would be equally possible to create a customized reverse proxy for other 
non-Hadoop HTTP endpoints.
+This approach is taken to ensure that the Apache Knox gateway can scale with 
the rapidly evolving Hadoop ecosystem.
+
+Throughout this guide we will be using a publicly available REST API to 
demonstrate the development of various extension mechanisms.
+http://openweathermap.org/
+
+### Architecture Overview ###
+
+The gateway itself is a layer over an embedded Jetty JEE server.
+At the very highest level, the gateway processes requests by using the request 
URL to look up a specific JEE Servlet Filter chain that is used to process the 
request.
+The gateway framework provides extensible mechanisms to assemble chains of 
custom filters that support secured access to services.
+
+The gateway has two primary extensibility mechanisms: Service and Provider.
+The Service extensibility framework provides a way to add support for new 
HTTP/REST endpoints.
+For example, the support for WebHdfs is plugged into the Knox gateway as a 
Service.
+The Provider extensibility framework allows adding new features to the gateway 
that can be used across Services.
+An example of a Provider is an authentication provider.
+Providers can also expose APIs that other service and provider extensions can 
utilize.
+
+Service and Provider integrations interact with the gateway framework in two 
distinct phases: Deployment and Runtime.
+The gateway framework can be thought of as a layer over the JEE Servlet 
framework.
+Specifically all runtime processing within the gateway is performed by JEE 
Servlet Filters.
+The two phases interact with this JEE Servlet Filter based model in very 
different ways.
+The first phase, Deployment, is responsible for converting fairly 
simple-to-understand configuration, called a topology, into JEE Web Archive 
(WAR) based implementation details.
+The second phase, Runtime, is the processing of requests via a set of Filters 
configured in the WAR.
+
+From an "ethos" perspective, Service and Provider extensions should attempt to 
incur complexity associated with configuration in the deployment phase.
+This should allow for streamlined request processing that is high performance 
and easily testable.
+The preference at runtime, in OO style, is for small classes that perform a 
specific function.
+The ideal set of implementation classes are then assembled by the Service and 
Provider plugins during deployment.
+
+A second critical design consideration is streaming.
+The processing infrastructure is built around JEE Servlet Filters as they 
provide a natural streaming interception model.
+All Provider implementations should make every attempt to maintain this 
streaming characteristic.
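To make the chain-based interception model concrete, here is a minimal sketch of a filter chain, using simplified stand-in types rather than the actual javax.servlet interfaces; the two filters and the rewrite rule shown are invented for illustration only:

```java
import java.util.Iterator;
import java.util.List;

// Simplified stand-ins for the JEE Servlet Filter model: each filter may
// inspect or transform the request and then delegates to the rest of the chain.
interface Filter {
    String doFilter(String request, Chain chain);
}

class Chain {
    private final Iterator<Filter> filters;

    Chain(List<Filter> filters) { this.filters = filters.iterator(); }

    // Invoke the next filter, or terminate the chain by returning the request.
    String doFilter(String request) {
        return filters.hasNext() ? filters.next().doFilter(request, this) : request;
    }
}

public class FilterChainSketch {
    public static String process(String request) {
        // Two illustrative filters: one "authenticates", one rewrites the URL.
        Filter authn = (req, chain) -> chain.doFilter("[authn]" + req);
        Filter rewrite = (req, chain) ->
            chain.doFilter(req.replace("/webhdfs", "/gateway/sandbox/webhdfs"));
        return new Chain(List.of(authn, rewrite)).doFilter(request);
    }

    public static void main(String[] args) {
        System.out.println(process("GET /webhdfs/v1/?op=LISTSTATUS"));
    }
}
```

Because each filter simply passes the (possibly transformed) request along to the rest of the chain, the model composes naturally and lends itself to streaming.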
+
+### Project Overview ###
+
+The table below describes the purpose of the current modules in the project.
+Of particular importance are the root pom.xml and the gateway-release module.
+The root pom.xml is critical because this is where all dependency versions 
must be declared.
+There should be no dependency version information in module pom.xml files.
+The gateway-release module is critical because the dependencies declared there 
essentially define the classpath of the released gateway server.
+This is also true of the other -release modules in the project.
+
+| File/Module                                    | Description                 
                              |
+| 
-----------------------------------------------|-----------------------------------------------------------|
+| LICENSE                                        | The license for all source 
files in the release.          |
+| NOTICE                                         | Attributions required by 
dependencies.                    |
+| README                                         | A brief overview of the 
Knox project.                     |
+| CHANGES                                        | A description of the 
changes for each release.            |
+| ISSUES                                         | The knox issues for the 
current release.                  |
+| gateway-util-common                            | Common low level utilities 
used by many modules.          |
+| gateway-util-launcher                          | The launcher framework.     
                              |
+| gateway-util-urltemplate                       | The URL template and 
rewrite utilities.                   |
+| gateway-i18n                                   | The i18n logging and 
resource framework.                  |
+| gateway-i18n-logging-log4j                     | The integration of i18n 
logging with log4j.               |
+| gateway-i18n-logging-sl4j                      | The integration of i18n 
logging with sl4j.                |
+| gateway-spi                                    | The SPI for service and 
provider extensions.              |
+| gateway-provider-identity-assertion-pseudo     | The identity assertion 
provider.                          |
+| gateway-provider-jersey                        | The jersey display 
provider.                              |
+| gateway-provider-rewrite                       | The URL rewrite provider.   
                              |
+| gateway-provider-rewrite-func-hostmap-static   | Host mapping function 
extension to rewrite.               |
+| gateway-provider-rewrite-func-service-registry | Service registry function 
extension to rewrite.           |
+| gateway-provider-rewrite-step-secure-query     | Crypto step extension to 
rewrite.                         |
+| gateway-provider-security-authz-acls           | Service level 
authorization.                              |
+| gateway-provider-security-jwt                  | JSON Web Token utilities.   
                              |
+| gateway-provider-security-preauth              | Preauthenticated SSO header 
support.                      |
+| gateway-provider-security-shiro                | Shiro authentication 
integration.                          |
+| gateway-provider-security-webappsec            | Filters to prevent common 
webapp security issues.         |
+| gateway-service-as                             | The implementation of the 
Access service POC.             |
+| gateway-service-hbase                          | The implementation of the 
HBase service.                  |
+| gateway-service-hive                           | The implementation of the 
Hive service.                   |
+| gateway-service-oozie                          | The implementation of the 
Oozie service.                  |
+| gateway-service-tgs                            | The implementation of the 
Ticket Granting service POC.    |
+| gateway-service-webhcat                        | The implementation of the 
WebHCat service.                |
+| gateway-service-webhdfs                        | The implementation of the 
WebHdfs service.                |
+| gateway-server                                 | The implementation of the 
Knox gateway server.            |
+| gateway-shell                                  | The implementation of the 
Knox Groovy shell.              |
+| gateway-test-ldap                              | Pulls in all of the 
dependencies of the test LDAP server. |
+| gateway-server-launcher                        | The launcher definition for 
the gateway.                  |
+| gateway-shell-launcher                         | The launcher definition for 
the shell.                    |
+| knox-cli-launcher                              | A module to pull in all of 
the dependencies of the CLI.   |
+| gateway-test-ldap-launcher                     | The launcher definition for 
the test LDAP server.         |
+| gateway-release                                | The definition of the 
gateway binary release. Contains content and dependencies to be included in 
binary gateway package. |
+| gateway-test-utils                             | Various utilities used in 
unit and system tests.          |
+| gateway-test                                   | The functional tests.       
                              |
+| pom.xml                                        | The top level pom.          
                              |
+| build.xml                                      | A collection of utilities 
for building and releasing.      |
+
+
+### Development Processes ###
+
+The project uses Maven in general with a few convenience Ant targets.
+
+The project can be built via Maven or Ant.  The two commands below 
are equivalent.
+
+```
+mvn clean install
+ant
+```
+
+A more complete build can be done that also generates the unsigned ZIP 
release artifacts.
+You will find these in the target/{version} directory (e.g. 
target/0.5.0-SNAPSHOT).
+
+```
+mvn -Prelease clean install
+ant release
+```
+
+There are a few other Ant targets that are especially convenient for testing.
+
+This command installs the gateway into the `install` directory of the 
project.
+Note that this command does not first build the project.
+
+```
+ant install-test-home
+```
+
+This command starts the gateway and LDAP servers installed by the command 
above in a test GATEWAY_HOME (i.e. install).
+Note that this command does not first install the test home.
+
+```
+ant start-test-servers
+```
+
+Putting things together, the following Ant command will build a release, 
install it, and start the servers ready for manual testing.
+
+```
+ant release install-test-home start-test-servers
+```
+
+## Behavior ##
+
+There are two distinct phases in the behavior of the gateway.
+These are the deployment and runtime phases.
+The deployment phase is responsible for converting topology descriptors into 
an executable JEE style WAR.
+The runtime phase is the processing of requests via WAR created during the 
deployment phase.
+
+The deployment phase is arguably the more complex of the two phases.
+This is because runtime relies on well known JEE constructs while deployment 
introduces new framework concepts.
+The base concept of the deployment framework is that of a "contributor".
+In the framework, contributors are pluggable components responsible for generating JEE WAR artifacts from topology files.
+
+### Deployment Behavior ###
+
+The goal of the deployment phase is to take easy to understand topology 
descriptions and convert them into optimized runtime artifacts.
+The topology descriptors should be not only easy to understand, but also easy for a management system (e.g. Ambari) to generate.
+Think of deployment as compiling an assembly descriptor into a JEE WAR.
+WARs are then deployed to an embedded JEE container (i.e. Jetty).
+
+Consider the results of starting the gateway the first time.
+There are two sets of files that are relevant for deployment.
+The first is the topology file `<GATEWAY_HOME>/conf/topologies/sandbox.xml`.
+This second set is the WAR structure created during the deployment of the 
topology file.
+
+```
+data/deployments/sandbox.war.143bfef07f0/WEB-INF
+  web.xml
+  gateway.xml
+  shiro.ini
+  rewrite.xml
+  hostmap.txt
+```
+
+Notice that the directory `sandbox.war.143bfef07f0` is an "unzipped" 
representation of a JEE WAR file.
+This specifically means that it contains a `WEB-INF` directory which contains 
a `web.xml` file.
+For the curious, the strange number (i.e. 143bfef07f0) in the name of the WAR directory is an encoded timestamp.
+This is the timestamp of the topology file (i.e. sandbox.xml) at the time the 
deployment occurred.
+This value is used to determine when topology files have changed and 
redeployment is required.
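
As an aside, the suffix can be decoded with plain Java, assuming it is simply the topology file's modification time rendered in hexadecimal (an assumption consistent with the value's format; the class below is purely illustrative and not part of Knox):

```java
// Illustrative helper (not a Knox class): decode a WAR directory suffix
// such as "143bfef07f0", assuming it is an epoch-millisecond timestamp
// rendered in hexadecimal.
public class WarSuffixDecoder {

  public static long decodeMillis( String suffix ) {
    return Long.parseLong( suffix, 16 );
  }

  public static void main( String[] args ) {
    // 0x143bfef07f0 = 1390494550000 ms, i.e. a January 2014 timestamp.
    System.out.println( new java.util.Date( decodeMillis( "143bfef07f0" ) ) );
  }
}
```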
+
+Here is a brief overview of the purpose of each file in the WAR structure.
+
+web.xml
+: A standard JEE WAR descriptor.
+In this case a built-in GatewayServlet is mapped to the URL pattern /*.
+
+gateway.xml
+: The configuration file for the GatewayServlet.
+Defines the filter chain that will be applied to each service's various URLs.
+
+shiro.ini
+: The configuration file for the Shiro authentication provider's filters.
+This information is derived from the information in the provider section of 
the topology file.
+
+rewrite.xml
+: The configuration file for the rewrite provider's filter.
+This captures all of the rewrite rules for the services.
+These rules are contributed by the contributors for each service.
+
+hostmap.txt
+: The configuration file for the hostmap provider's filter.
+This information is derived from the information in the provider section of 
the topology file.
+
+The deployment framework follows "visitor" style patterns.
+Each topology file is parsed and the various constructs within it are 
"visited".
+The appropriate contributor for each visited construct is selected by the 
framework.
+The contributor is then passed the construct from the topology file and asked to update the JEE WAR artifacts.
+Each contributor is free to inspect and modify any portion of the WAR 
artifacts.
+
+The diagram below provides an overview of the deployment processing.
+Detailed descriptions of each step follow the diagram.
+
+<<deployment-overview.puml>>
+
+1. The gateway server loads a topology file from conf/topologies into an 
internal structure.
+
+2. The gateway server delegates to a deployment factory to create the JEE WAR 
structure.
+
+3. The deployment factory first creates a basic WAR structure with 
WEB-INF/web.xml.
+
+4. Each provider and service in the topology is visited and the appropriate 
deployment contributor invoked.
+Each contributor is passed the appropriate information from the topology and 
modifies the WAR structure.
+
+5. A complete WAR structure is returned to the gateway service.
+
+6. The gateway server uses internal container APIs to dynamically deploy the 
WAR.
+
+The Java method below is the actual code from the DeploymentFactory that 
implements this behavior.
+You will note the initialize, contribute, finalize sequence.
+Each contributor is given three opportunities to interact with the topology 
and archive.
+This allows the various contributors to interact if required.
+For example, the service contributors use the deployment descriptor added to 
the WAR by the rewrite provider.
+
+```java
+public static WebArchive createDeployment( GatewayConfig config, Topology 
topology ) {
+  Map<String,List<ProviderDeploymentContributor>> providers;
+  Map<String,List<ServiceDeploymentContributor>> services;
+  DeploymentContext context;
+
+  providers = selectContextProviders( topology );
+  services = selectContextServices( topology );
+  context = createDeploymentContext( config, topology.getName(), topology, 
providers, services );
+
+  initialize( context, providers, services );
+  contribute( context, providers, services );
+  finalize( context, providers, services );
+
+  return context.getWebArchive();
+}
+```
+
+Below is a diagram that provides more detail.
+This diagram focuses on the interactions between the deployment factory and 
the service deployment contributors.
+Detailed description of each step follow the diagram.
+
+<<deployment-service.puml>>
+
+1. The gateway server loads global configuration (i.e. <GATEWAY_HOME>/conf/gateway-site.xml).
+
+2. The gateway server loads a topology descriptor file.
+
+3. The gateway server delegates to the deployment factory to create a 
deployable WAR structure.
+
+4. The deployment factory creates a runtime descriptor to configure the gateway servlet.
+
+5. The deployment factory creates a basic WAR structure and adds the gateway 
servlet runtime descriptor to it.
+
+6. The deployment factory creates a deployment context object and adds the WAR 
structure to it.
+
+7. For each service defined in the topology descriptor file the appropriate 
service deployment contributor is selected and invoked.
+The correct service deployment contributor is determined by matching the role 
of a service in the topology descriptor
+to a value provided by the getRole() method of the 
ServiceDeploymentContributor interface.
+The initializeContribution method from _each_ service identified in the 
topology is called.
+Each service deployment contributor is expected to set up any runtime artifacts in the WAR that other services or providers may need.
+
+8. The contributeService method from _each_ service identified in the topology 
is called.
+This is where the service deployment contributors will modify any runtime 
descriptors.
+
+9. One of the ways that a service deployment contributor can modify the runtime descriptors is by asking the framework to contribute filters.
+This is how services are loosely coupled to the providers of features.
+For example a service deployment contributor might ask the framework to 
contribute the filters required for authorization.
+The deployment framework will then delegate to the correct provider deployment 
contributor to add filters for that feature.
+
+10. Finally the finalizeContribution method for each service is invoked.
+This provides an opportunity to react to anything done via the 
contributeService invocations and tie up any loose ends.
+
+11. The populated WAR is returned to the gateway server.
+
+The following diagram will provided expanded detail on the behavior of 
provider deployment contributors.
+Much of the beginning and end of the sequence shown overlaps with the service 
deployment sequence above.
+Those steps (i.e. 1-6, 17) will not be described below for brevity.
+The remaining steps have detailed descriptions following the diagram.
+
+<<deployment-provider.puml>>
+
+7. For each provider the appropriate provider deployment contributor is 
selected and invoked.
+The correct provider deployment contributor is determined by first matching the role of a provider in the topology descriptor
+to a value provided by the getRole() method of the ProviderDeploymentContributor interface.
+If this is ambiguous, the name from the topology is used to match the value provided by the getName() method of the ProviderDeploymentContributor interface.
+The initializeContribution method from _each_ provider identified in the topology is called.
+Each provider deployment contributor is expected to set up any runtime artifacts in the WAR that other services or providers may need.
+Note: In addition, other providers not explicitly referenced in the topology may have their initializeContribution method called.
+If this is the case, only one default instance for each role declared via the getRole() method will be used.
+The method used to determine the default instance is non-deterministic, so it is best to select a particular named instance of a provider for each role.
+
+8. Each provider deployment contributor will typically add any runtime 
deployment descriptors it requires for operation.
+These descriptors are added to the WAR structure within the deployment context.
+
+9. The contributeProvider method of each configured or default provider 
deployment contributor is invoked.
+
+10. Each provider deployment contributor populates any runtime deployment 
descriptors based on information in the topology.
+
+11. Provider deployment contributors are never asked to contribute to the 
deployment directly.
+Instead a service deployment contributor will ask to have a particular 
provider role (e.g. authentication) contribute to the deployment.
+
+12. A service deployment contributor asks the framework to contribute filters 
for a given provider role.
+
+13. The framework selects the appropriate provider deployment contributor and 
invokes its contributeFilter method.
+
+14. During this invocation the provider deployment contributor populates service specific information.
+In particular it will add filters to the gateway servlet's runtime descriptor 
by adding JEE Servlet Filters.
+These filters will be added to the resources (or URLs) identified by the 
service deployment contributor.
+
+15. The finalizeContribution method of all referenced and default provider deployment contributors is invoked.
+
+16. The provider deployment contributor is expected to perform any final 
modifications to the runtime descriptors in the WAR structure.
+
+### Runtime Behavior ###
+
+The runtime behavior of the gateway is somewhat simpler as it more or less 
follows well known JEE models.
+There is one significant wrinkle.
+The filter chains are managed within the GatewayServlet as opposed to being 
managed by the JEE container.
+This is the result of an early decision made in the project.
+The intention is to allow more powerful URL matching than is provided by the 
JEE Servlet mapping mechanisms.
+
+The diagram below provides a high level overview of the runtime processing.
+An explanation for each step is provided after the diagram.
+
+<<runtime-overview.puml>>
+
+1. A REST client makes a HTTP request that is received by the embedded JEE 
container.
+
+2. A filter chain is looked up in a map of URLs to filter chains.
+
+3. The filter chain, which is itself a filter, is invoked.
+
+4. Each filter invokes the filters that follow it in the chain.
+The request and response objects can be wrapped in typical JEE Filter fashion.
+A filter may also stop chain processing and return early if that is appropriate.
+
+5. Eventually the end of the last filter in the chain is invoked.
+Typically this is a special "dispatch" filter that is responsible for 
dispatching the request to the ultimate endpoint.
+Dispatch filters are also responsible for reading the response.
+
+6. The response may be in the form of a number of content types (e.g. 
application/json, text/xml).
+
+7. The response entity is streamed through the various response wrappers added 
by the filters.
+These response wrappers may rewrite various portions of the headers and body 
as per their configuration.
+
+8. The return of the response entity to the client is ultimately "pulled 
through" the filter response wrapper by the container.
+
+9. The response entity is returned to the original client.
+
+This diagram provides a more detailed breakdown of the request processing.
+Again descriptions of each step follow the diagram.
+
+<<runtime-request-processing.puml>>
+
+1. A REST client makes a HTTP request that is received by the embedded JEE 
container.
+
+2. The embedded container looks up the servlet mapped to the URL and invokes 
the service method.
+In our case the GatewayServlet is mapped to /* and therefore receives all requests for a given topology.
+Keep in mind that the WAR itself is deployed on a root context path that typically contains a level for the gateway and the name of the topology.
+This means that there is a single GatewayServlet per topology and it is effectively mapped to <gateway>/<topology>/*.
+
+3. The GatewayServlet holds a single reference to a GatewayFilter which is a 
specialized JEE Servlet Filter.
+This choice was made to allow the GatewayServlet to dynamically deploy 
modified topologies.
+This is done by building a new GatewayFilter instance and replacing the old in 
an atomic fashion.
+
+4. The GatewayFilter contains another layer of URL mapping as defined in the 
gateway.xml runtime descriptor.
+The various service deployment contributors added these mappings at deployment time.
+Each service may add a number of different sub-URLs depending on its requirements.
+These sub-URLs will all be mapped to independently configured filter chains.
+
+5. The GatewayFilter invokes the doFilter method on the selected chain.
+
+6. The chain invokes the doFilter method of the first filter in the chain.
+
+7. Each filter in the chain continues processing by invoking the doFilter on 
the next filter in the chain.
+Ultimately a dispatch filter forwards the request to the real service instead of invoking another filter.
+This is sometimes referred to as pivoting.
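
The chain mechanics in steps 5-7 can be sketched in plain Java. The classes below are simplified stand-ins for the JEE Filter API and for Knox's actual GatewayFilter internals; all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the JEE Filter API; not a Knox class.
interface SimpleFilter {
  void doFilter( String request, SimpleChain chain, List<String> trace );
}

// The chain invokes the next filter, if any; each filter invokes the chain in turn.
class SimpleChain {
  private final List<SimpleFilter> filters;
  private int next = 0;
  SimpleChain( List<SimpleFilter> filters ) { this.filters = filters; }
  void doFilter( String request, List<String> trace ) {
    if( next < filters.size() ) {
      filters.get( next++ ).doFilter( request, this, trace );
    }
  }
}

public class ChainDemo {
  public static List<String> run( String request ) {
    List<String> trace = new ArrayList<>();
    SimpleFilter auth = ( req, chain, t ) -> { t.add( "auth" ); chain.doFilter( req, t ); };
    // The final "dispatch" filter pivots: it does not continue the chain.
    SimpleFilter dispatch = ( req, chain, t ) -> t.add( "dispatch:" + req );
    new SimpleChain( List.of( auth, dispatch ) ).doFilter( request, trace );
    return trace;
  }

  public static void main( String[] args ) {
    System.out.println( run( "/weather/2.5" ) ); // [auth, dispatch:/weather/2.5]
  }
}
```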
+
+## Gateway Servlet & Gateway Filter ##
+
+TODO
+
+```xml
+<web-app>
+
+  <servlet>
+    <servlet-name>sample</servlet-name>
+    <servlet-class>org.apache.hadoop.gateway.GatewayServlet</servlet-class>
+    <init-param>
+      <param-name>gatewayDescriptorLocation</param-name>
+      <param-value>gateway.xml</param-value>
+    </init-param>
+  </servlet>
+
+  <servlet-mapping>
+    <servlet-name>sample</servlet-name>
+    <url-pattern>/*</url-pattern>
+  </servlet-mapping>
+
+  <listener>
+    
<listener-class>org.apache.hadoop.gateway.services.GatewayServicesContextListener</listener-class>
+  </listener>
+
+  ...
+
+</web-app>
+```
+
+```xml
+<gateway>
+
+  <resource>
+    <role>WEATHER</role>
+    <pattern>/weather/**?**</pattern>
+
+    <filter>
+      <role>authentication</role>
+      <name>sample</name>
+      <class>...</class>
+    </filter>
+
+    <filter>...</filter>*
+
+  </resource>
+
+</gateway>
+```
+
+```java
+@Test
+public void testDevGuideSample() throws Exception {
+  Template pattern, input;
+  Matcher<String> matcher;
+  Matcher<String>.Match match;
+
+  // GET http://api.openweathermap.org/data/2.5/weather?q=Palo+Alto
+  pattern = Parser.parse( "/weather/**?**" );
+  input = Parser.parse( "/weather/2.5?q=Palo+Alto" );
+
+  matcher = new Matcher<String>();
+  matcher.add( pattern, "fake-chain" );
+  match = matcher.match( input );
+
+  assertThat( match.getValue(), is( "fake-chain") );
+}
+```
+
+## Extension Logistics ##
+
+There are a number of extension points available in the gateway: services, 
providers, rewrite steps and functions, etc.
+All of these use the Java ServiceLoader mechanism for their discovery.
+There are two ways to make these extensions available on the class path at 
runtime.
+The first is to add a new module to the project and have the extension "built-in".
+The second is to add the extension to the class path of the server after it is 
installed.
+Both mechanisms are described in more detail below.
+
+### Service Loaders ###
+
+Extensions are discovered via Java's [ServiceLoader](http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html) mechanism.
+There are good [tutorials](http://docs.oracle.com/javase/tutorial/ext/basics/spi.html) available for learning more about this.
+The basics come down to two things.
+
+1. Implement the service contract interface (e.g. 
ServiceDeploymentContributor, ProviderDeploymentContributor)
+
+2. Create a file in META-INF/services of the JAR that will contain the 
extension.
+This file will be named as the fully qualified name of the contract interface 
(e.g. org.apache.hadoop.gateway.deploy.ProviderDeploymentContributor).
+The contents of the file will be the fully qualified names of any 
implementation of that contract interface in that JAR.
+
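As a concrete sketch, a JAR containing the WeatherDeploymentContributor sample used later in this guide would include a file named META-INF/services/org.apache.hadoop.gateway.deploy.ServiceDeploymentContributor whose content is the implementation's fully qualified name (the package shown here is assumed for illustration):

```
org.apache.hadoop.gateway.weather.WeatherDeploymentContributor
```
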
+One tip is to include a simple test with each of your extensions to ensure that it will be properly discovered.
+This is very helpful in situations where a refactoring fails to update a class name in the META-INF/services files.
+An example of one such test from the project is shown below.
+
+```java
+  @Test
+  public void testServiceLoader() throws Exception {
+    ServiceLoader loader = ServiceLoader.load( 
ProviderDeploymentContributor.class );
+    Iterator iterator = loader.iterator();
+    assertThat( "Service iterator empty.", iterator.hasNext() );
+    while( iterator.hasNext() ) {
+      Object object = iterator.next();
+      if( object instanceof ShiroDeploymentContributor ) {
+        return;
+      }
+    }
+    fail( "Failed to find " + ShiroDeploymentContributor.class.getName() + " 
via service loader." );
+  }
+```
+
+### Class Path ###
+
+One way to extend the functionality of the server without having to recompile is to add the extension JARs to the server's class path.
+As an extensible server this is made straightforward, but it requires some understanding of how the server's class path is setup.
+In the <GATEWAY_HOME> directory there are four class path related directories 
(i.e. bin, lib, dep, ext).
+
+The bin directory contains very small "launcher" jars that contain only enough 
code to read configuration and setup a class path.
+By default the configuration of a launcher is embedded with the launcher JAR 
but it may also be extracted into a .cfg file.
+In that file you will see how the class path is defined.
+
+```
+class.path=../lib/*.jar,../dep/*.jar;../ext;../ext/*.jar
+```
+
+The paths are all relative to the directory that contains the launcher JAR.
+
+../lib/*.jar
+: These are the "built-in" jars that are part of the project itself.
+Information is provided elsewhere in this document for how to integrate a 
built-in extension.
+
+../dep/*.jar
+: These are the JARs for all of the external dependencies of the project.
+This separation between the generated JARs and dependencies helps keep licensing issues straight.
+
+../ext
+: This directory is for post-install extensions and is empty by default.
+Including the directory (vs *.jar) allows for individual classes to be placed 
in this directory.
+
+../ext/*.jar
+: This would pick up all extension JARs placed in the ext directory.
+
+Note that order is significant.  The lib JARs take precedence over dep JARs 
and they take precedence over ext classes and JARs.
+
+### Maven Module ###
+
+Integrating an extension into the project follows well established Maven 
patterns for adding modules.
+Below are several points that are somewhat unique to the Knox project.
+
+1. Add the module to the root pom.xml file's <modules> list.
+Take care to ensure that the module is in the correct place in the list based 
on its dependencies.
+Note: In general modules should not have non-test dependencies on gateway-server but rather gateway-spi.
+
+2. Any new dependencies must be represented in the root pom.xml file's 
<dependencyManagement> section.
+The required version of the dependencies will be declared there.
+The new sub-module's pom.xml file must not include dependency version 
information.
+This helps prevent dependency version conflict issues.
+
+3. If the extension is to be "built into" the released gateway server it needs 
to be added as a dependency to the gateway-release module.
+This is done by adding to the <dependencies> section of the gateway-release's 
pom.xml file.
+If this isn't done the JARs for the module will not be automatically packaged 
into the release artifacts.
+This can be useful while an extension is under development but not yet ready 
for inclusion in the release.
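
As a sketch of point 2, the root pom.xml declares the dependency version once in its <dependencyManagement> section while a sub-module references the dependency without a version (the commons-codec artifact and version below are purely illustrative):

```xml
<!-- root pom.xml -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>1.7</version>
        </dependency>
    </dependencies>
</dependencyManagement>

<!-- sub-module pom.xml: note the absence of a version element -->
<dependencies>
    <dependency>
        <groupId>commons-codec</groupId>
        <artifactId>commons-codec</artifactId>
    </dependency>
</dependencies>
```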
+
+More detailed examples of adding both a service and a provider extension are 
provided in subsequent sections.
+
+### Services ###
+
+Services are extensions that are responsible for converting information in the 
topology file to runtime descriptors.
+Typically services do not require their own runtime descriptors.
+Rather, they modify either the gateway runtime descriptor (i.e. gateway.xml) 
or descriptors of other providers (e.g. rewrite.xml).
+
+The service provider interface for a Service is ServiceDeploymentContributor 
and is shown below.
+
+```java
+package org.apache.hadoop.gateway.deploy;
+import org.apache.hadoop.gateway.topology.Service;
+public interface ServiceDeploymentContributor {
+  String getRole();
+  void initializeContribution( DeploymentContext context );
+  void contributeService( DeploymentContext context, Service service ) throws 
Exception;
+  void finalizeContribution( DeploymentContext context );
+}
+```
+
+Each service provides an implementation of this interface that is discovered via the ServiceLoader mechanism previously described.
+The meaning of this is best understood in the context of the structure of the 
topology file.
+A fragment of a topology file is shown below.
+
+```xml
+<topology>
+    <gateway>
+        ....
+    </gateway>
+    <service>
+        <role>WEATHER</role>
+        <url>http://api.openweathermap.org/data</url>
+    </service>
+    ....
+</topology>
+```
+
+With these two things in mind, a more detailed description of the purpose of each ServiceDeploymentContributor method should be helpful.
+
+String getRole();
+: This is the value the framework uses to associate a given `<service><role>` 
with a particular ServiceDeploymentContributor implementation.
+See below how the example WeatherDeploymentContributor implementation returns 
the role WEATHER that matches the value in the topology file.
+This will result in the WeatherDeploymentContributor's methods being invoked 
when a WEATHER service is encountered in the topology file.
+
+```java
+public class WeatherDeploymentContributor extends 
ServiceDeploymentContributorBase {
+  private static final String ROLE = "WEATHER";
+  @Override
+  public String getRole() {
+    return ROLE;
+  }
+  ...
+}
+```
+
+void initializeContribution( DeploymentContext context );
+: In this method a contributor would create, initialize and add any 
descriptors it was responsible for to the deployment context.
+For the weather service example this isn't required so the empty method isn't 
shown here.
+
+void contributeService( DeploymentContext context, Service service ) throws 
Exception;
+: In this method a service contributor typically adds and configures any features it requires.
+This method will be dissected in more detail below.
+
+void finalizeContribution( DeploymentContext context );
+: In this method a contributor would finalize any descriptors it was responsible for in the deployment context.
+For the weather service example this isn't required so the empty method isn't 
shown here.
+
+#### Service Contribution Behavior ####
+
+In order to understand the job of the ServiceDeploymentContributor a few 
runtime descriptors need to be introduced.
+
+Gateway Runtime Descriptor: WEB-INF/gateway.xml
+: This runtime descriptor controls the behavior of the GatewayFilter.
+It defines a mapping between resources (i.e. URL patterns) and filter chains.
+The sample gateway runtime descriptor helps illustrate.
+
+```xml
+<gateway>
+  <resource>
+    <role>WEATHER</role>
+    <pattern>/weather/**?**</pattern>
+    <filter>
+      <role>authentication</role>
+      <name>sample</name>
+      <class>...</class>
+    </filter>
+    <filter>...</filter>*
+    ...
+  </resource>
+</gateway>
+```
+
+Rewrite Provider Runtime Descriptor: WEB-INF/rewrite.xml
+: The rewrite provider runtime descriptor controls the behavior of the rewrite 
filter.
+Each service contributor is responsible for adding the rules required to 
control the URL rewriting required by that service.
+Later sections will provide more detail about the capabilities of the rewrite 
provider.
+
+```xml
+<rules>
+  <rule dir="IN" name="WEATHER/openweathermap/inbound/versioned/file"
+      pattern="*://*:*/**/weather/{version}?{**}">
+    <rewrite template="{$serviceUrl[WEATHER]}/{version}/weather?{**}"/>
+  </rule>
+</rules>
+```
+
+With these two descriptors in mind a detailed breakdown of the 
WeatherDeploymentContributor's contributeService method will make more sense.
+At a high level the important concept is that contributeService is invoked by 
the framework for each <service> in the topology file.
+
+```java
+public class WeatherDeploymentContributor extends 
ServiceDeploymentContributorBase {
+  ...
+  @Override
+  public void contributeService( DeploymentContext context, Service service ) 
throws Exception {
+    contributeResources( context, service );
+    contributeRewriteRules( context );
+  }
+
+  private void contributeResources( DeploymentContext context, Service service 
) throws URISyntaxException {
+    ResourceDescriptor resource = context.getGatewayDescriptor().addResource();
+    resource.role( service.getRole() );
+    resource.pattern( "/weather/**?**" );
+    addAuthenticationFilter( context, service, resource );
+    addRewriteFilter( context, service, resource );
+    addDispatchFilter( context, service, resource );
+  }
+
+  private void contributeRewriteRules( DeploymentContext context ) throws 
IOException {
+    UrlRewriteRulesDescriptor allRules = context.getDescriptor( "rewrite" );
+    UrlRewriteRulesDescriptor newRules = loadRulesFromClassPath();
+    allRules.addRules( newRules );
+  }
+
+  ...
+}
+```
+
+The DeploymentContext parameter contains information about the deployment as 
well as the WAR structure being created via deployment.
+The Service parameter is the object representation of the <service> element in 
the topology file.
+Details about particularly important lines follow the code block.
+
+ResourceDescriptor resource = context.getGatewayDescriptor().addResource();
+: Obtains a reference to the gateway runtime descriptor and adds a new 
resource element.
+Note that many of the APIs in the deployment framework follow a fluent vs bean 
style.
+
+resource.role( service.getRole() );
+: Sets the role for a particular resource.
+Many of the filters may need access to this role information in order to make 
runtime decisions.
+
+resource.pattern( "/weather/**?**" );
+: Sets the URL pattern to which the filter chain that will follow will be 
mapped within the GatewayFilter.
+
+add*Filter( context, service, resource );
+: These are taken from a base class.
+A representation of the implementation of that method from the base class is 
shown below.
+Notice how this essentially delegates back to the framework to add the filters 
required by a particular provider role (e.g. "rewrite").
+
+```java
+  protected void addRewriteFilter( DeploymentContext context, Service service, 
ResourceDescriptor resource ) {
+    context.contributeFilter( service, resource, "rewrite", null, null );
+  }
+```
+
+UrlRewriteRulesDescriptor allRules = context.getDescriptor( "rewrite" );
+: Here the rewrite provider runtime descriptor is obtained by name from the 
deployment context.
+This does represent a tight coupling in this case between this service and the 
default rewrite provider.
+The rewrite provider, however, is unlikely to be replaced with an alternate implementation.
+
+UrlRewriteRulesDescriptor newRules = loadRulesFromClassPath();
+: This is a convenience method for loading partial rewrite descriptor information from the classpath.
+Developing and maintaining these rewrite rules is far easier as an external 
resource.
+The rewrite descriptor API could however have been used to achieve the same 
result.
+
+allRules.addRules( newRules );
+: Here the rewrite rules for the weather service are merged into the larger set of rewrite rules.
+
+```xml
+<project>
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>gateway</artifactId>
+        <version>0.5.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>gateway-service-weather</artifactId>
+    <name>gateway-service-weather</name>
+    <description>A sample extension to the gateway for a weather REST 
API.</description>
+
+    <licenses>
+        <license>
+            <name>The Apache Software License, Version 2.0</name>
+            <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+            <distribution>repo</distribution>
+        </license>
+    </licenses>
+
+    <dependencies>
+        <dependency>
+            <groupId>${gateway-group}</groupId>
+            <artifactId>gateway-spi</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>${gateway-group}</groupId>
+            <artifactId>gateway-provider-rewrite</artifactId>
+        </dependency>
+
+        ... Test Dependencies ...
+
+    </dependencies>
+
+</project>
+```
+
+### Providers ###
+
+```java
+public interface ProviderDeploymentContributor {
+  String getRole();
+  String getName();
+
+  void initializeContribution( DeploymentContext context );
+  void contributeProvider( DeploymentContext context, Provider provider );
+  void contributeFilter(
+      DeploymentContext context,
+      Provider provider,
+      Service service,
+      ResourceDescriptor resource,
+      List<FilterParamDescriptor> params );
+
+  void finalizeContribution( DeploymentContext context );
+}
+```
+
+```xml
+<project>
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.hadoop</groupId>
+        <artifactId>gateway</artifactId>
+        <version>0.5.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>gateway-provider-security-authn-sample</artifactId>
+    <name>gateway-provider-security-authn-sample</name>
+    <description>A simple sample authorization provider.</description>
+
+    <licenses>
+        <license>
+            <name>The Apache Software License, Version 2.0</name>
+            <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
+            <distribution>repo</distribution>
+        </license>
+    </licenses>
+
+    <dependencies>
+        <dependency>
+            <groupId>${gateway-group}</groupId>
+            <artifactId>gateway-spi</artifactId>
+        </dependency>
+    </dependencies>
+
+</project>
+```
+
+### Deployment Context ###
+
+```java
+package org.apache.hadoop.gateway.deploy;
+
+import ...
+
+public interface DeploymentContext {
+
+  GatewayConfig getGatewayConfig();
+
+  Topology getTopology();
+
+  WebArchive getWebArchive();
+
+  WebAppDescriptor getWebAppDescriptor();
+
+  GatewayDescriptor getGatewayDescriptor();
+
+  void contributeFilter(
+      Service service,
+      ResourceDescriptor resource,
+      String role,
+      String name,
+      List<FilterParamDescriptor> params );
+
+  void addDescriptor( String name, Object descriptor );
+
+  <T> T getDescriptor( String name );
+
+}
+```
+
+```java
+public class Topology {
+
+  public URI getUri() {...}
+  public void setUri( URI uri ) {...}
+
+  public String getName() {...}
+  public void setName( String name ) {...}
+
+  public long getTimestamp() {...}
+  public void setTimestamp( long timestamp ) {...}
+
+  public Collection<Service> getServices() {...}
+  public Service getService( String role, String name ) {...}
+  public void addService( Service service ) {...}
+
+  public Collection<Provider> getProviders() {...}
+  public Provider getProvider( String role, String name ) {...}
+  public void addProvider( Provider provider ) {...}
+}
+```
+
+```java
+public interface GatewayDescriptor {
+  List<GatewayParamDescriptor> params();
+  GatewayParamDescriptor addParam();
+  GatewayParamDescriptor createParam();
+  void addParam( GatewayParamDescriptor param );
+  void addParams( List<GatewayParamDescriptor> params );
+
+  List<ResourceDescriptor> resources();
+  ResourceDescriptor addResource();
+  ResourceDescriptor createResource();
+  void addResource( ResourceDescriptor resource );
+}
+```
+
+### Gateway Services ###
+
+TODO - Describe the service registry and other global services.
+
+## Standard Providers ##
+
+### Rewrite Provider ###
+
+The rewrite provider is implemented by the gateway-provider-rewrite module; its rules are represented by org.apache.hadoop.gateway.filter.rewrite.api.UrlRewriteRulesDescriptor.
+
+```xml
+<rules>
+  <rule
+      dir="IN"
+      name="WEATHER/openweathermap/inbound/versioned/file"
+      pattern="*://*:*/**/weather/{version}?{**}">
+    <rewrite template="{$serviceUrl[WEATHER]}/{version}/weather?{**}"/>
+  </rule>
+</rules>
+```
+
+```xml
+<rules>
+  <filter name="WEBHBASE/webhbase/status/outbound">
+    <content type="*/json">
+      <apply path="$[LiveNodes][*][name]" rule="WEBHBASE/webhbase/address/outbound"/>
+    </content>
+    <content type="*/xml">
+      <apply path="/ClusterStatus/LiveNodes/Node/@name" rule="WEBHBASE/webhbase/address/outbound"/>
+    </content>
+  </filter>
+</rules>
+```
+
+```java
+@Test
+public void testDevGuideSample() throws Exception {
+  URI inputUri, outputUri;
+  Matcher<Void> matcher;
+  Matcher<Void>.Match match;
+  Template input, pattern, template;
+
+  inputUri = new URI( "http://sample-host:8443/gateway/topology/weather/2.5?q=Palo+Alto" );
+
+  input = Parser.parse( inputUri.toString() );
+  pattern = Parser.parse( "*://*:*/**/weather/{version}?{**}" );
+  template = Parser.parse( "http://api.openweathermap.org/data/{version}/weather?{**}" );
+
+  matcher = new Matcher<Void>();
+  matcher.add( pattern, null );
+  match = matcher.match( input );
+
+  outputUri = Expander.expand( template, match.getParams(), null );
+
+  assertThat(
+      outputUri.toString(),
+      is( "http://api.openweathermap.org/data/2.5/weather?q=Palo+Alto" ) );
+}
+```
+
+```java
+@Test
+public void testDevGuideSampleWithEvaluator() throws Exception {
+  URI inputUri, outputUri;
+  Matcher<Void> matcher;
+  Matcher<Void>.Match match;
+  Template input, pattern, template;
+  Evaluator evaluator;
+
+  inputUri = new URI( "http://sample-host:8443/gateway/topology/weather/2.5?q=Palo+Alto" );
+  input = Parser.parse( inputUri.toString() );
+
+  pattern = Parser.parse( "*://*:*/**/weather/{version}?{**}" );
+  template = Parser.parse( "{$serviceUrl[WEATHER]}/{version}/weather?{**}" );
+
+  matcher = new Matcher<Void>();
+  matcher.add( pattern, null );
+  match = matcher.match( input );
+
+  evaluator = new Evaluator() {
+    @Override
+    public List<String> evaluate( String function, List<String> parameters ) {
+      return Arrays.asList( "http://api.openweathermap.org/data" );
+    }
+  };
+
+  outputUri = Expander.expand( template, match.getParams(), evaluator );
+
+  assertThat(
+      outputUri.toString(),
+      is( "http://api.openweathermap.org/data/2.5/weather?q=Palo+Alto" ) );
+}
+```
+
+#### Rewrite Filters ####
+TODO - Cover the supported content types.
+TODO - Provide an XML and JSON "properties" example where one NVP is modified based on the value of another name.
+
+```xml
+<rules>
+  <filter name="WEBHBASE/webhbase/regions/outbound">
+    <content type="*/json">
+      <apply path="$[Region][*][location]" rule="WEBHBASE/webhbase/address/outbound"/>
+    </content>
+    <content type="*/xml">
+      <apply path="/TableInfo/Region/@location" rule="WEBHBASE/webhbase/address/outbound"/>
+    </content>
+  </filter>
+</rules>
+```
+
+```xml
+<gateway>
+  ...
+  <resource>
+    <role>WEBHBASE</role>
+    <pattern>/hbase/*/regions?**</pattern>
+    ...
+    <filter>
+      <role>rewrite</role>
+      <name>url-rewrite</name>
+      <class>org.apache.hadoop.gateway.filter.rewrite.api.UrlRewriteServletFilter</class>
+      <param>
+        <name>response.body</name>
+        <value>WEBHBASE/webhbase/regions/outbound</value>
+      </param>
+    </filter>
+    ...
+  </resource>
+  ...
+</gateway>
+```
+
+For example, HBaseDeploymentContributor adds this same rewrite filter programmatically:
+```java
+    params = new ArrayList<FilterParamDescriptor>();
+    params.add( regionResource.createFilterParam().name( "response.body" ).value( "WEBHBASE/webhbase/regions/outbound" ) );
+    addRewriteFilter( context, service, regionResource, params );
+```
+
+#### Rewrite Functions ####
+TODO - Provide a lowercase function as an example.
+
+```xml
+<rules>
+  <functions>
+    <hostmap config="/WEB-INF/hostmap.txt"/>
+  </functions>
+  ...
+</rules>
+```
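As a hedged illustration for the lowercase-function TODO above, here is what such a function could look like when expressed against the `Evaluator` interface used in the matcher samples earlier. The interface is repeated inline so the snippet compiles on its own; this is a sketch, not the full Knox rewrite-function SPI:

```java
import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;

public class LowercaseFunctionSketch {

  // Mirrors the Evaluator interface shown in the matcher samples above.
  interface Evaluator {
    List<String> evaluate( String function, List<String> parameters );
  }

  // Hypothetical "lowercase" function: lower-cases each parameter value.
  static final Evaluator LOWERCASE = new Evaluator() {
    @Override
    public List<String> evaluate( String function, List<String> parameters ) {
      return parameters.stream()
          .map( s -> s.toLowerCase( Locale.ROOT ) )
          .collect( Collectors.toList() );
    }
  };

  public static void main( String[] args ) {
    System.out.println( LOWERCASE.evaluate( "lowercase", List.of( "Sample-Host" ) ) );
  }
}
```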
+
+#### Rewrite Steps ####
+TODO - Provide a lowercase step as an example.
+
+```xml
+<rules>
+  <rule dir="OUT" name="WEBHDFS/webhdfs/outbound/namenode/headers/location">
+    <match pattern="{scheme}://{host}:{port}/{path=**}?{**}"/>
+    <rewrite template="{gateway.url}/webhdfs/data/v1/{path=**}?{scheme}?host={$hostmap(host)}?{port}?{**}"/>
+    <encrypt-query/>
+  </rule>
+</rules>
+```
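For orientation, the `$hostmap(host)` function used in the template above is conceptually a host-to-host lookup loaded from the configured `hostmap.txt`. A self-contained sketch of that lookup idea (the mapping values here are made up; this is neither Knox's implementation nor its file format):

```java
import java.util.Map;

public class HostmapSketch {

  // Hypothetical mapping, standing in for whatever hostmap.txt defines.
  static final Map<String, String> HOSTMAP = Map.of(
      "internal-host", "external-host" );

  static String resolve( String host ) {
    // Fall back to the original host when no mapping exists.
    return HOSTMAP.getOrDefault( host, host );
  }

  public static void main( String[] args ) {
    System.out.println( resolve( "internal-host" ) );
  }
}
```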
+
+### Jersey Provider ###
+TODO
+
+## Auditing ##
+
+```java
+public class AuditingSample {
+
+  private static Auditor AUDITOR = AuditServiceFactory.getAuditService().getAuditor(
+      "sample-channel", "sample-service", "sample-component" );
+
+  public void sampleMethod() {
+      ...
+      AUDITOR.audit( Action.AUTHORIZATION, sourceUrl, ResourceType.URI, ActionOutcome.SUCCESS );
+      ...
+  }
+
+}
+```
+
+## Logging ##
+
+```java
+@Messages( logger = "org.apache.project.module" )
+public interface CustomMessages {
+
+  @Message( level = MessageLevel.FATAL, text = "Failed to parse command line: {0}" )
+  void failedToParseCommandLine( @StackTrace( level = MessageLevel.DEBUG ) ParseException e );
+
+}
+```
+
+```java
+public class CustomLoggingSample {
+
+  private static CustomMessages MSG = MessagesFactory.get( CustomMessages.class );
+
+  public void sampleMethod() {
+    ...
+    MSG.failedToParseCommandLine( e );
+    ...
+  }
+
+}
+```
+
+## Internationalization ##
+
+```java
+@Resources
+public interface CustomResources {
+
+  @Resource( text = "Apache Hadoop Gateway {0} ({1})" )
+  String gatewayVersionMessage( String version, String hash );
+
+}
+```
+
+```java
+public class CustomResourceSample {
+
+  private static CustomResources RES = ResourcesFactory.get( CustomResources.class );
+
+  public void sampleMethod() {
+    ...
+    String s = RES.gatewayVersionMessage( "0.0.0", "XXXXXXX" );
+    ...
+  }
+
+}
+```
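The `{0}`/`{1}` placeholders in the `@Resource` text follow the same positional convention as `java.text.MessageFormat`, so the rendering can be approximated with the plain JDK (an analogy only, not the `ResourcesFactory` machinery):

```java
import java.text.MessageFormat;

public class ResourceFormatSketch {
  public static void main( String[] args ) {
    // Same positional placeholders as the @Resource text above.
    String s = MessageFormat.format(
        "Apache Hadoop Gateway {0} ({1})", "0.0.0", "XXXXXXX" );
    System.out.println( s );
  }
}
```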
+
+<<../../common/footer.md>>
+
+
+

Added: knox/trunk/books/0.5.0/dev-guide/deployment-overview.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/deployment-overview.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/deployment-overview.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/deployment-overview.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,37 @@
+@startuml
+title Deployment Processing Overview
+hide footbox
+autonumber
+
+participant "Gateway\nServer" as GW
+participant "Embedded\nJetty" as EJ
+participant "Deployment\nFactory" as DF
+participant "Deployment\nContributors" as DC
+participant "Topology\nDescriptor" as TD
+participant "Web\nArchive" as WAR
+
+activate GW
+
+  create TD
+  GW -> TD: td = loadTopology( xml )
+
+  GW -> DF: war = createDeployment( td )
+  activate DF
+
+    create WAR
+    DF -> WAR: war = createEmptyWar()
+
+    DF -> DC: addDescriptors( td, war )
+    activate DC
+    deactivate DC
+
+  GW <-- DF
+  deactivate DF
+
+  GW -> EJ: deploy( war )
+  activate EJ
+  deactivate EJ
+
+deactivate GW
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/deployment-provider-simple.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/deployment-provider-simple.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/deployment-provider-simple.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/deployment-provider-simple.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,34 @@
+@startuml
+title Provider Deployment\n(Simplified)
+hide footbox
+autonumber
+
+participant "Deployment\nFactory" as DF
+participant "Provider\nDeployment\nContributor" as PDC
+participant "Service\nDeployment\nContributor" as SDC
+
+activate DF
+
+  DF -> PDC:initializeContribution
+  activate PDC
+  deactivate PDC
+  
+  DF -> SDC:contributeService
+  activate SDC
+    SDC -> DF: contributeFilter
+    activate DF
+
+    DF -> PDC: contributeFilter
+    activate PDC
+    deactivate PDC
+
+    deactivate DF
+  deactivate SDC
+  
+  DF -> PDC:finalizeContribution
+  activate PDC
+  deactivate PDC
+
+deactivate DF
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/deployment-provider.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/deployment-provider.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/deployment-provider.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/deployment-provider.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,68 @@
+@startuml
+title Provider Deployment
+hide footbox
+autonumber
+
+participant "Gateway\nServer" as GW
+participant "Gateway\nConfig\n(gc)" as GC
+participant "Topology\n(td)" as TD
+participant "Deployment\nFactory" as DF
+participant "Deployment\nContext\n(dc)" as DC
+participant "Web\nArchive\n(wa)" as WA
+participant "Gateway\nDescriptor\n(gd)" as GD
+participant "Provider\nDeployment\nContributor" as PDC
+participant "Service\nDeployment\nContributor" as SDC
+
+create GC
+GW -> GC: load
+
+create TD
+GW -> TD: load
+
+GW -> DF: createDeployment( gc, td ): wa
+activate DF
+
+  create GD
+  DF -> GD: create
+  create WA
+  DF -> WA: create( gd )
+  create DC
+  DF -> DC: create( gc, td, wa )
+
+  loop Provider p in Topology dc.td
+    DF -> PDC:initializeContribution( dc, p )
+    activate PDC
+    PDC -> WA: <i>createDescriptors</i>
+    deactivate PDC
+  end
+  loop Provider p in Topology dc.td
+    DF -> PDC:contributeProvider( dc, p )
+    activate PDC
+    PDC -> WA: <i>populateDescriptors</i>
+    deactivate PDC
+  end
+  loop Service s in Topology dc.td
+    DF -> SDC:contributeService( dc, s )
+    activate SDC
+      SDC -> DC: contributeFilter( s, <i>resource, role, name, params</i> )
+      activate DC
+      DC -> PDC: contributeFilter( s, <i>resource, role, name, params</i> )
+      activate PDC
+        PDC -> WA: <i>modifyDescriptors</i>
+      deactivate PDC
+      'DC --> SDC
+      deactivate DC
+    'DF <-- SDC
+    deactivate SDC
+  end
+  loop Provider p in Topology dc.td
+    DF -> PDC:finalizeContribution( dc, p )
+    activate PDC
+    PDC -> WA: <i>finalizeDescriptors</i>
+    deactivate PDC
+  end
+
+GW <-- DF: WebArchive wa
+deactivate DF
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/deployment-service-simple.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/deployment-service-simple.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/deployment-service-simple.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/deployment-service-simple.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,28 @@
+@startuml
+title Service Deployment\n(Simplified)
+hide footbox
+autonumber
+
+participant "Deployment\nFactory" as DF
+participant "Service\nDeployment\nContributor" as SDC
+
+activate DF
+
+  DF -> SDC:initializeContribution
+  activate SDC
+  deactivate SDC
+
+  DF -> SDC:contributeService
+  activate SDC
+    SDC -> DF: contributeFilter
+    activate DF
+    deactivate DF
+  deactivate SDC
+
+  DF -> SDC:finalizeContribution
+  activate SDC
+  deactivate SDC
+
+deactivate DF
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/deployment-service.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/deployment-service.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/deployment-service.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/deployment-service.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,55 @@
+@startuml
+title Service Deployment
+hide footbox
+autonumber
+
+participant "Gateway\nServer" as GW
+participant "Gateway\nConfig\n(gc)" as GC
+participant "Topology\n(td)" as TD
+participant "Deployment\nFactory" as DF
+participant "Deployment\nContext\n(dc)" as DC
+participant "Web\nArchive\n(wa)" as WA
+participant "Gateway\nDescriptor\n(gd)" as GD
+participant "Service\nDeployment\nContributor" as SDC
+
+create GC
+GW -> GC: load
+
+create TD
+GW -> TD: load
+
+GW -> DF: createDeployment( gc, td ): wa
+activate DF
+
+  create GD
+  DF -> GD: create
+  create WA
+  DF -> WA: create( gd )
+  create DC
+  DF -> DC: create( gc, td, wa )
+
+  loop Service s in Topology dc.td
+    DF -> SDC:initializeContribution( dc, s )
+    'activate SDC
+    'SDC -> WA: <i>setupDescriptors</i>
+    'deactivate SDC
+  end
+  loop Service s in Topology dc.td
+    DF -> SDC:contributeService( dc, s )
+    activate SDC
+    group each required provider
+      SDC -> DF: contributeFilter( s, <i>resource, role, name, params</i> )
+    end
+    deactivate SDC
+  end
+  loop Service s in Topology dc.td
+    DF -> SDC:finalizeContribution( dc, s )
+    'activate SDC
+    'SDC -> WA: <i>finalizeDescriptors</i>
+    'deactivate SDC
+  end
+
+GW <-- DF: WebArchive wa
+deactivate DF
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/runtime-overview.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/runtime-overview.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/runtime-overview.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/runtime-overview.puml Tue Nov 11 17:03:59 
2014
@@ -0,0 +1,36 @@
+@startuml
+title Request Processing Overview
+hide footbox
+autonumber
+
+actor "REST Client" as C
+box "Gateway"
+  participant "Embedded\nJetty" as GW
+  participant "Map\n<URL,Chain<Filter>>" as CM
+  participant "Chain\n<Filter>" as FC
+end box
+participant "Hadoop\nService" as S
+
+C -> GW: GET( URL )
+activate GW
+  GW -> CM: Chain<Filter> = lookup( URL )
+  activate CM
+  deactivate CM
+  GW -> FC: doFilter
+  activate FC
+
+      FC -> FC: doFilter*
+      activate FC
+        FC -> S: GET( URL' )
+        activate S
+        FC <-- S: JSON
+        deactivate S
+      FC <-- FC: JSON
+      deactivate FC
+
+    GW <-- FC: JSON
+  deactivate FC
+C <-- GW: JSON
+deactivate GW
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.5.0/dev-guide/runtime-request-processing.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/dev-guide/runtime-request-processing.puml?rev=1638218&view=auto
==============================================================================
--- knox/trunk/books/0.5.0/dev-guide/runtime-request-processing.puml (added)
+++ knox/trunk/books/0.5.0/dev-guide/runtime-request-processing.puml Tue Nov 11 
17:03:59 2014
@@ -0,0 +1,38 @@
+@startuml
+title Request Processing Behavior
+hide footbox
+autonumber
+
+actor Client as C
+participant "Gateway\nServer\n(Jetty)" as GW
+participant "Gateway\nServlet" as GS
+participant "Gateway\nFilter" as GF
+participant "Matcher<Chain>" as UM
+participant "Chain" as FC
+participant "Filter" as PF
+
+C -> GW: GET( URL )
+activate C
+  activate GW
+    GW -> GS: service
+    activate GS
+      GS -> GF: doFilter
+      activate GF
+        GF -> UM: match( URL ): Chain
+        GF -> FC: doFilter
+        activate FC
+          FC -> PF: doFilter
+          activate PF
+            PF -> PF: doFilter
+            activate PF
+            deactivate PF
+          'FC <-- PF
+          deactivate PF
+        deactivate FC
+      deactivate GS
+    deactivate GF
+  deactivate GW
+deactivate C
+
+
+@enduml
\ No newline at end of file

