Author: lmccay
Date: Tue Nov 11 16:26:49 2014
New Revision: 1638209

URL: http://svn.apache.org/r1638209
Log:
added samples section

Modified:
    knox/site/books/knox-0-5-0/knox-0-5-0.html
    knox/trunk/books/0.5.0/book.md

Modified: knox/site/books/knox-0-5-0/knox-0-5-0.html
URL: 
http://svn.apache.org/viewvc/knox/site/books/knox-0-5-0/knox-0-5-0.html?rev=1638209&r1=1638208&r2=1638209&view=diff
==============================================================================
--- knox/site/books/knox-0-5-0/knox-0-5-0.html (original)
+++ knox/site/books/knox-0-5-0/knox-0-5-0.html Tue Nov 11 16:26:49 2014
@@ -22,6 +22,7 @@
     <li><a href="#Apache+Knox+Directory+Layout">Apache Knox Directory 
Layout</a></li>
     <li><a href="#Supported+Services">Supported Services</a></li>
   </ul></li>
+  <li><a href="#Gateway+Samples">Gateway Samples</a></li>
   <li><a href="#Gateway+Details">Gateway Details</a>
   <ul>
     <li><a href="#URL+Mapping">URL Mapping</a></li>
@@ -1488,7 +1489,30 @@ APACHE_HOME/bin/apachectl -k stop
       <td>Logging message. Contains additional tracking information.</td>
     </tr>
   </tbody>
-</table><h4><a id="Audit+log+rotation"></a>Audit log rotation</h4><p>Audit 
logging is preconfigured with 
<code>org.apache.log4j.DailyRollingFileAppender</code>. <a 
href="http://logging.apache.org/log4j/1.2/";>Apache log4j</a> contains 
information about other Appenders.</p><h4><a 
id="How+to+change+audit+level+or+disable+it"></a>How to change audit level or 
disable it</h4><p>Audit configuration is stored in the 
<code>conf/gateway-log4j.properties</code> file.</p><p>All audit messages are 
logged at <code>INFO</code> level and this behavior can&rsquo;t be 
changed.</p><p>To change audit configuration <code>log4j.logger.audit*</code> 
and <code>log4j.appender.auditfile*</code> properties in 
<code>conf/gateway-log4j.properties</code> file should be modified.</p><p>Their 
meaning can be found in <a href="http://logging.apache.org/log4j/1.2/";>Apache 
log4j</a>.</p><p>Disabling auditing can be done by decreasing log level for 
appender.</p><h2><a id="Client+Details"></a>Client Details</h2><p>Hadoop 
requires a client that can be used to interact remotely with the services 
provided by the Hadoop cluster. This will also be true when using the Apache Knox 
Gateway to provide perimeter security and centralized access for these 
services. The two primary existing clients for Hadoop are the CLI (i.e. Command 
Line Interface, hadoop) and HUE (i.e. Hadoop User Environment). For several 
reasons however, neither of these clients can <em>currently</em> be used to 
access Hadoop services via the Apache Knox Gateway.</p><p>This led to thinking 
about a very simple client that could help people use and evaluate the gateway. 
The list below outlines the general requirements for such a client.</p>
+</table><h4><a id="Audit+log+rotation"></a>Audit log rotation</h4><p>Audit 
logging is preconfigured with 
<code>org.apache.log4j.DailyRollingFileAppender</code>. <a 
href="http://logging.apache.org/log4j/1.2/";>Apache log4j</a> contains 
information about other Appenders.</p><h4><a 
id="How+to+change+audit+level+or+disable+it"></a>How to change audit level or 
disable it</h4><p>Audit configuration is stored in the 
<code>conf/gateway-log4j.properties</code> file.</p><p>All audit messages are 
logged at <code>INFO</code> level and this behavior can&rsquo;t be 
changed.</p><p>To change audit configuration <code>log4j.logger.audit*</code> 
and <code>log4j.appender.auditfile*</code> properties in 
<code>conf/gateway-log4j.properties</code> file should be modified.</p><p>Their 
meaning can be found in <a href="http://logging.apache.org/log4j/1.2/";>Apache 
log4j</a>.</p><p>Disabling auditing can be done by decreasing log level for 
appender.</p><h3><a id="Apache+Knox+Samples"></a>Apache Knox Samples</h3><p>The 
purpose of the samples within the {GATEWAY_HOME}/samples directory is 
to demonstrate the capabilities of the Apache Knox Gateway to provide access to 
the numerous APIs that are available from the service components of a Hadoop 
cluster.</p><p>Depending on exactly how your Knox installation was done, there 
will be some number of steps required in order to fully install and configure the 
samples for use.</p><p>This section will help describe the assumptions of the 
samples and the steps to get them to work in a couple of different deployment 
scenarios.</p><h4><a id="Assumptions+of+the+Samples"></a>Assumptions of the 
Samples</h4><p>The samples were initially written with the intent of working 
out of the box for the various Hadoop demo environments that are deployed as a 
single node cluster inside of a VM. The following assumptions were made from 
that context and should be understood in order to get the samples to work in 
other deployment scenarios:</p>
+<ul>
+  <li>That there is a valid Java JDK on the PATH for executing the samples</li>
+  <li>That the Knox Demo LDAP server is running on localhost and port 33389, 
which is the default port for the ApacheDS LDAP server.</li>
+  <li>That the LDAP directory in use has a set of demo users provisioned with 
the convention that each username has a password of the form 
&ldquo;username-password&rdquo;. Most of the samples use some variation of this 
pattern with &ldquo;guest&rdquo; and &ldquo;guest-password&rdquo;.</li>
+  <li>That the Knox Gateway instance is running on the same machine from which 
you will be running the samples - therefore &ldquo;localhost&rdquo; - and that 
the default port of &ldquo;8443&rdquo; is being used.</li>
+  <li>Finally, that there is a properly provisioned sandbox.xml topology in 
the {GATEWAY_HOME}/conf/topologies directory that is configured to point to the 
actual host and ports of running service components.</li>
+</ul><h4><a id="Steps+for+Demo+Single+Node+Clusters"></a>Steps for Demo Single 
Node Clusters</h4><p>There should be little if anything to do in a demo 
environment that has been provisioned for illustrating the use of Apache 
Knox.</p><p>However, the following items are worth verifying before you 
start:</p>
+<ol>
+  <li>The sandbox.xml topology is configured properly for the deployed 
services</li>
+  <li>That there is an LDAP server running with a guest/guest-password user 
available in the directory</li>
+</ol><h4><a id="Steps+for+Ambari+Deployed+Knox+Gateway"></a>Steps for Ambari 
Deployed Knox Gateway</h4><p>Apache Knox instances that are under the 
management of Ambari are generally assumed not to be demo instances. These 
instances are in place to facilitate development, testing or production Hadoop 
clusters.</p><p>The Knox samples can however be made to work with Ambari 
managed Knox instances with a few steps:</p>
+<ol>
+  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>The Knox Demo LDAP Server is started - you can start it from Ambari</li>
+  <li>The default.xml topology file can be copied to sandbox.xml in order to 
satisfy the topology name assumption in the samples.</li>
+  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
+</ol><h4><a id="Steps+for+a+Manually+Installed+Knox+Gateway"></a>Steps for a 
Manually Installed Knox Gateway</h4><p>For manually installed Knox instances, 
there is really no way for the installer to know how to configure the topology 
file for you.</p><p>Essentially, these steps are identical to the Amabari 
deployed instance except that #3 should be replaced with the configuration of 
the ootb sandbox.xml to point the configuration at the proper hosts and 
ports.</p>
+<ol>
+  <li>You need to have ssh access to the environment in order for the 
localhost assumption within the samples to be valid.</li>
+  <li>The Knox Demo LDAP Server is started - for a manually installed instance 
you can start it from {GATEWAY_HOME} with <code>bin/ldap.sh start</code></li>
+  <li>Change the hosts and ports within the 
{GATEWAY_HOME}/conf/topologies/sandbox.xml to reflect your actual cluster 
service locations.</li>
+  <li><p>Be sure to use an actual Java JRE to run the sample with something 
like:</p><p><code>/usr/jdk64/jdk1.7.0_67/bin/java -jar bin/shell.jar 
samples/ExampleWebHdfsLs.groovy</code></p></li>
+</ol><h2><a id="Client+Details"></a>Client Details</h2><p>Hadoop requires a 
client that can be used to interact remotely with the services provided by 
the Hadoop cluster. This will also be true when using the Apache Knox Gateway to 
provide perimeter security and centralized access for these services. The two 
primary existing clients for Hadoop are the CLI (i.e. Command Line Interface, 
hadoop) and HUE (i.e. Hadoop User Environment). For several reasons however, 
neither of these clients can <em>currently</em> be used to access Hadoop 
services via the Apache Knox Gateway.</p><p>This led to thinking about a very 
simple client that could help people use and evaluate the gateway. The list 
below outlines the general requirements for such a client.</p>
 <ul>
   <li>Promote the evaluation and adoption of the Apache Knox Gateway</li>
   <li>Simple to deploy and use on data worker desktops to access to remote 
Hadoop clusters</li>

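For context on what the sample run in the steps above actually does: samples/ExampleWebHdfsLs.groovy issues a WebHDFS LISTSTATUS call through the gateway. A rough sketch of the equivalent REST request, using only the assumptions stated in the samples section (gateway at localhost:8443, a sandbox topology, the guest/guest-password demo user), might look like the following; the Python helper here is illustrative, not the sample's actual Groovy code:

```python
# Illustrative sketch (not the actual Groovy sample): build the WebHDFS
# LISTSTATUS request that samples/ExampleWebHdfsLs.groovy effectively makes
# through the gateway. Host, port, topology name, and credentials are the
# demo-environment assumptions described in the samples section.
import base64

GATEWAY = "https://localhost:8443/gateway"
TOPOLOGY = "sandbox"  # assumes a sandbox.xml in {GATEWAY_HOME}/conf/topologies
USER, PASSWORD = "guest", "guest-password"  # demo LDAP naming convention

def webhdfs_ls_request(path="/"):
    """Return the URL and Basic-Auth header for a WebHDFS LISTSTATUS call."""
    url = "%s/%s/webhdfs/v1%s?op=LISTSTATUS" % (GATEWAY, TOPOLOGY, path)
    token = base64.b64encode(("%s:%s" % (USER, PASSWORD)).encode()).decode()
    return url, {"Authorization": "Basic " + token}

url, headers = webhdfs_ls_request("/tmp")
print(url)  # https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
```

Issuing this request with any HTTP client against a running gateway would return the directory listing as JSON; the Knox shell DSL used by the Groovy samples wraps exactly this kind of call.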
Modified: knox/trunk/books/0.5.0/book.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.5.0/book.md?rev=1638209&r1=1638208&r2=1638209&view=diff
==============================================================================
--- knox/trunk/books/0.5.0/book.md (original)
+++ knox/trunk/books/0.5.0/book.md Tue Nov 11 16:26:49 2014
@@ -30,6 +30,7 @@
 * #[Apache Knox Details]
     * #[Apache Knox Directory Layout]
     * #[Supported Services]
+* #[Gateway Samples]
 * #[Gateway Details]
     * #[URL Mapping]
     * #[Configuration]
@@ -76,6 +77,7 @@ In general the goals of the gateway are 
 <<quick_start.md>>
 <<book_getting-started.md>>
 <<book_gateway-details.md>>
+<<book_knox-samples.md>>
 <<book_client-details.md>>
 <<book_service-details.md>>
 <<book_limitations.md>>

