Added: knox/trunk/books/0.11.0/dev-guide/knoxsso_integration.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/dev-guide/knoxsso_integration.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/dev-guide/knoxsso_integration.md (added)
+++ knox/trunk/books/0.11.0/dev-guide/knoxsso_integration.md Tue Dec 13 
16:00:35 2016
@@ -0,0 +1,689 @@
+Knox SSO Integration for UIs
+===
+
+Introduction
+---
+KnoxSSO provides an abstraction for integrating any number of authentication 
systems and SSO solutions and enables participating web applications to scale 
to those solutions more easily. Without the token exchange capabilities offered 
by KnoxSSO each component UI would need to integrate with each desired solution 
on its own. 
+
+This document examines the way to integrate with Knox SSO in the form of a Servlet Filter. This approach should be easily extrapolated into other frameworks, e.g. Spring Security.
+
+### General Flow
+
+The following is a generic sequence diagram for SAML integration through 
KnoxSSO.
+
+<<general_saml_flow.puml>> 
+
+#### KnoxSSO Setup
+
+##### knoxsso.xml Topology
+In order to enable KnoxSSO, we need to configure the IdP topology. The following is an example of this topology that is configured to use HTTP Basic Auth against the Knox Demo LDAP server. This is the lowest barrier of entry for your development environment that actually authenticates against a real user store. The nice thing is that if your integration works against the IdP with Basic Auth, then it will work with SAML or anything else as well.
+
+```
+        <?xml version="1.0" encoding="utf-8"?>
+        <topology>
+            <gateway>
+                <provider>
+                    <role>authentication</role>
+                    <name>ShiroProvider</name>
+                    <enabled>true</enabled>
+                    <param>
+                        <name>sessionTimeout</name>
+                        <value>30</value>
+                    </param>
+                    <param>
+                        <name>main.ldapRealm</name>
+                        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapRealm</value>
+                    </param>
+                    <param>
+                        <name>main.ldapContextFactory</name>
+                        <value>org.apache.hadoop.gateway.shirorealm.KnoxLdapContextFactory</value>
+                    </param>
+                    <param>
+                        <name>main.ldapRealm.contextFactory</name>
+                        <value>$ldapContextFactory</value>
+                    </param>
+                    <param>
+                        <name>main.ldapRealm.userDnTemplate</name>
+                        <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+                    </param>
+                    <param>
+                        <name>main.ldapRealm.contextFactory.url</name>
+                        <value>ldap://localhost:33389</value>
+                    </param>
+                    <param>
+                        <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+                        <value>simple</value>
+                    </param>
+                    <param>
+                        <name>urls./**</name>
+                        <value>authcBasic</value>
+                    </param>
+                </provider>
+
+                <provider>
+                    <role>identity-assertion</role>
+                    <name>Default</name>
+                    <enabled>true</enabled>
+                </provider>
+            </gateway>
+
+            <service>
+                <role>KNOXSSO</role>
+                <param>
+                    <name>knoxsso.cookie.secure.only</name>
+                    <value>true</value>
+                </param>
+                <param>
+                    <name>knoxsso.token.ttl</name>
+                    <value>100000</value>
+                </param>
+            </service>
+        </topology>
+```
+
+Just as with any Knox service, the KNOXSSO service is protected by the gateway providers defined above it. In this case, the ShiroProvider is taking care of HTTP Basic Auth against LDAP for us. Once the user authenticates, request processing continues to the KNOXSSO service, which creates the required cookie and does the necessary redirects.
+
+The authentication/federation provider can be swapped out to fit your deployment environment.
+
+##### sandbox.xml Topology
+In order to see the end-to-end story and use it as an example in your development, you can configure one of the cluster topologies to use the SSOCookieProvider instead of the out-of-the-box ShiroProvider. The following is an example sandbox.xml topology that is configured for using KnoxSSO to protect access to the Hadoop REST APIs.
+
+```
+    <?xml version="1.0" encoding="utf-8"?>
+    <topology>
+        <gateway>
+            <provider>
+                <role>federation</role>
+                <name>SSOCookieProvider</name>
+                <enabled>true</enabled>
+                <param>
+                    <name>sso.authentication.provider.url</name>
+                    <value>https://localhost:9443/gateway/idp/api/v1/websso</value>
+                </param>
+            </provider>
+
+            <provider>
+                <role>identity-assertion</role>
+                <name>Default</name>
+                <enabled>true</enabled>
+            </provider>
+        </gateway>
+
+        <service>
+            <role>NAMENODE</role>
+            <url>hdfs://localhost:8020</url>
+        </service>
+
+        <service>
+            <role>JOBTRACKER</role>
+            <url>rpc://localhost:8050</url>
+        </service>
+
+        <service>
+            <role>WEBHDFS</role>
+            <url>http://localhost:50070/webhdfs</url>
+        </service>
+
+        <service>
+            <role>WEBHCAT</role>
+            <url>http://localhost:50111/templeton</url>
+        </service>
+
+        <service>
+            <role>OOZIE</role>
+            <url>http://localhost:11000/oozie</url>
+        </service>
+
+        <service>
+            <role>WEBHBASE</role>
+            <url>http://localhost:60080</url>
+        </service>
+
+        <service>
+            <role>HIVE</role>
+            <url>http://localhost:10001/cliservice</url>
+        </service>
+
+        <service>
+            <role>RESOURCEMANAGER</role>
+            <url>http://localhost:8088/ws</url>
+        </service>
+    </topology>
+```
+
+* NOTE: Be aware that when using Chrome as your browser, cookies don't seem to work for "localhost". Either use a VM or, like I did, use 127.0.0.1 instead. Safari works with localhost without problems.
+
+As you can see above, the only thing being configured is the SSO provider URL. Since Knox is the issuer of the cookie and token, we don't need to configure the public key - we have programmatic access to the actual keystore for use at verification time.
+
+#### Curl the Flow
+We should now be able to walk through the SSO Flow at the command line with 
curl to see everything that happens.
+
+First, issue a request to WEBHDFS through Knox.
+
+```
+       bash-3.2$ curl -iku guest:guest-password https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+       HTTP/1.1 302 Found
+       Location: https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+       Content-Length: 0
+       Server: Jetty(8.1.14.v20131031)
+```
+
+Note the redirect to the knoxsso endpoint and the loginUrl with the originalUrl request parameter. We need to see the same redirect come from your integration as well.
+
+Let’s manually follow that redirect with curl now:
+
+```
+       bash-3.2$ curl -iku guest:guest-password "https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS"
+
+       HTTP/1.1 307 Temporary Redirect
+       Set-Cookie: JSESSIONID=mlkda4crv7z01jd0q0668nsxp;Path=/gateway/idp;Secure;HttpOnly
+       Set-Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODUzNzEsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.RpA84Qdr6RxEZjg21PyVCk0G1kogvkuJI2bo302bpwbvmc-i01gCwKNeoGYzUW27MBXf6a40vylHVR3aZuuBUxsJW3aa_ltrx0R5ztKKnTWeJedOqvFKSrVlBzJJ90PzmDKCqJxA7JUhyo800_lDHLTcDWOiY-ueWYV2RMlCO0w;Path=/;Domain=localhost;Secure;HttpOnly
+       Expires: Thu, 01 Jan 1970 00:00:00 GMT
+       Location: https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+       Content-Length: 0
+       Server: Jetty(8.1.14.v20131031)
+```
+
+Note the redirect back to the original URL in the Location header and the 
Set-Cookie for the hadoop-jwt cookie. This is what the SSOCookieProvider in 
sandbox (and ultimately in your integration) will be looking for.
+
+Finally, we should be able to take the above cookie and pass it to the original URL, as indicated in the Location header, for our originally requested resource:
+
+```
+       bash-3.2$ curl -ikH "Cookie: hadoop-jwt=eyJhbGciOiJSUzI1NiJ9.eyJleHAiOjE0NDM1ODY2OTIsInN1YiI6Imd1ZXN0IiwiYXVkIjoiSFNTTyIsImlzcyI6IkhTU08ifQ.Os5HEfVBYiOIVNLRIvpYyjeLgAIMbBGXHBWMVRAEdiYcNlJRcbJJ5aSUl1aciNs1zd_SHijfB9gOdwnlvQ_0BCeGHlJBzHGyxeypIoGj9aOwEf36h-HVgqzGlBLYUk40gWAQk3aRehpIrHZT2hHm8Pu8W-zJCAwUd8HR3y6LF3M;Path=/;Domain=localhost;Secure;HttpOnly" https://localhost:9443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+       TODO: cluster was down and needs to be recreated :/
+```
+
+#### Browse the Flow
+At this point, we can use a web browser instead of the command line and see how the browser will challenge the user for Basic Auth credentials and then manage the cookies such that the SSO and token exchange aspects of the flow are hidden from the user.
+
+Simply try to invoke the same WebHDFS API from the browser URL bar.
+
+
+```
+       https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+```
+
+Based on our understanding of the flow it should behave like:
+
+* SSOCookieProvider checks for the hadoop-jwt cookie and, in its absence, redirects to the configured SSO provider URL (the knoxsso endpoint)
+* ShiroProvider on the KnoxSSO endpoint returns a 401 and the browser challenges the user for username/password
+* The ShiroProvider authenticates the user against the Demo LDAP Server using a simple LDAP bind and establishes the security context for the WebSSO request
+* The WebSSO service exchanges the normalized Java Subject into a JWT token and sets it on the response as a cookie named hadoop-jwt
+* The WebSSO service then redirects the user agent back to the originally requested URL - the webhdfs Knox service
+* Subsequent invocations will find the cookie in the incoming request and will not need to engage the WebSSO service again until it expires
+
+#### Filter by Example
+We have added a federation provider to Knox that accepts KnoxSSO cookies for REST APIs. This provides us with a couple of benefits:
+
+* KnoxSSO support for REST APIs for XmlHttpRequests from JavaScript (basic CORS functionality is also included). This is still rather basic and considered beta code.
+* A model and real-world use case for others to base their integrations on.
+
+In addition, https://issues.apache.org/jira/browse/HADOOP-11717 added support 
for the Hadoop UIs to the hadoop-auth module and it can be used as another 
example.
+
+We will examine the new SSOCookieFederationFilter in Knox here.
+
+```
+package org.apache.hadoop.gateway.provider.federation.jwt.filter;
+
+import java.io.IOException;
+import java.security.Principal;
+import java.security.PrivilegedActionException;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Date;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import javax.security.auth.Subject;
+import javax.servlet.Filter;
+import javax.servlet.FilterChain;
+import javax.servlet.FilterConfig;
+import javax.servlet.ServletException;
+import javax.servlet.ServletRequest;
+import javax.servlet.ServletResponse;
+import javax.servlet.http.Cookie;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.apache.hadoop.gateway.i18n.messages.MessagesFactory;
+import org.apache.hadoop.gateway.provider.federation.jwt.JWTMessages;
+import org.apache.hadoop.gateway.security.PrimaryPrincipal;
+import org.apache.hadoop.gateway.services.GatewayServices;
+import org.apache.hadoop.gateway.services.security.token.JWTokenAuthority;
+import org.apache.hadoop.gateway.services.security.token.TokenServiceException;
+import org.apache.hadoop.gateway.services.security.token.impl.JWTToken;
+
+public class SSOCookieFederationFilter implements Filter {
+    private static JWTMessages log = MessagesFactory.get( JWTMessages.class );
+    private static final String ORIGINAL_URL_QUERY_PARAM = "originalUrl=";
+    private static final String SSO_COOKIE_NAME = "sso.cookie.name";
+    private static final String SSO_EXPECTED_AUDIENCES = "sso.expected.audiences";
+    private static final String SSO_AUTHENTICATION_PROVIDER_URL = "sso.authentication.provider.url";
+    private static final String DEFAULT_SSO_COOKIE_NAME = "hadoop-jwt";
+```
+
+The above represents the configurable aspects of the integration.
+
+```
+    private JWTokenAuthority authority = null;
+    private String cookieName = null;
+    private List<String> audiences = null;
+    private String authenticationProviderUrl = null;
+
+    @Override
+    public void init( FilterConfig filterConfig ) throws ServletException {
+      GatewayServices services = (GatewayServices) filterConfig.getServletContext()
+          .getAttribute(GatewayServices.GATEWAY_SERVICES_ATTRIBUTE);
+      authority = (JWTokenAuthority) services.getService(GatewayServices.TOKEN_SERVICE);
+```
+
+The above is a Knox specific internal service that we use to issue and verify JWT tokens. This will be covered separately, and you will need to implement something similar in your filter implementation.
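+
+As a rough illustration only - the interface name and shape below are assumptions for this guide, not Knox API - the token verification dependency that your own filter declares might be as small as:
+
+```
+// A minimal stand-in for Knox's JWTokenAuthority in a third-party integration.
+// Implementations would typically wrap a JWT library such as Nimbus JOSE+JWT.
+public interface TokenVerifier {
+
+  /**
+   * @param serializedJWT the signed, serialized JWT taken from the SSO cookie
+   * @return true if the token's signature was produced by the trusted KnoxSSO server
+   */
+  boolean verifyToken(String serializedJWT);
+}
+```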
+
+```
+    // configured cookieName
+    cookieName = filterConfig.getInitParameter(SSO_COOKIE_NAME);
+    if (cookieName == null) {
+      cookieName = DEFAULT_SSO_COOKIE_NAME;
+    }
+```
+
+The configurable cookie name is something that can be used to change a cookie 
name to fit your deployment environment. The default name is hadoop-jwt which 
is also the default in the Hadoop implementation. This name must match the name 
being used by the KnoxSSO endpoint when setting the cookie.
+
+```
+    // expected audiences or null
+    String expectedAudiences = filterConfig.getInitParameter(SSO_EXPECTED_AUDIENCES);
+    if (expectedAudiences != null) {
+      audiences = parseExpectedAudiences(expectedAudiences);
+    }
+```
+
+Audiences are configured as a comma-separated list of audience strings - names of intended recipients or intents. The semantics we use for this processing are: if no audiences are configured, then any (or no) audience claim is accepted. If audiences are configured, then the token is accepted as long as at least one of the expected audiences is found among the claims in the token.
+
+```
+    // url to SSO authentication provider
+    authenticationProviderUrl = filterConfig.getInitParameter(SSO_AUTHENTICATION_PROVIDER_URL);
+    if (authenticationProviderUrl == null) {
+      log.missingAuthenticationProviderUrlConfiguration();
+    }
+  }
+```
+
+This is the URL to the KnoxSSO endpoint. It is required, and SSO/token exchange will not work without it set correctly.
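+
+For reference, these parameters are supplied to the provider/filter as init params. In a Knox topology they appear as params on the SSOCookieProvider, as in the sandbox.xml example above (which sets only the provider URL); the cookie name and audience values below are illustrative assumptions for a deployment that wants to override the defaults:
+
+```
+        <provider>
+            <role>federation</role>
+            <name>SSOCookieProvider</name>
+            <enabled>true</enabled>
+            <param>
+                <name>sso.authentication.provider.url</name>
+                <value>https://localhost:9443/gateway/idp/api/v1/websso</value>
+            </param>
+            <param>
+                <name>sso.cookie.name</name>
+                <value>hadoop-jwt</value>
+            </param>
+            <param>
+                <name>sso.expected.audiences</name>
+                <value>HSSO</value>
+            </param>
+        </provider>
+```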
+
+```
+    /**
+     * @param expectedAudiences comma separated list of expected audience strings
+     * @return the list of expected audiences to validate tokens against
+     */
+    private List<String> parseExpectedAudiences(String expectedAudiences) {
+      ArrayList<String> audList = null;
+      // setup the list of valid audiences for token validation
+      if (expectedAudiences != null) {
+        // parse into the list
+        String[] audArray = expectedAudiences.split(",");
+        audList = new ArrayList<String>();
+        for (String a : audArray) {
+          audList.add(a);
+        }
+      }
+      return audList;
+    }
+```
+
+The above method parses the comma separated list of expected audiences and 
makes it available for interrogation during token validation.
+
+```
+    public void destroy() {
+    }
+
+    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
+        throws IOException, ServletException {
+      String wireToken = null;
+      HttpServletRequest req = (HttpServletRequest) request;
+
+      String loginURL = constructLoginURL(req);
+      wireToken = getJWTFromCookie(req);
+      if (wireToken == null) {
+        if (req.getMethod().equals("OPTIONS")) {
+          // CORS preflight requests to determine allowed origins and related config
+          // must be able to continue without being redirected
+          Subject sub = new Subject();
+          sub.getPrincipals().add(new PrimaryPrincipal("anonymous"));
+          continueWithEstablishedSecurityContext(sub, req, (HttpServletResponse) response, chain);
+        }
+        log.sendRedirectToLoginURL(loginURL);
+        ((HttpServletResponse) response).sendRedirect(loginURL);
+      }
+      else {
+        JWTToken token = new JWTToken(wireToken);
+        boolean verified = false;
+        try {
+          verified = authority.verifyToken(token);
+          if (verified) {
+            Date expires = token.getExpiresDate();
+            if (expires == null || new Date().before(expires)) {
+              boolean audValid = validateAudiences(token);
+              if (audValid) {
+                Subject subject = createSubjectFromToken(token);
+                continueWithEstablishedSecurityContext(subject, (HttpServletRequest) request, (HttpServletResponse) response, chain);
+              }
+              else {
+                log.failedToValidateAudience();
+                ((HttpServletResponse) response).sendRedirect(loginURL);
+              }
+            }
+            else {
+              log.tokenHasExpired();
+              ((HttpServletResponse) response).sendRedirect(loginURL);
+            }
+          }
+          else {
+            log.failedToVerifyTokenSignature();
+            ((HttpServletResponse) response).sendRedirect(loginURL);
+          }
+        } catch (TokenServiceException e) {
+          log.unableToVerifyToken(e);
+          ((HttpServletResponse) response).sendRedirect(loginURL);
+        }
+      }
+    }
+```
+
+The doFilter method above is where all the real work is done. We look for a cookie by the configured name. If it isn't there, then we redirect to the configured SSO provider URL in order to acquire one - unless it is an OPTIONS request, which may be a preflight CORS request. You shouldn't need to worry about this aspect; it is really a REST API concern, not a web app UI one.
+
+Once we get a cookie, the underlying JWT token is extracted and returned as the wireToken, from which we create a Knox specific JWTToken. This abstraction is around the use of the Nimbus JWT library, which you can use directly. We will cover those details separately.
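+
+For example, a minimal sketch of parsing the wireToken directly with the Nimbus JOSE+JWT library, rather than through the Knox JWTToken abstraction, might look like the following (the class and method names here are illustrative):
+
+```
+import java.text.ParseException;
+
+import com.nimbusds.jwt.JWTClaimsSet;
+import com.nimbusds.jwt.SignedJWT;
+
+public class WireTokenParser {
+
+  // Parse the serialized JWT taken from the hadoop-jwt cookie and expose the
+  // claims that the filter cares about: subject, audience and expiration.
+  static JWTClaimsSet parseWireToken(String wireToken) throws ParseException {
+    SignedJWT jwt = SignedJWT.parse(wireToken);
+    return jwt.getJWTClaimsSet();
+  }
+}
+```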
+
+We then ask the token authority component to verify the token. This involves 
signature validation of the signed token. In order to verify the signature of 
the token you will need to have the public key of the Knox SSO server 
configured and provided to the nimbus library through its API at verification 
time. NOTE: This is a good place to look at the Hadoop implementation as an 
example.
+
+Once we know the token is signed by a trusted party, we then validate whether it is expired and whether it has an expected (or no) audience claim.
+
+Finally, when we have a valid token, we create a Java Subject from it and 
continue the request through the filterChain as the authenticated user.
+
+```
+    /**
+     * Encapsulate the acquisition of the JWT token from HTTP cookies within the
+     * request.
+     *
+     * @param req servlet request to get the JWT token from
+     * @return serialized JWT token
+     */
+    protected String getJWTFromCookie(HttpServletRequest req) {
+      String serializedJWT = null;
+      Cookie[] cookies = req.getCookies();
+      if (cookies != null) {
+        for (Cookie cookie : cookies) {
+          if (cookieName.equals(cookie.getName())) {
+            log.cookieHasBeenFound(cookieName);
+            serializedJWT = cookie.getValue();
+            break;
+          }
+        }
+      }
+      return serializedJWT;
+    }
+```
+  
+The above method extracts the serialized token from the cookie and returns it 
as the wireToken.
+
+```
+    /**
+     * Create the URL to be used for authentication of the user in the absence of
+     * a JWT token within the incoming request.
+     *
+     * @param request for getting the original request URL
+     * @return url to use as login url for redirect
+     */
+    protected String constructLoginURL(HttpServletRequest request) {
+      String delimiter = "?";
+      if (authenticationProviderUrl.contains("?")) {
+        delimiter = "&";
+      }
+      String loginURL = authenticationProviderUrl + delimiter
+          + ORIGINAL_URL_QUERY_PARAM
+          + request.getRequestURL().toString() + getOriginalQueryString(request);
+      return loginURL;
+    }
+
+    private String getOriginalQueryString(HttpServletRequest request) {
+      String originalQueryString = request.getQueryString();
+      return (originalQueryString == null) ? "" : "?" + originalQueryString;
+    }
+```
+
+The above method creates the full URL to be used in redirecting to the KnoxSSO 
endpoint. It includes the SSO provider URL as well as the original request URL 
so that we can redirect back to it after authentication and token exchange.
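+
+For example, with sso.authentication.provider.url set to https://localhost:8443/gateway/idp/api/v1/websso and an incoming request for https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS, the constructed login URL would be https://localhost:8443/gateway/idp/api/v1/websso?originalUrl=https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS - the same redirect seen in the curl walkthrough earlier.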
+
+```
+    /**
+     * Validate whether any of the accepted audience claims is present in the
+     * issued token claims list for audience. Override this method in subclasses
+     * in order to customize the audience validation behavior.
+     *
+     * @param jwtToken the JWT token where the allowed audiences will be found
+     * @return true if an expected audience is present, otherwise false
+     */
+    protected boolean validateAudiences(JWTToken jwtToken) {
+      boolean valid = false;
+      String[] tokenAudienceList = jwtToken.getAudienceClaims();
+      // if there were no expected audiences configured then just
+      // consider any audience acceptable
+      if (audiences == null) {
+        valid = true;
+      } else {
+        // if any of the configured audiences is found then consider it
+        // acceptable
+        for (String aud : tokenAudienceList) {
+          if (audiences.contains(aud)) {
+            log.jwtAudienceValidated();
+            valid = true;
+            break;
+          }
+        }
+      }
+      return valid;
+    }
+```
+
+The above method implements the audience claim semantics explained earlier.
+
+```
+    private void continueWithEstablishedSecurityContext(Subject subject,
+        final HttpServletRequest request, final HttpServletResponse response,
+        final FilterChain chain) throws IOException, ServletException {
+      try {
+        Subject.doAs(
+          subject,
+          new PrivilegedExceptionAction<Object>() {
+            @Override
+            public Object run() throws Exception {
+              chain.doFilter(request, response);
+              return null;
+            }
+          }
+        );
+      }
+      catch (PrivilegedActionException e) {
+        Throwable t = e.getCause();
+        if (t instanceof IOException) {
+          throw (IOException) t;
+        }
+        else if (t instanceof ServletException) {
+          throw (ServletException) t;
+        }
+        else {
+          throw new ServletException(t);
+        }
+      }
+    }
+```
+
+This method continues the filter chain processing upon successful validation 
of the token. This would need to be replaced with your environment’s 
equivalent of continuing the request or login to the app as the authenticated 
user.
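+
+As a rough sketch only - the classes below are from Spring Security, which was mentioned earlier as a likely target framework, and the granted authority is an assumption - the equivalent step in a Spring Security based filter might populate the SecurityContext and then continue the chain:
+
+```
+import java.io.IOException;
+import java.util.Collections;
+
+import javax.servlet.FilterChain;
+import javax.servlet.ServletException;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+
+import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
+import org.springframework.security.core.authority.SimpleGrantedAuthority;
+import org.springframework.security.core.context.SecurityContextHolder;
+
+public class SpringSecurityContinuation {
+
+  // Establish the authenticated principal for downstream filters and the
+  // application, then continue processing the request.
+  static void continueAsAuthenticatedUser(String principal, HttpServletRequest request,
+      HttpServletResponse response, FilterChain chain) throws IOException, ServletException {
+    UsernamePasswordAuthenticationToken authentication =
+        new UsernamePasswordAuthenticationToken(principal, null,
+            Collections.singletonList(new SimpleGrantedAuthority("ROLE_USER")));
+    SecurityContextHolder.getContext().setAuthentication(authentication);
+    chain.doFilter(request, response);
+  }
+}
+```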
+
+```
+    private Subject createSubjectFromToken(JWTToken token) {
+      final String principal = token.getSubject();
+      @SuppressWarnings("rawtypes")
+      HashSet emptySet = new HashSet();
+      Set<Principal> principals = new HashSet<Principal>();
+      Principal p = new PrimaryPrincipal(principal);
+      principals.add(p);
+      javax.security.auth.Subject subject =
+          new javax.security.auth.Subject(true, principals, emptySet, emptySet);
+      return subject;
+    }
+```
+This method takes a JWTToken and creates a Java Subject with the principals 
expected by the rest of the Knox processing. This would need to be implemented 
in a way appropriate for your operating environment as well. For instance, the 
Hadoop handler implementation returns a Hadoop AuthenticationToken to the 
calling filter which in turn ends up in the Hadoop auth cookie.
+
+```
+       }
+```
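+
+For reference, a minimal sketch of that Hadoop-side step, using the hadoop-auth AuthenticationToken, might look like the following; the method and token type shown here are illustrative rather than the complete handler logic:
+
+```
+import java.util.Date;
+
+import org.apache.hadoop.security.authentication.server.AuthenticationToken;
+
+public class HadoopTokenSketch {
+
+  // Instead of a Java Subject, a hadoop-auth handler returns an
+  // AuthenticationToken that the calling filter turns into the Hadoop auth cookie.
+  static AuthenticationToken toAuthenticationToken(String userName, Date expires) {
+    // "jwt" is used as an illustrative token type here.
+    AuthenticationToken token = new AuthenticationToken(userName, userName, "jwt");
+    if (expires != null) {
+      token.setExpires(expires.getTime());
+    }
+    return token;
+  }
+}
+```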
+
+#### Token Signature Validation 
+The following is the method from the Hadoop handler implementation that 
validates the signature.
+
+```
+    /**
+     * Verify the signature of the JWT token in this method. This method depends on
+     * the public key that was established during init based upon the provisioned
+     * public key. Override this method in subclasses in order to customize the
+     * signature verification behavior.
+     *
+     * @param jwtToken the token that contains the signature to be validated
+     * @return valid true if signature verifies successfully; false otherwise
+     */
+    protected boolean validateSignature(SignedJWT jwtToken) {
+      boolean valid = false;
+      if (JWSObject.State.SIGNED == jwtToken.getState()) {
+        LOG.debug("JWT token is in a SIGNED state");
+        if (jwtToken.getSignature() != null) {
+          LOG.debug("JWT token signature is not null");
+          try {
+            JWSVerifier verifier = new RSASSAVerifier(publicKey);
+            if (jwtToken.verify(verifier)) {
+              valid = true;
+              LOG.debug("JWT token has been successfully verified");
+            }
+            else {
+              LOG.warn("JWT signature verification failed.");
+            }
+          }
+          catch (JOSEException je) {
+            LOG.warn("Error while validating signature", je);
+          }
+        }
+      }
+      return valid;
+    }
+```
+
+#### Hadoop Configuration Example
+The following is similar to the configuration in the Hadoop handler implementation.
+
+NOTE: This configuration is somewhat obsolete, but it remains in the proper spirit of HADOOP-11717 (Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth - RESOLVED).
+
+```
+       <property>
+               <name>hadoop.http.authentication.type</name>
+               <value>org.apache.hadoop.security.authentication.server.JWTRedirectAuthenticationHandler</value>
+       </property>
+```
+
+This is the handler classname in Hadoop auth for JWT token (KnoxSSO) support.
+
+```
+       <property>
+               <name>hadoop.http.authentication.authentication.provider.url</name>
+               <value>http://c6401.ambari.apache.org:8888/knoxsso</value>
+       </property>
+```
+
+The above property is the SSO provider URL that points to the knoxsso endpoint.
+
+```
+       <property>
+               <name>hadoop.http.authentication.public.key.pem</name>
+               
<value>MIICVjCCAb+gAwIBAgIJAPPvOtuTxFeiMA0GCSqGSIb3DQEBBQUAMG0xCzAJBgNV
+       BAYTAlVTMQ0wCwYDVQQIEwRUZXN0MQ0wCwYDVQQHEwRUZXN0MQ8wDQYDVQQKEwZI
+       YWRvb3AxDTALBgNVBAsTBFRlc3QxIDAeBgNVBAMTF2M2NDAxLmFtYmFyaS5hcGFj
+       aGUub3JnMB4XDTE1MDcxNjE4NDcyM1oXDTE2MDcxNTE4NDcyM1owbTELMAkGA1UE
+       BhMCVVMxDTALBgNVBAgTBFRlc3QxDTALBgNVBAcTBFRlc3QxDzANBgNVBAoTBkhh
+       ZG9vcDENMAsGA1UECxMEVGVzdDEgMB4GA1UEAxMXYzY0MDEuYW1iYXJpLmFwYWNo
+       ZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMFs/rymbiNvg8lDhsdA
+       qvh5uHP6iMtfv9IYpDleShjkS1C+IqId6bwGIEO8yhIS5BnfUR/fcnHi2ZNrXX7x
+       QUtQe7M9tDIKu48w//InnZ6VpAqjGShWxcSzR6UB/YoGe5ytHS6MrXaormfBg3VW
+       tDoy2MS83W8pweS6p5JnK7S5AgMBAAEwDQYJKoZIhvcNAQEFBQADgYEANyVg6EzE
+       2q84gq7wQfLt9t047nYFkxcRfzhNVL3LB8p6IkM4RUrzWq4kLA+z+bpY2OdpkTOe
+       wUpEdVKzOQd4V7vRxpdANxtbG/XXrJAAcY/S+eMy1eDK73cmaVPnxPUGWmMnQXUi
+       TLab+w8tBQhNbq6BOQ42aOrLxA8k/M4cV1A=</value>
+       </property>
+```
+
+The above property holds the KnoxSSO server's public key for signature verification. Adding it directly to the config like this is convenient and is easily done through Ambari for existing config files that accept custom properties. Config is generally protected with root-only access as well, so it is a pretty good solution.
+
+#### Public Key Parsing
+In order to turn the PEM encoded config item into a public key, the Hadoop handler implementation does the following in the init() method.
+
+```
+    if (publicKey == null) {
+      String pemPublicKey = config.getProperty(PUBLIC_KEY_PEM);
+      if (pemPublicKey == null) {
+        throw new ServletException(
+            "Public key for signature validation must be provisioned.");
+      }
+      publicKey = CertificateUtil.parseRSAPublicKey(pemPublicKey);
+    }
+```
+
+and the CertificateUtil class is below:
+
+```
+package org.apache.hadoop.security.authentication.util;
+
+import java.io.ByteArrayInputStream;
+import java.io.UnsupportedEncodingException;
+import java.security.PublicKey;
+import java.security.cert.CertificateException;
+import java.security.cert.CertificateFactory;
+import java.security.cert.X509Certificate;
+import java.security.interfaces.RSAPublicKey;
+
+import javax.servlet.ServletException;
+
+public class CertificateUtil {
+  private static final String PEM_HEADER = "-----BEGIN CERTIFICATE-----\n";
+  private static final String PEM_FOOTER = "\n-----END CERTIFICATE-----";
+
+  /**
+   * Gets an RSAPublicKey from the provided PEM encoding.
+   *
+   * @param pem - the pem encoding from config without the header and footer
+   * @return RSAPublicKey the RSA public key
+   * @throws ServletException thrown if a processing error occurred
+   */
+  public static RSAPublicKey parseRSAPublicKey(String pem) throws ServletException {
+    String fullPem = PEM_HEADER + pem + PEM_FOOTER;
+    PublicKey key = null;
+    try {
+      CertificateFactory fact = CertificateFactory.getInstance("X.509");
+      ByteArrayInputStream is = new ByteArrayInputStream(fullPem.getBytes("UTF8"));
+      X509Certificate cer = (X509Certificate) fact.generateCertificate(is);
+      key = cer.getPublicKey();
+    } catch (CertificateException ce) {
+      String message = null;
+      if (pem.startsWith(PEM_HEADER)) {
+        message = "CertificateException - be sure not to include PEM header "
+            + "and footer in the PEM configuration element.";
+      } else {
+        message = "CertificateException - PEM may be corrupt";
+      }
+      throw new ServletException(message, ce);
+    } catch (UnsupportedEncodingException uee) {
+      throw new ServletException(uee);
+    }
+    return (RSAPublicKey) key;
+  }
+}
+```
+
+
+
+
+
+

Added: knox/trunk/books/0.11.0/dev-guide/runtime-overview.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/dev-guide/runtime-overview.puml?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/dev-guide/runtime-overview.puml (added)
+++ knox/trunk/books/0.11.0/dev-guide/runtime-overview.puml Tue Dec 13 16:00:35 
2016
@@ -0,0 +1,36 @@
+@startuml
+title Request Processing Overview
+hide footbox
+autonumber
+
+actor "REST Client" as C
+box "Gateway"
+  participant "Embedded\nJetty" as GW
+  participant "Map\n<URL,Chain<Filter>>" as CM
+  participant "Chain\n<Filter>" as FC
+end box
+participant "Hadoop\nService" as S
+
+C -> GW: GET( URL )
+activate GW
+  GW -> CM: Chain<Filter> = lookup( URL )
+  activate CM
+  deactivate CM
+  GW -> FC: doFilter
+  activate FC
+
+      FC -> FC: doFilter*
+      activate FC
+        FC -> S: GET( URL' )
+        activate S
+        FC <-- S: JSON
+        deactivate S
+      FC <-- FC: JSON
+      deactivate FC
+
+    GW <-- FC: JSON
+  deactivate FC
+C <-- GW: JSON
+deactivate GW
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.11.0/dev-guide/runtime-request-processing.puml
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/dev-guide/runtime-request-processing.puml?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/dev-guide/runtime-request-processing.puml (added)
+++ knox/trunk/books/0.11.0/dev-guide/runtime-request-processing.puml Tue Dec 
13 16:00:35 2016
@@ -0,0 +1,38 @@
+@startuml
+title Request Processing Behavior
+hide footbox
+autonumber
+
+actor Client as C
+participant "Gateway\nServer\n(Jetty)" as GW
+participant "Gateway\nServlet" as GS
+participant "Gateway\nFilter" as GF
+participant "Matcher<Chain>" as UM
+participant "Chain" as FC
+participant "Filter" as PF
+
+C -> GW: GET( URL )
+activate C
+  activate GW
+    GW -> GS: service
+    activate GS
+      GS -> GF: doFilter
+      activate GF
+        GF -> UM: match( URL ): Chain
+        GF -> FC: doFilter
+        activate FC
+          FC -> PF: doFilter
+          activate PF
+            PF -> PF: doFilter
+            activate PF
+            deactivate PF
+          'FC <-- PF
+          deactivate PF
+        deactivate FC
+      deactivate GS
+    deactivate GF
+  deactivate GW
+deactivate C
+
+
+@enduml
\ No newline at end of file

Added: knox/trunk/books/0.11.0/knox_cli.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/knox_cli.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/knox_cli.md (added)
+++ knox/trunk/books/0.11.0/knox_cli.md Tue Dec 13 16:00:35 2016
@@ -0,0 +1,132 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Knox CLI ###
+The Knox CLI is a command line utility for the management of various aspects 
of the Knox deployment. It is primarily concerned with the management of the 
security artifacts for the gateway instance and each of the deployed topologies 
or Hadoop clusters that are gated by the Knox Gateway instance.
+
+The various security artifacts are also generated and populated automatically by the Knox Gateway runtime when they are not found at startup. The assumptions made in those cases are appropriate for a test or development gateway instance and assume 'localhost' for hostname specific activities. For production deployments, the use of the CLI may aid in managing these security artifacts.
+
+The knoxcli.sh script is located in the `{GATEWAY_HOME}/bin` directory.
+
+#### Help ####
+##### `bin/knoxcli.sh [--help]` #####
+Prints help for all commands.
+
+#### Knox Version Info ####
+##### `bin/knoxcli.sh version [--help]` #####
+Displays Knox version information.
+
+#### Master secret persistence ####
+##### `bin/knoxcli.sh create-master [--force][--help]` #####
+Creates and persists an encrypted master secret in a file within 
`{GATEWAY_HOME}/data/security/master`. 
+
+NOTE: This command fails when there is an existing master file in the expected location. You may force it to overwrite the master file with the \-\-force switch. NOTE: this will require you to change the passwords protecting the gateway identity keystore and all credential stores.
+
+#### Alias creation ####
+##### `bin/knoxcli.sh create-alias name [--cluster c] [--value v] [--generate] 
[--help]` #####
+Creates a password alias and stores it in a credential store within the 
`{GATEWAY_HOME}/data/security/keystores` dir. 
+
+argument    | description
+------------|-----------
+name|name of the alias to create
+\-\-cluster|name of Hadoop cluster for the cluster specific credential store; otherwise assumes that it is for the gateway itself
+\-\-value|parameter for specifying the actual password; otherwise prompted. Escape complex passwords or surround with single quotes.
+\-\-generate|boolean flag to indicate whether the tool should just generate the value. This assumes that \-\-value is not set and will result in an error otherwise. The user will not be prompted for the value when \-\-generate is set.
+
+#### Alias deletion ####
+##### `bin/knoxcli.sh delete-alias name [--cluster c] [--help]` #####
+Deletes a password and alias mapping from a credential store within 
`{GATEWAY_HOME}/data/security/keystores`.
+
+argument | description
+---------|-----------
+name | name of the alias to delete  
+\-\-cluster | name of Hadoop cluster for the cluster specific credential store 
otherwise assumes '__gateway'
+
+#### Alias listing ####
+##### `bin/knoxcli.sh list-alias [--cluster c] [--help]` #####
+Lists the alias names for the credential store within 
`{GATEWAY_HOME}/data/security/keystores`.
+
+NOTE: This command will list the aliases in lowercase which is a result of the 
underlying credential store implementation. Lookup of credentials is a case 
insensitive operation - so this is not an issue.
+
+argument | description
+---------|-----------
+\-\-cluster    |       name of Hadoop cluster for the cluster specific 
credential store otherwise assumes '__gateway'
+
+#### Self-signed cert creation ####
+##### `bin/knoxcli.sh create-cert [--hostname n] [--help]` #####
+Creates and stores a self-signed certificate to represent the identity of the 
gateway instance. This is stored within the 
`{GATEWAY_HOME}/data/security/keystores/gateway.jks` keystore.  
+
+argument | description
+---------|-----------
+\-\-hostname|name of the host to be used in the self-signed certificate. This 
allows multi-host deployments to specify the proper hostnames for hostname 
verification to succeed on the client side of the SSL connection. The default 
is 'localhost'.
+
+#### Certificate Export ####
+##### `bin/knoxcli.sh export-cert [--type JKS|PEM] [--help]` #####
+Exports and stores the gateway-identity certificate as the type indicated or 
PEM by default. This is stored within the 
`{GATEWAY_HOME}/data/security/keystores/` directory as either 
gateway-identity.pem or gateway-client-trust.jks depending on the type 
specified.  
+
+#### Topology Redeploy ####
+##### `bin/knoxcli.sh redeploy [--cluster c]` #####
+Redeploys one or all of the gateway's clusters (a.k.a topologies).
+
+#### Topology Listing ####
+##### `bin/knoxcli.sh list-topologies [--help]` ####
+Lists all of the topologies found in Knox's topologies directory. Useful for 
specifying a valid --cluster argument.
+
+#### Topology Validation ####
+##### `bin/knoxcli.sh validate-topology [--cluster c] [--path path] [--help]` 
####
+This ensures that a cluster's description (a.k.a. topology) follows the correct formatting rules. It is possible to specify a name of a cluster already in the topology directory, or a path to any file.
+
+argument | description
+---------|-----------
+\-\-cluster    |       name of Hadoop cluster for which you want to validate
+\-\-path | path to topology file that you wish to validate.
+
+#### LDAP Authentication and Authorization ####
+##### `bin/knoxcli.sh user-auth-test [--cluster c] [--u username] [--p 
password] [--g] [--d] [--help]` ####
+This command will test a topology's ability to connect, authenticate, and authorize a user with an LDAP server. The only required argument is the --cluster argument to specify the name of the topology you wish to use. The topology must be valid (it passes the validate-topology command). If the --u and --p arguments are not specified, the command line will prompt for a username and password. If authentication is successful, then the command will attempt to use the topology to do an LDAP group lookup. The topology must be configured correctly to do this. If it is not, groups will not be returned and no errors will be printed unless the `--g` flag is specified. Currently this command only works if a topology supports the use of ShiroProvider for authentication.
+
+argument | description
+---------|-----------
+\-\-cluster    | Required; name of cluster for which you want to test 
authentication
+\-\-u | Optional; username you wish to authenticate with.
+\-\-p | Optional; password you wish to authenticate with
+\-\-g | Optional; Specify that you are looking to return a user's groups. If not specified, group lookup errors won't be returned.
+\-\-d | Optional; Print extra debug info on failed authentication
+
+#### Topology LDAP Bind ####
+##### `bin/knoxcli.sh system-user-auth-test [--cluster c] [--d] [--help]` ####
+This command will test a given topology's ability to connect, bind, and authenticate with the LDAP server using the settings specified in the topology file. The bind currently only works with Shiro as the authentication provider. There are also two parameters required inside of the topology for this to work.
+
+argument | description
+---------|-----------
+\-\-cluster    | Required; name of cluster for which you want to test 
authentication
+\-\-d | Optional; Print extra debug info on failed authentication
+
+
+#### Gateway Service Test ####
+##### `bin/knoxcli.sh service-test [--cluster c] [--hostname hostname] [--port 
port] [--u username] [--p password] [--d] [--help]` ####
+
+This will test a topology configuration's ability to connect to multiple Hadoop services. Each service found in a topology will be tested with multiple URLs. Results are printed to the console in JSON format.
+
+argument | description
+---------|-----------
+\-\-cluster    | Required; name of cluster for which you want to test 
authentication
+\-\-hostname   | Required; hostname of the cluster currently running on the 
machine
+\-\-port       | Optional; port that the cluster is running on. If not 
supplied CLI will try to read config files to find the port.
+\-\-u  | Required; username to authorize against Hadoop services
+\-\-p  | Required; password to match username
+\-\-d | Optional; Print extra debug info on failed authentication
\ No newline at end of file

Added: knox/trunk/books/0.11.0/likeised
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/likeised?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/likeised (added)
+++ knox/trunk/books/0.11.0/likeised Tue Dec 13 16:00:35 2016
@@ -0,0 +1,47 @@
+# This sed script must be kept in sync with the table of contents
+
+#wrap the entire page and the banner
+s@<p><br>  <img src="knox-logo.gif"@<div id="page-wrap"><div 
id="banner"><p><br>  <img src="knox-logo.gif"@
+
+# close the banner and start the sidebar
+s@<h2><a id="Table+Of+Contents"></a>Table Of Contents</h2>@</div><div 
id="sidebar">@
+
+# close the sidebar, start the main content section and start the first of the chapters
+s@<h2><a id="Introduction@</div><div id="content"><div 
id="Introduction"><h2><a id="Introduction@
+s@<h2><a id="Quick+Start@</div><div id="Quick+Start"><h2><a id="Quick+Start@
+s@<h2><a id="Apache+Knox+Details@</div><div id="Apache+Knox+Details"><h2><a 
id="Apache+Knox+Details@
+# subchapters...
+s@<h4><a id="Apache+Knox+Directory+Layout@</div><div 
id="Apache+Knox+Directory+Layout"><h4><a id="Layout@
+s@<h3><a id="Supported+Services@</div><div id="Supported+Services"><h3><a 
id="Supported+Services@
+s@<h4><a id="Configure+Sandbox+port+mapping+for+VirtualBox@</div><div 
id="Configure+Sandbox+port+mapping+for+VirtualBox"><h4><a 
id="Configure+Sandbox+port+mapping+for+VirtualBox@
+s@<h2><a id="Gateway+Details@</div><div id="Gateway+Details"><h2><a 
id="Gateway+Details@
+s@<h3><a id="Configuration@</div><div id="Configuration"><h3><a 
id="Configuration@
+s@<h3><a id="Knox+CLI@</div><div id="Knox+CLI"><h3><a id="Knox+CLI@
+s@<h3><a id="Authentication@</div><div id="Authentication"><h3><a 
id="Authentication@
+s@<h3><a id="LDAP+Group+Lookup@</div><div id="LDAP+Group+Lookup"><h3><a 
id="LDAP+Group+Lookup@
+s@<h3><a id="Identity+Assertion@</div><div id="Identity+Assertion"><h3><a 
id="Identity+Assertion@
+s@<h3><a id="Authorization@</div><div id="Authorization"><h3><a 
id="Authorization@
+s@<h2><a id="Configuration@</div><div id="Configuration"><h2><a 
id="Configuration@
+s@<h3><a id="Secure+Clusters@</div><div id="Secure+Clusters"><h3><a 
id="Secure+Clusters@
+s@<h3><a id="High+Availability@</div><div id="High+Availability"><h3><a 
id="High+Availability@
+s@<h3><a id="Web+App+Security+Provider@</div><div 
id="Web+App+Security+Provider"><h3><a id="Web+App+Security+Provider@
+s@<h3><a id="Preauthenticated+SSO+Provider@</div><div 
id="Preauthenticated+SSO+Provider"><h3><a id="Preauthenticated+SSO+Provider@
+s@<h3><a id="Mutual+Authentication+with+SSL@</div><div 
id="Mutual+Authentication+with+SSL"><h3><a id="Mutual+Authentication+with+SSL@
+s@<h3><a id="Audit@</div><div id="Audit"><h3><a id="Audit@
+s@<h2><a id="Client+Details@</div><div id="Client+Details"><h2><a 
id="Client+Details@
+s@<h2><a id="Service+Details@</div><div id="Service+Details"><h2><a 
id="Service+Details@
+s@<h3><a id="WebHDFS@</div><div id="WebHDFS"><h3><a id="WebHDFS@
+s@<h3><a id="WebHCat@</div><div id="WebHCat"><h3><a id="WebHCat@
+s@<h3><a id="Oozie@</div><div id="Oozie"><h3><a id="Oozie@
+s@<h3><a id="HBase@</div><div id="HBase"><h3><a id="HBase@
+s@<h3><a id="Hive@</div><div id="Hive"><h3><a id="Hive@
+s@<h3><a id="Storm@</div><div id="Storm"><h3><a id="Storm@
+s@<h3><a id="Default+Service+HA+support@</div><div 
id="Default+Service+HA+support"><h3><a id="Default+Service+HA+support@
+s@<h2><a id="Limitations@</div><div id="Limitations"><h2><a id="Limitations@
+s@<h2><a id="Troubleshooting@</div><div id="Troubleshooting"><h2><a 
id="Troubleshooting@
+s@<h2><a id="Export+Controls@</div><div id="Export+Controls"><h2><a 
id="Export+Controls@
+
+# closing the last chapter section, page-wrap and content sections is done 
outside of this script
+# using cat >> filename
+
+# sed -f likeised knox-incubating-0-4-0.html > knox-incubating-0-4-0-new.html 
&& echo "</div></div></div>" >> knox-incubating-0-4-0-new.html

Added: knox/trunk/books/0.11.0/quick_start.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/quick_start.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/quick_start.md (added)
+++ knox/trunk/books/0.11.0/quick_start.md Tue Dec 13 16:00:35 2016
@@ -0,0 +1,207 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Quick Start ##
+
+Here are the steps to have Apache Knox up and running against a Hadoop Cluster:
+
+1. Verify system requirements
+1. Download a virtual machine (VM) with Hadoop
+1. Download Apache Knox Gateway
+1. Start the virtual machine with Hadoop
+1. Install Knox
+1. Start the LDAP embedded within Knox
+1. Create the master secret
+1. Start the Knox Gateway
+1. Do Hadoop with Knox
+
+
+
+### 1 - Requirements ###
+
+#### Java ####
+
+Java 1.6 or later is required for the Knox Gateway runtime.
+Use the command below to check the version of Java installed on the system 
where Knox will be running.
+
+    java -version
+
+#### Hadoop ####
+
+Knox 0.10.0 supports Hadoop 2.x; the quick start instructions assume a Hadoop 2.x virtual machine based environment.
+
+
+### 2 - Download Hadoop 2.x VM ###
+The quick start provides a link to download Hadoop 2.0 based Hortonworks 
virtual machine [Sandbox](http://hortonworks.com/products/hdp-2/#install). 
Please note Knox supports other Hadoop distributions and is configurable 
against a full-blown Hadoop cluster.
+Configuring Knox for Hadoop 2.x versions, or for Hadoop deployed in EC2 or in a custom Hadoop cluster, is documented in the advanced deployment guide.
+
+
+### 3 - Download Apache Knox Gateway ###
+
+Download one of the distributions below from the [Apache mirrors][mirror].
+
+* Source archive: [knox-0.10.0-src.zip][src-zip] ([PGP signature][src-pgp], 
[SHA1 digest][src-sha], [MD5 digest][src-md5])
+* Binary archive: [knox-0.10.0.zip][bin-zip] ([PGP signature][bin-pgp], [SHA1 
digest][bin-sha], [MD5 digest][bin-md5])
+
+[keys]: https://dist.apache.org/repos/dist/release/knox/KEYS 
+[src-zip]: http://www.apache.org/dyn/closer.cgi/knox/0.10.0/knox-0.10.0-src.zip
+[src-sha]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0-src.zip.sha
+[src-pgp]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0-src.zip.asc
+[src-md5]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0-src.zip.md5
+[bin-zip]: http://www.apache.org/dyn/closer.cgi/knox/0.10.0/knox-0.10.0.zip
+[bin-pgp]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0.zip.asc
+[bin-sha]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0.zip.sha
+[bin-md5]: http://www.apache.org/dist/knox/0.10.0/knox-0.10.0.zip.md5
+
+Apache Knox Gateway releases are available under the [Apache License, Version 
2.0][asl].
+See the NOTICE file contained in each release artifact for applicable 
copyright attribution notices.
+
+
+### Verify ###
+
+While recommended, verification is an optional step. You can verify the integrity of any downloaded files using the PGP signatures.
+Please read [Verifying Apache HTTP Server 
Releases](http://httpd.apache.org/dev/verification.html) for more information 
on why you should verify our releases.
+
+The PGP signatures can be verified using PGP or GPG.
+First download the [KEYS][keys] file as well as the .asc signature files for 
the relevant release packages.
+Make sure you get these files from the main distribution directory linked 
above, rather than from a mirror.
+Then verify the signatures using one of the methods below.
+
+    % pgpk -a KEYS
+    % pgpv knox-0.10.0.zip.asc
+
+or
+
+    % pgp -ka KEYS
+    % pgp knox-0.10.0.zip.asc
+
+or
+
+    % gpg --import KEYS
+    % gpg --verify knox-0.10.0.zip.asc
+
+### 4 - Start Hadoop virtual machine ###
+
+Start the Hadoop virtual machine.
+
+### 5 - Install Knox ###
+
+The steps required to install the gateway will vary depending upon which 
distribution format (zip | rpm) was downloaded.
+In either case you will end up with a directory where the gateway is installed.
+This directory will be referred to as your `{GATEWAY_HOME}` throughout this 
document.
+
+#### ZIP ####
+
+If you downloaded the Zip distribution you can simply extract the contents 
into a directory.
+The example below provides a command that can be executed to do this.
+Note the `{VERSION}` portion of the command must be replaced with an actual 
Apache Knox Gateway version number.
+This might be 0.10.0 for example.
+
+    unzip knox-{VERSION}.zip
+
+This will create a directory `knox-{VERSION}` in your current directory.
+The directory `knox-{VERSION}` will be considered your `{GATEWAY_HOME}`.
+
+### 6 - Start LDAP embedded in Knox ###
+
+Knox comes with an LDAP server for demonstration purposes.
+Note: If the tool used to extract the contents of the Tar or tar.gz file was not capable of making the files in the bin directory executable, you will need to do so manually before running them.
+
+    cd {GATEWAY_HOME}
+    bin/ldap.sh start
+
+### 7 - Create the Master Secret
+
+Run the knoxcli create-master command in order to persist the master secret
+that is used to protect the key and credential stores for the gateway instance.
+
+    cd {GATEWAY_HOME}
+    bin/knoxcli.sh create-master
+
+The cli will prompt you for the master secret (i.e. password).
+
+### 8 - Start Knox ###
+
+The gateway can be started using the provided shell script.
+
+The server will discover the persisted master secret during start up and 
complete the setup process for demo installs.
+A demo install will consist of a knox gateway instance with an identity 
certificate for localhost.
+This will require clients to be on the same machine or to turn off hostname 
verification.
+For more involved deployments, See the Knox CLI section of this document for 
additional configuration options,
+including the ability to create a self-signed certificate for a specific 
hostname.
+
+    cd {GATEWAY_HOME}
+    bin/gateway.sh start
+
+When starting the gateway this way the process will be run in the background.
+The log files will be written to {GATEWAY_HOME}/logs and the process ID files (PIDS) will be written to {GATEWAY_HOME}/pids.
+
+In order to stop a gateway that was started with the script use this command.
+
+    cd {GATEWAY_HOME}
+    bin/gateway.sh stop
+
+If for some reason the gateway is stopped other than by using the command 
above you may need to clear the tracking PID.
+
+    cd {GATEWAY_HOME}
+    bin/gateway.sh clean
+
+__NOTE: This command will also clear any .out and .err file from the 
{GATEWAY_HOME}/logs directory so use this with caution.__
+
+
+### 9 - Do Hadoop with Knox
+
+#### Invoke the LISTSTATUS operation on WebHDFS via the gateway.
+This will return a directory listing of the root (i.e. /) directory of HDFS.
+
+    curl -i -k -u guest:guest-password -X GET \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/?op=LISTSTATUS'
+
+The results of the above command should be something along the lines of the output below.
+The exact information returned is subject to the content within HDFS in your Hadoop cluster.
+Successfully executing this command at a minimum proves that the gateway is properly configured to provide access to WebHDFS.
+It does not necessarily prove that any of the other services are correctly configured to be accessible.
+To validate that, see the sections for the individual services in #[Service Details].
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+    Content-Length: 760
+    Server: Jetty(6.1.26)
+
+    {"FileStatuses":{"FileStatus":[
+    
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595859762,"owner":"hdfs","pathSuffix":"apps","permission":"755","replication":0,"type":"DIRECTORY"},
+    
{"accessTime":0,"blockSize":0,"group":"mapred","length":0,"modificationTime":1350595874024,"owner":"mapred","pathSuffix":"mapred","permission":"755","replication":0,"type":"DIRECTORY"},
+    
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350596040075,"owner":"hdfs","pathSuffix":"tmp","permission":"777","replication":0,"type":"DIRECTORY"},
+    
{"accessTime":0,"blockSize":0,"group":"hdfs","length":0,"modificationTime":1350595857178,"owner":"hdfs","pathSuffix":"user","permission":"755","replication":0,"type":"DIRECTORY"}
+    ]}}
+
+#### Put a file in HDFS via Knox.
+
+    curl -i -k -u guest:guest-password -X PUT \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE'
+
+    curl -i -k -u guest:guest-password -T LICENSE -X PUT \
+        '{Value of Location header from the response above}'
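+
+The two requests above follow the WebHDFS protocol: the first returns a redirect in the Location header and the second uploads the file to that location. Depending on your curl version, the two steps can often be combined by letting curl follow the redirect itself; this is a convenience sketch rather than part of the documented flow (add `--location-trusted` if the redirect points to a different host):
+
+    curl -i -k -L -u guest:guest-password -T LICENSE -X PUT \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=CREATE'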
+
+#### Get a file in HDFS via Knox.
+
+    curl -i -k -u guest:guest-password -X GET \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=OPEN'
+
+    curl -i -k -u guest:guest-password -X GET \
+        '{Value of Location header from command response above}'
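+
+As with the create example, the OPEN operation first returns a redirect; curl can be asked to follow it in a single invocation (again a convenience sketch, assuming the redirect target is reachable from the client):
+
+    curl -i -k -L -u guest:guest-password -X GET \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp/LICENSE?op=OPEN'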
+        

Added: knox/trunk/books/0.11.0/service_config.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/service_config.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/service_config.md (added)
+++ knox/trunk/books/0.11.0/service_config.md Tue Dec 13 16:00:35 2016
@@ -0,0 +1,42 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Common Service Config ###
+
+It is possible to override a few of the global configuration settings provided in gateway-site.xml at the service level.
+These overrides are specified as name/value pairs within the \<service> elements of a particular service.
+The overridden settings apply only to that service.
+
+The following table shows the common configuration settings available at the 
service level via service level parameters.
+Individual services may support additional service level parameters.
+
+Property | Description | Default
+---------|-------------|---------
+httpclient.maxConnections|The maximum number of connections that a single httpclient will maintain to a single host:port. The default is 32.|32
+httpclient.connectionTimeout|The amount of time to wait when attempting a connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent. | System Dependent
+httpclient.socketTimeout|The amount of time to wait for data on a socket before aborting the connection. The natural unit is milliseconds but a 's' or 'm' suffix may be used for seconds or minutes respectively. The default timeout is system dependent but is likely to be indefinite. | System Dependent
+
+The example below demonstrates how these service level parameters are used.
+
+    <service>
+         <role>HIVE</role>
+         <param>
+             <name>httpclient.socketTimeout</name>
+             <value>180s</value>
+         </param>
+    </service>
+
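+Several of these parameters may be combined within a single \<service> element; the role and values below are purely illustrative:
+
+    <service>
+         <role>WEBHDFS</role>
+         <param>
+             <name>httpclient.maxConnections</name>
+             <value>64</value>
+         </param>
+         <param>
+             <name>httpclient.connectionTimeout</name>
+             <value>5s</value>
+         </param>
+         <param>
+             <name>httpclient.socketTimeout</name>
+             <value>2m</value>
+         </param>
+    </service>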

Added: knox/trunk/books/0.11.0/service_default_ha.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/service_default_ha.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/service_default_ha.md (added)
+++ knox/trunk/books/0.11.0/service_default_ha.md Tue Dec 13 16:00:35 2016
@@ -0,0 +1,101 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Default Service HA support ###
+
+Knox provides connectivity based failover functionality for service calls that 
can be made to more than one server
+instance in a cluster. To enable this functionality HaProvider configuration 
needs to be enabled for the service and
+the service itself needs to be configured with more than one URL in the 
topology file.
+
+The default HA functionality works on a simple round robin algorithm, where 
the top of the list of URLs is always used
+to route all of a service's REST calls until a connection error occurs. The 
top URL is then put at the bottom of the
+list and the next URL is attempted. This goes on until the setting of 
'maxFailoverAttempts' is reached.
+
+At present the following services can use this default High Availability 
functionality and have been tested for the
+same:
+
+* WEBHCAT
+* HBASE
+* OOZIE
+
+To enable HA functionality for a service in Knox the following configuration 
has to be added to the topology file.
+
+    <provider>
+         <role>ha</role>
+         <name>HaProvider</name>
+         <enabled>true</enabled>
+         <param>
+             <name>{SERVICE}</name>
+             <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
+         </param>
+    </provider>
+
+The role and name of the provider above must be as shown. The name in the 'param' section, i.e. `{SERVICE}`, must match
+the role name of the service that is being configured for HA, and the value in the 'param' section is the HA configuration
+for that particular service. For example, the value of `{SERVICE}` can be 'WEBHCAT', 'HBASE' or 'OOZIE'.
+
+To configure multiple services in HA mode, additional 'param' sections can be 
added.
+
+For example,
+
+    <provider>
+         <role>ha</role>
+         <name>HaProvider</name>
+         <enabled>true</enabled>
+         <param>
+             <name>OOZIE</name>
+             <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
+         </param>
+         <param>
+             <name>HBASE</name>
+             <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
+         </param>
+         <param>
+             <name>WEBHCAT</name>
+             <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
+         </param>
+    </provider>
+
+The various configuration parameters are described below:
+
+* maxFailoverAttempts -
+This is the maximum number of times a failover will be attempted. The failover 
strategy at this time is very simplistic
+in that the next URL in the list of URLs provided for the service is used and 
the one that failed is put at the bottom
+of the list. If the list is exhausted and the maximum number of attempts is 
not reached then the first URL will be tried
+again.
+
+* failoverSleep -
+The amount of time in milliseconds that the process will wait or sleep before attempting a failover.
+
+* enabled -
+Flag to turn the particular service on or off for HA.
+
+And for the service configuration itself the additional URLs should be added 
to the list.
+
+    <service>
+        <role>{SERVICE}</role>
+        <url>http://host1:port1</url>
+        <url>http://host2:port2</url>
+    </service>
+
+For example,
+
+    <service>
+        <role>OOZIE</role>
+        <url>http://sandbox1:11000/oozie</url>
+        <url>http://sandbox2:11000/oozie</url>
+    </service>

Added: knox/trunk/books/0.11.0/service_hbase.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/0.11.0/service_hbase.md?rev=1774044&view=auto
==============================================================================
--- knox/trunk/books/0.11.0/service_hbase.md (added)
+++ knox/trunk/books/0.11.0/service_hbase.md Tue Dec 13 16:00:35 2016
@@ -0,0 +1,657 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### HBase ###
+
+HBase provides an optional REST API (previously called Stargate).
+See the HBase REST Setup section below for getting started with the HBase REST 
API and Knox with the Hortonworks Sandbox environment.
+
+#### HBase URL Mapping ####
+
+| ------- | ----------------------------------------------------------------------------- |
+| Gateway | `https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/hbase` |
+| Cluster | `http://{hbase-rest-host}:8080/` |
+
+#### HBase Examples ####
+
+The examples below illustrate the set of basic operations with an HBase instance using the REST API.
+Use the following link to get more details about the HBase REST API: http://hbase.apache.org/book.html#_rest.
+
+Note: Some HBase examples may not work due to enabled [Access Control](http://hbase.apache.org/book.html#_securing_access_to_your_data). The user may not be granted access to perform the operations in the samples. In order to check whether Access Control is configured in the HBase instance, verify `hbase-site.xml` for the presence of `org.apache.hadoop.hbase.security.access.AccessController` in the `hbase.coprocessor.master.classes` and `hbase.coprocessor.region.classes` properties.
+To grant the Read, Write, Create permissions to the `guest` user execute the following command:
+
+    echo grant 'guest', 'RWC' | hbase shell
+
+If you are using a cluster secured with Kerberos you will need to have used 
`kinit` to authenticate to the KDC.
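+
+For example (the principal below is only a placeholder; substitute an account from your own realm):
+
+    kinit guest@EXAMPLE.COM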
+
+#### HBase REST API Setup ####
+
+#### Launch REST API ####
+
+The command below launches the REST daemon on port 8080 (the default):
+
+    sudo {HBASE_BIN}/hbase-daemon.sh start rest
+
+Where `{HBASE_BIN}` is `/usr/hdp/current/hbase-master/bin/` in the case of an HDP install.
+
+To use a different port use the `-p` option:
+
+    sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
+
+#### Configure Sandbox port mapping for VirtualBox ####
+
+1. Select the VM
+2. Select menu Machine>Settings...
+3. Select tab Network
+4. Select Adapter 1
+5. Press Port Forwarding button
+6. Press Plus button to insert new rule: Name=HBASE REST, Host Port=60080, Guest Port=60080
+7. Press OK to close the rule window
+8. Press OK in the Network window to save the changes
+
+#### HBase Restart ####
+
+If it becomes necessary to restart HBase you can log into the hosts running 
HBase and use these steps.
+
+    sudo {HBASE_BIN}/hbase-daemon.sh stop rest
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop regionserver
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop master
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh stop zookeeper
+
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start regionserver
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start master
+    sudo -u hbase {HBASE_BIN}/hbase-daemon.sh start zookeeper
+    sudo {HBASE_BIN}/hbase-daemon.sh start rest -p 60080
+
+Where `{HBASE_BIN}` is `/usr/hdp/current/hbase-master/bin/` in the case of an HDP Sandbox install.
+ 
+#### HBase client DSL ####
+
+For more details about client DSL usage please look at the chapter about the 
client DSL in this guide.
+
+After launching the shell, execute the following command to be able to use the snippets below:
+`import org.apache.hadoop.gateway.shell.hbase.HBase;`
+ 
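+The snippets also assume an established `session`; a minimal setup, using the same sandbox gateway URL and demo credentials as the full script later in this section, might look like:
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+    import org.apache.hadoop.gateway.shell.hbase.HBase
+
+    session = Hadoop.login("https://localhost:8443/gateway/sandbox", "guest", "guest-password")
+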
+#### systemVersion() - Query Software Version.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).systemVersion().now().string`
+
+#### clusterVersion() - Query Storage Cluster Version.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).clusterVersion().now().string`
+
+#### status() - Query Storage Cluster Status.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).status().now().string`
+
+#### table().list() - Query Table List.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+  * `HBase.session(session).table().list().now().string`
+
+#### table(String tableName).schema() - Query Table Schema.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).table(tableName).schema().now().string`
+
+#### table(String tableName).create() - Create Table Schema.
+
+* Request
+    * attribute(String name, Object value) - the table's attribute.
+    * family(String name) - starts family definition. Has sub requests:
+        * attribute(String name, Object value) - the family's attribute.
+        * endFamilyDef() - finishes family definition.
+* Response
+    * EmptyResponse
+* Example
+
+
+    HBase.session(session).table(tableName).create()
+       .attribute("tb_attr1", "value1")
+       .attribute("tb_attr2", "value2")
+       .family("family1")
+           .attribute("fm_attr1", "value3")
+           .attribute("fm_attr2", "value4")
+       .endFamilyDef()
+       .family("family2")
+       .family("family3")
+       .endFamilyDef()
+       .attribute("tb_attr3", "value5")
+       .now()
+
+#### table(String tableName).update() - Update Table Schema.
+
+* Request
+    * family(String name) - starts family definition. Has sub requests:
+        * attribute(String name, Object value) - the family's attribute.
+        * endFamilyDef() - finishes family definition.
+* Response
+    * EmptyResponse
+* Example
+
+
+    HBase.session(session).table(tableName).update()
+         .family("family1")
+             .attribute("fm_attr1", "new_value3")
+         .endFamilyDef()
+         .family("family4")
+             .attribute("fm_attr3", "value6")
+         .endFamilyDef()
+         .now()
+
+#### table(String tableName).regions() - Query Table Metadata.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).table(tableName).regions().now().string`
+
+#### table(String tableName).delete() - Delete Table.
+
+* Request
+    * No request parameters.
+* Response
+    * EmptyResponse
+* Example
+    * `HBase.session(session).table(tableName).delete().now()`
+
+#### table(String tableName).row(String rowId).store() - Cell Store.
+
+* Request
+    * column(String family, String qualifier, Object value, Long time) - the 
data to store; "qualifier" may be "null"; "time" is optional.
+* Response
+    * EmptyResponse
+* Example
+
+
+    HBase.session(session).table(tableName).row("row_id_1").store()
+         .column("family1", "col1", "col_value1")
+         .column("family1", "col2", "col_value2", 1234567890l)
+         .column("family2", null, "fam_value1")
+         .now()
+
+
+    HBase.session(session).table(tableName).row("row_id_2").store()
+         .column("family1", "row2_col1", "row2_col_value1")
+         .now()
+
+#### table(String tableName).row(String rowId).query() - Cell or Row Query.
+
+* rowId is optional. Querying with null or empty rowId will select all rows.
+* Request
+    * column(String family, String qualifier) - the column to select; 
"qualifier" is optional.
+    * startTime(Long) - the lower bound for filtration by time.
+    * endTime(Long) - the upper bound for filtration by time.
+    * times(Long startTime, Long endTime) - the lower and upper bounds for 
filtration by time.
+    * numVersions(Long) - the maximum number of versions to return.
+* Response
+    * BasicResponse
+* Example
+
+
+    HBase.session(session).table(tableName).row("row_id_1")
+         .query()
+         .now().string
+
+
+    HBase.session(session).table(tableName).row().query().now().string
+
+
+    HBase.session(session).table(tableName).row().query()
+         .column("family1", "row2_col1")
+         .column("family2")
+         .times(0, Long.MAX_VALUE)
+         .numVersions(1)
+         .now().string
+
+#### table(String tableName).row(String rowId).delete() - Row, Column, or Cell Delete.
+
+* Request
+    * column(String family, String qualifier) - the column to delete; 
"qualifier" is optional.
+    * time(Long) - the upper bound for time filtration.
+* Response
+    * EmptyResponse
+* Example
+
+
+    HBase.session(session).table(tableName).row("row_id_1")
+         .delete()
+         .column("family1", "col1")
+         .now()
+
+
+    HBase.session(session).table(tableName).row("row_id_1")
+         .delete()
+         .column("family2")
+         .time(Long.MAX_VALUE)
+         .now()
+
+#### table(String tableName).scanner().create() - Scanner Creation.
+
+* Request
+    * startRow(String) - the lower bound for filtration by row id.
+    * endRow(String) - the upper bound for filtration by row id.
+    * rows(String startRow, String endRow) - the lower and upper bounds for 
filtration by row id.
+    * column(String family, String qualifier) - the column to select; 
"qualifier" is optional.
+    * batch(Integer) - the batch size.
+    * startTime(Long) - the lower bound for filtration by time.
+    * endTime(Long) - the upper bound for filtration by time.
+    * times(Long startTime, Long endTime) - the lower and upper bounds for 
filtration by time.
+    * filter(String) - the filter XML definition.
+    * maxVersions(Integer) - the maximum number of versions to return.
+* Response
+    * scannerId : String - the scanner ID of the created scanner. Consumes 
body.
+* Example
+
+
+    HBase.session(session).table(tableName).scanner().create()
+         .column("family1", "col2")
+         .column("family2")
+         .startRow("row_id_1")
+         .endRow("row_id_2")
+         .batch(1)
+         .startTime(0)
+         .endTime(Long.MAX_VALUE)
+         .filter("")
+         .maxVersions(100)
+         .now()
+
+#### table(String tableName).scanner(String scannerId).getNext() - Scanner Get Next.
+
+* Request
+    * No request parameters.
+* Response
+    * BasicResponse
+* Example
+    * `HBase.session(session).table(tableName).scanner(scannerId).getNext().now().string`
+
+#### table(String tableName).scanner(String scannerId).delete() - Scanner Deletion.
+
+* Request
+    * No request parameters.
+* Response
+    * EmptyResponse
+* Example
+    * `HBase.session(session).table(tableName).scanner(scannerId).delete().now()`
+
+### HBase via Client DSL ###
+
+This example illustrates the sequence of all basic HBase operations:
+1. get system version
+2. get cluster version
+3. get cluster status
+4. create the table
+5. get list of tables
+6. get table schema
+7. update table schema
+8. insert single row into table
+9. query row by id
+10. query all rows
+11. delete cell from row
+12. delete entire column family from row
+13. get table regions
+14. create scanner
+15. fetch values using scanner
+16. drop scanner
+17. drop the table
+
+There are several ways to do this depending upon your preference.
+
+You can use the Groovy interpreter provided with the distribution.
+
+    java -jar bin/shell.jar samples/ExampleHBase.groovy
+
+You can manually type the KnoxShell DSL script into the interactive Groovy interpreter provided with the distribution.
+
+    java -jar bin/shell.jar
+
+Each line from the file below will need to be typed or copied into the 
interactive shell.
+
+    /**
+     * Licensed to the Apache Software Foundation (ASF) under one
+     * or more contributor license agreements.  See the NOTICE file
+     * distributed with this work for additional information
+     * regarding copyright ownership.  The ASF licenses this file
+     * to you under the Apache License, Version 2.0 (the
+     * "License"); you may not use this file except in compliance
+     * with the License.  You may obtain a copy of the License at
+     *
+     *     http://www.apache.org/licenses/LICENSE-2.0
+     *
+     * Unless required by applicable law or agreed to in writing, software
+     * distributed under the License is distributed on an "AS IS" BASIS,
+     * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+     * See the License for the specific language governing permissions and
+     * limitations under the License.
+     */
+    package org.apache.hadoop.gateway.shell.hbase
+
+    import org.apache.hadoop.gateway.shell.Hadoop
+
+    import static java.util.concurrent.TimeUnit.SECONDS
+
+    gateway = "https://localhost:8443/gateway/sandbox";
+    username = "guest"
+    password = "guest-password"
+    tableName = "test_table"
+
+    session = Hadoop.login(gateway, username, password)
+
+    println "System version : " + 
HBase.session(session).systemVersion().now().string
+
+    println "Cluster version : " + 
HBase.session(session).clusterVersion().now().string
+
+    println "Status : " + HBase.session(session).status().now().string
+
+    println "Creating table '" + tableName + "'..."
+
+    HBase.session(session).table(tableName).create()  \
+        .attribute("tb_attr1", "value1")  \
+        .attribute("tb_attr2", "value2")  \
+        .family("family1")  \
+            .attribute("fm_attr1", "value3")  \
+            .attribute("fm_attr2", "value4")  \
+        .endFamilyDef()  \
+        .family("family2")  \
+        .family("family3")  \
+        .endFamilyDef()  \
+        .attribute("tb_attr3", "value5")  \
+        .now()
+
+    println "Done"
+
+    println "Table List : " + 
HBase.session(session).table().list().now().string
+
+    println "Schema for table '" + tableName + "' : " + HBase.session(session) 
 \
+        .table(tableName)  \
+        .schema()  \
+        .now().string
+
+    println "Updating schema of table '" + tableName + "'..."
+
+    HBase.session(session).table(tableName).update()  \
+        .family("family1")  \
+            .attribute("fm_attr1", "new_value3")  \
+        .endFamilyDef()  \
+        .family("family4")  \
+            .attribute("fm_attr3", "value6")  \
+        .endFamilyDef()  \
+        .now()
+
+    println "Done"
+
+    println "Schema for table '" + tableName + "' : " + HBase.session(session) 
 \
+        .table(tableName)  \
+        .schema()  \
+        .now().string
+
+    println "Inserting data into table..."
+
+    HBase.session(session).table(tableName).row("row_id_1").store()  \
+        .column("family1", "col1", "col_value1")  \
+        .column("family1", "col2", "col_value2", 1234567890l)  \
+        .column("family2", null, "fam_value1")  \
+        .now()
+
+    HBase.session(session).table(tableName).row("row_id_2").store()  \
+        .column("family1", "row2_col1", "row2_col_value1")  \
+        .now()
+
+    println "Done"
+
+    println "Querying row by id..."
+
+    println HBase.session(session).table(tableName).row("row_id_1")  \
+        .query()  \
+        .now().string
+
+    println "Querying all rows..."
+
+    println HBase.session(session).table(tableName).row().query().now().string
+
+    println "Querying row by id with extended settings..."
+
+    println HBase.session(session).table(tableName).row().query()  \
+        .column("family1", "row2_col1")  \
+        .column("family2")  \
+        .times(0, Long.MAX_VALUE)  \
+        .numVersions(1)  \
+        .now().string
+
+    println "Deleting cell..."
+
+    HBase.session(session).table(tableName).row("row_id_1")  \
+        .delete()  \
+        .column("family1", "col1")  \
+        .now()
+
+    println "Rows after delete:"
+
+    println HBase.session(session).table(tableName).row().query().now().string
+
+    println "Extended cell delete"
+
+    HBase.session(session).table(tableName).row("row_id_1")  \
+        .delete()  \
+        .column("family2")  \
+        .time(Long.MAX_VALUE)  \
+        .now()
+
+    println "Rows after delete:"
+
+    println HBase.session(session).table(tableName).row().query().now().string
+
+    println "Table regions : " + HBase.session(session).table(tableName)  \
+        .regions()  \
+        .now().string
+
+    println "Creating scanner..."
+
+    scannerId = HBase.session(session).table(tableName).scanner().create()  \
+        .column("family1", "col2")  \
+        .column("family2")  \
+        .startRow("row_id_1")  \
+        .endRow("row_id_2")  \
+        .batch(1)  \
+        .startTime(0)  \
+        .endTime(Long.MAX_VALUE)  \
+        .filter("")  \
+        .maxVersions(100)  \
+        .now().scannerId
+
+    println "Scanner id=" + scannerId
+
+    println "Scanner get next..."
+
+    println HBase.session(session).table(tableName).scanner(scannerId)  \
+        .getNext()  \
+        .now().string
+
+    println "Dropping scanner with id=" + scannerId
+
+    HBase.session(session).table(tableName).scanner(scannerId).delete().now()
+
+    println "Done"
+
+    println "Dropping table '" + tableName + "'..."
+
+    HBase.session(session).table(tableName).delete().now()
+
+    println "Done"
+
+    session.shutdown(10, SECONDS)
+
+### HBase via cURL
+
+#### Get software version
+
+Set Accept Header to "text/plain", "text/xml", "application/json" or 
"application/x-protobuf"
+
+    %  curl -ik -u guest:guest-password\
+     -H "Accept:  application/json"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/version'
+
+#### Get version information regarding the HBase cluster backing the REST API instance
+
+Set Accept Header to "text/plain", "text/xml" or "application/x-protobuf"
+
+    %  curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/version/cluster'
+
+#### Get detailed status on the HBase cluster backing the REST API instance.
+
+Set Accept Header to "text/plain", "text/xml", "application/json" or 
"application/x-protobuf"
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/status/cluster'
+
+#### Get the list of available tables.
+
+Set Accept Header to "text/plain", "text/xml", "application/json" or 
"application/x-protobuf"
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase'
+
+#### Create table with two column families using xml input
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"   -H "Content-Type: text/xml"\
+     -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="table1"><ColumnSchema name="family1"/><ColumnSchema name="family2"/></TableSchema>'\
+     -X PUT 'https://localhost:8443/gateway/sandbox/hbase/table1/schema'
+
+#### Create table with two column families using JSON input
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: application/json"  -H "Content-Type: application/json"\
+     -d '{"name":"table2","ColumnSchema":[{"name":"family3"},{"name":"family4"}]}'\
+     -X PUT 'https://localhost:8443/gateway/sandbox/hbase/table2/schema'
+
+#### Get table metadata
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/table1/regions'
+
+#### Insert single row table
+
+    curl -ik -u guest:guest-password\
+     -H "Content-Type: text/xml"\
+     -H "Accept: text/xml"\
+     -d '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><CellSet><Row key="cm93MQ=="><Cell column="ZmFtaWx5MTpjb2wx">dGVzdA==</Cell></Row></CellSet>'\
+     -X POST 'https://localhost:8443/gateway/sandbox/hbase/table1/row1'
+
+#### Insert multiple rows into table
+
+    curl -ik -u guest:guest-password\
+     -H "Content-Type: text/xml"\
+     -H "Accept: text/xml"\
+     -d '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><CellSet><Row key="cm93MA=="><Cell column="ZmFtaWx5Mzpjb2x1bW4x">dGVzdA==</Cell></Row><Row key="cm93MQ=="><Cell column="ZmFtaWx5NDpjb2x1bW4x">dGVzdA==</Cell></Row></CellSet>'\
+     -X POST 'https://localhost:8443/gateway/sandbox/hbase/table2/false-row-key'
+
+#### Get all data from table
+
+Set Accept Header to "text/plain", "text/xml", "application/json" or 
"application/x-protobuf"
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/table1/*'
+
+#### Execute cell or row query
+
+Set Accept Header to "text/plain", "text/xml", "application/json" or 
"application/x-protobuf"
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/table1/row1/family1:col1'
+
+#### Delete entire row from table
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X DELETE 'https://localhost:8443/gateway/sandbox/hbase/table2/row0'
+
+#### Delete column family from row
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X DELETE 'https://localhost:8443/gateway/sandbox/hbase/table2/row0/family3'
+
+#### Delete specific column from row
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X DELETE 'https://localhost:8443/gateway/sandbox/hbase/table2/row0/family3:column1'
+
+#### Create scanner
+
+The scanner URL will be returned in the Location response header.
+
+    curl -ik -u guest:guest-password\
+     -H "Content-Type: text/xml"\
+     -d '<Scanner batch="1"/>'\
+     -X PUT 'https://localhost:8443/gateway/sandbox/hbase/table1/scanner'
+
+#### Get the values of the next cells found by the scanner
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: application/json"\
+     -X GET 'https://localhost:8443/gateway/sandbox/hbase/table1/scanner/13705290446328cff5ed'
+
+#### Delete scanner
+
+    curl -ik -u guest:guest-password\
+     -H "Accept: text/xml"\
+     -X DELETE 'https://localhost:8443/gateway/sandbox/hbase/table1/scanner/13705290446328cff5ed'
+
+#### Delete table
+
+    curl -ik -u guest:guest-password\
+     -X DELETE 'https://localhost:8443/gateway/sandbox/hbase/table1/schema'
+
+
+### HBase REST HA ###
+
+Please look at #[Default Service HA support]
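+
+Following the pattern described in that section, a sketch of the topology changes for HBase might look like the example below; the host names and ports are placeholders for your own HBase REST servers, and the service role follows the `{SERVICE}` naming used in the Default Service HA section.
+
+    <provider>
+         <role>ha</role>
+         <name>HaProvider</name>
+         <enabled>true</enabled>
+         <param>
+             <name>HBASE</name>
+             <value>maxFailoverAttempts=3;failoverSleep=1000;enabled=true</value>
+         </param>
+    </provider>
+
+    <service>
+        <role>HBASE</role>
+        <url>http://hbase-rest-host1:8080</url>
+        <url>http://hbase-rest-host2:8080</url>
+    </service>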
+

