Added: knox/trunk/books/1.4.0/config_knox_token.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_knox_token.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_knox_token.md (added)
+++ knox/trunk/books/1.4.0/config_knox_token.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,51 @@
+## KnoxToken Configuration
+
+### Introduction
+---
+
+The Knox Token Service enables clients to acquire the same JWT token that is used for KnoxSSO WebSSO flows with UIs, so that it can also be used for accessing REST APIs. By acquiring the token and setting it as a Bearer token on a request, a client is able to access REST APIs that are protected with the JWTProvider federation provider.
+
+This section describes the overall setup requirements and options for the KnoxToken service.
+
+### KnoxToken service
+The KnoxToken service can be configured in any topology and tailored to issue tokens to authenticated users and to constrain the usage of those tokens in a number of ways.
+
+    <service>
+       <role>KNOXTOKEN</role>
+       <param>
+          <name>knox.token.ttl</name>
+          <value>36000000</value>
+       </param>
+       <param>
+          <name>knox.token.audiences</name>
+          <value>tokenbased</value>
+       </param>
+       <param>
+          <name>knox.token.target.url</name>
+          <value>https://localhost:8443/gateway/tokenbased</value>
+       </param>
+    </service>
+
+#### KnoxToken Configuration Parameters
+
+Parameter                     | Description | Default
+------------------------------|-------------|--------
+knox.token.ttl                | The lifespan of the token in milliseconds. Once it expires, a new token must be acquired from the KnoxToken service. The 36000000 in the topology above gives you 10 hours. | 30000 (30 seconds)
+knox.token.audiences          | A comma-separated list of audiences to add to the JWT token. This is used to ensure that a participating application receiving the token knows that the token was intended for use with that application. It is optional. If an endpoint has expected audiences and they are not present, the token must be rejected. If the token has audiences and the endpoint expects none, the token is accepted. | empty
+knox.token.target.url         | An optional configuration parameter indicating the intended endpoint for which the token may be used. The KnoxShell token credential collector can pull this URL from a knoxtokencache file for use in scripts, which eliminates the need to prompt for or hardcode endpoints in your scripts. | n/a
+
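+Since the issued token is a standard JWT, its claims (including the `aud` audience set by `knox.token.audiences` and the `exp` expiry derived from `knox.token.ttl`) can be inspected by base64-decoding the payload segment. A minimal Python sketch, using the payload of the sample token shown later in this section and performing no signature verification:

```python
import base64
import json

def jwt_claims(token):
    """Decode the claims segment of a JWT (no signature verification)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Header and claims segments of the sample token issued by the KnoxToken
# service; the signature segment is irrelevant for claim inspection.
token = ("eyJhbGciOiJSUzI1NiJ9"
         ".eyJzdWIiOiJndWVzdCIsImF1ZCI6InRva2VuYmFzZWQiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNDg5OTQyMTg4fQ"
         ".signature-not-verified")
claims = jwt_claims(token)
print(claims)  # {'sub': 'guest', 'aud': 'tokenbased', 'iss': 'KNOXSSO', 'exp': 1489942188}
```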
+Adding the KnoxToken configuration shown above to a topology that is protected with the ShiroProvider is a simple and effective way to expose an endpoint from which a Knox token can be requested. Once acquired, the token may be used to access resources at the intended endpoints until it expires.
+
+The following curl command can be used to acquire a token from the Knox Token 
service as configured in the sandbox topology:
+
+    curl -ivku guest:guest-password https://localhost:8443/gateway/sandbox/knoxtoken/api/v1/token
+    
+Resulting in a JSON response that contains the token, the expiration and the 
optional target endpoint:
+
+    `{"access_token":"eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCIsImF1ZCI6InRva2VuYmFzZWQiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNDg5OTQyMTg4fQ.bcqSK7zMnABEM_HVsm3oWNDrQ_ei7PcMI4AtZEERY9LaPo9dzugOg3PA5JH2BRF-lXM3tuEYuZPaZVf8PenzjtBbuQsCg9VVImuu2r1YNVJlcTQ7OV-eW50L6OTI0uZfyrFwX6C7jVhf7d7YR1NNxs4eVbXpS1TZ5fDIRSfU3MU","target_url":"https://localhost:8443/gateway/tokenbased","token_type":"Bearer","expires_in":1489942188233}`
+
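+In a script, this response can be pulled apart with any JSON parser to build the Authorization header and check the expiry. A minimal Python sketch, with the access_token shortened here for readability; note that judging by the sample value, `expires_in` is an absolute expiry time in epoch milliseconds rather than a duration:

```python
import json

# Sample response body from the KnoxToken service (access_token shortened;
# the real value is the full JWT shown above).
body = ('{"access_token":"eyJhbGciOiJSUzI1NiJ9.shortened.example",'
        '"target_url":"https://localhost:8443/gateway/tokenbased",'
        '"token_type":"Bearer","expires_in":1489942188233}')

resp = json.loads(body)

# Build the Authorization header value used when calling protected endpoints.
authorization = f"{resp['token_type']} {resp['access_token']}"

# expires_in appears to be an absolute expiry time in epoch milliseconds.
expires_at_s = resp["expires_in"] / 1000
print(authorization.split(".")[0])  # Bearer eyJhbGciOiJSUzI1NiJ9
```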
+The following curl example shows how to add a bearer token to an Authorization 
header:
+
+    curl -ivk -H "Authorization: Bearer eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJndWVzdCIsImF1ZCI6InRva2VuYmFzZWQiLCJpc3MiOiJLTk9YU1NPIiwiZXhwIjoxNDg5OTQyMTg4fQ.bcqSK7zMnABEM_HVsm3oWNDrQ_ei7PcMI4AtZEERY9LaPo9dzugOg3PA5JH2BRF-lXM3tuEYuZPaZVf8PenzjtBbuQsCg9VVImuu2r1YNVJlcTQ7OV-eW50L6OTI0uZfyrFwX6C7jVhf7d7YR1NNxs4eVbXpS1TZ5fDIRSfU3MU" https://localhost:8443/gateway/tokenbased/webhdfs/v1/tmp?op=LISTSTATUS
+
+See the Client Details documentation for the KnoxShell init, list and destroy commands, which leverage this token service for CLI sessions.
\ No newline at end of file

Added: knox/trunk/books/1.4.0/config_ldap_authc_cache.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_ldap_authc_cache.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_ldap_authc_cache.md (added)
+++ knox/trunk/books/1.4.0/config_ldap_authc_cache.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,211 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Authentication Caching ###
+
+Knox can be configured to cache LDAP authentication information. Knox leverages Shiro's built-in caching mechanisms and has been tested with Shiro's EhCache cache manager implementation.
+
+The following provider snippet demonstrates how to turn on caching using the ShiroProvider. In addition to using `org.apache.knox.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration and setting up the cache, you *must* set the flag for enabling authentication caching to true. Please see the property `main.ldapRealm.authenticationCachingEnabled` below.
+
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.cacheManager</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.authenticationCachingEnabled</name>
+            <value>true</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>guest-password</value>
+        </param>
+        <param>
+            <name>urls./**</name>
+            <value>authcBasic</value>
+        </param>
+    </provider>
+
+
+### Trying out caching ###
+
+Knox bundles a template topology file that can be used to try out the caching functionality.
+The template file, located under `{GATEWAY_HOME}/templates`, is `sandbox.knoxrealm.ehcache.xml`.
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm.ehcache.xml conf/topologies/sandbox.xml
+    bin/ldap.sh start
+    bin/gateway.sh start
+
+The following call to WebHDFS should report: `{"Path":"/user/tom"}`
+
+    curl -i -v -k -u tom:tom-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+To see the cache working, LDAP can now be shut down and the user will still authenticate successfully.
+
+    bin/ldap.sh stop
+
+and then the following call should still return successfully, as it did earlier.
+
+    curl -i -v -k -u tom:tom-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### Advanced Caching Config ####
+
+By default, the EhCache support in Shiro includes an ehcache.xml on its classpath, which is the following:
+    <ehcache name="knox-YOUR_TOPOLOGY_NAME">
+
+        <!-- Sets the path to the directory where cache .data files are 
created.
+
+             If the path is a Java System Property it is replaced by
+             its value in the running VM. The following properties are 
translated:
+
+                user.home - User's home directory
+                user.dir - User's current working directory
+                java.io.tmpdir - Default temp file path
+        -->
+        <diskStore path="java.io.tmpdir/shiro-ehcache"/>
+
+
+        <!--Default Cache configuration. These will be applied to caches programmatically created through
+        the CacheManager.
+
+        The following attributes are required:
+
+        maxElementsInMemory            - Sets the maximum number of objects 
that will be created in memory
+        eternal                        - Sets whether elements are eternal. If 
eternal,  timeouts are ignored and the
+                                         element is never expired.
+        overflowToDisk                 - Sets whether elements can overflow to 
disk when the in-memory cache
+                                         has reached the maxInMemory limit.
+
+        The following attributes are optional:
+        timeToIdleSeconds              - Sets the time to idle for an element 
before it expires.
+                                         i.e. The maximum amount of time 
between accesses before an element expires
+                                         Is only used if the element is not 
eternal.
+                                         Optional attribute. A value of 0 
means that an Element can idle for infinity.
+                                         The default value is 0.
+        timeToLiveSeconds              - Sets the time to live for an element 
before it expires.
+                                         i.e. The maximum time between 
creation time and when an element expires.
+                                         Is only used if the element is not 
eternal.
+                                         Optional attribute. A value of 0 means that an Element can live for infinity.
+                                         The default value is 0.
+        diskPersistent                 - Whether the disk store persists 
between restarts of the Virtual Machine.
+                                         The default value is false.
+        diskExpiryThreadIntervalSeconds- The number of seconds between runs of 
the disk expiry thread. The default value
+                                         is 120 seconds.
+        memoryStoreEvictionPolicy      - Policy would be enforced upon 
reaching the maxElementsInMemory limit. Default
+                                         policy is Least Recently Used 
(specified as LRU). Other policies available -
+                                         First In First Out (specified as 
FIFO) and Less Frequently Used
+                                         (specified as LFU)
+        -->
+
+        <defaultCache
+                maxElementsInMemory="10000"
+                eternal="false"
+                timeToIdleSeconds="120"
+                timeToLiveSeconds="120"
+                overflowToDisk="false"
+                diskPersistent="false"
+                diskExpiryThreadIntervalSeconds="120"
+                />
+
+        <!-- We want eternal="true" and no timeToIdle or timeToLive settings 
because Shiro manages session
+             expirations explicitly.  If we set it to false and then set 
corresponding timeToIdle and timeToLive properties,
+             ehcache would evict sessions without Shiro's knowledge, which 
would cause many problems
+            (e.g. "My Shiro session timeout is 30 minutes - why isn't a 
session available after 2 minutes?"
+                   Answer - ehcache expired it due to the timeToIdle property 
set to 120 seconds.)
+
+            diskPersistent=true since we want an enterprise session management 
feature - ability to use sessions after
+            even after a JVM restart.  -->
+        <cache name="shiro-activeSessionCache"
+               maxElementsInMemory="10000"
+               overflowToDisk="true"
+               eternal="true"
+               timeToLiveSeconds="0"
+               timeToIdleSeconds="0"
+               diskPersistent="true"
+               diskExpiryThreadIntervalSeconds="600"/>
+
+        <cache name="org.apache.shiro.realm.text.PropertiesRealm-0-accounts"
+               maxElementsInMemory="1000"
+               eternal="true"
+               overflowToDisk="true"/>
+
+    </ehcache>
+
+A custom configuration file (ehcache.xml) can be used in place of this default in order to set a specific caching configuration.
+
+To set the ehcache.xml file used by a particular topology, set the following parameter in the configuration for the ShiroProvider:
+
+    <param>
+        <name>main.cacheManager.cacheManagerConfigFile</name>
+        <value>classpath:ehcache.xml</value>
+    </param>
+
+In the above example, place the ehcache.xml file under `{GATEWAY_HOME}/conf` 
and restart the gateway server.
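+As a hypothetical illustration (not a file bundled with Knox), a minimal custom ehcache.xml that shortens the default cache lifetimes might look like the following sketch, reusing the element and attribute names from the default configuration shown above:

```xml
<ehcache name="knox-custom">
    <diskStore path="java.io.tmpdir/shiro-ehcache"/>

    <!-- Expire cached entries after 5 minutes idle / 10 minutes total -->
    <defaultCache
            maxElementsInMemory="5000"
            eternal="false"
            timeToIdleSeconds="300"
            timeToLiveSeconds="600"
            overflowToDisk="false"/>
</ehcache>
```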

Added: knox/trunk/books/1.4.0/config_ldap_group_lookup.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_ldap_group_lookup.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_ldap_group_lookup.md (added)
+++ knox/trunk/books/1.4.0/config_ldap_group_lookup.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,228 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### LDAP Group Lookup ###
+
+Knox can be configured to look up the LDAP groups that the authenticated user belongs to.
+Knox can look up both Static LDAP Groups and Dynamic LDAP Groups.
+The looked-up groups are populated as Principal(s) in the Java Subject of the authenticated user.
+Therefore, service authorization rules can be defined in terms of LDAP groups looked up from an LDAP directory.
+
+To look up the LDAP groups of an authenticated user, you have to use `org.apache.knox.gateway.shirorealm.KnoxLdapRealm` in the Shiro configuration.
+
+Below is a sample Shiro configuration snippet from a topology file that was tested for looking up LDAP groups.
+
+    <provider>
+        <role>authentication</role>
+        <name>ShiroProvider</name>
+        <enabled>true</enabled>
+        <!--
+        Session timeout in minutes. This is really an idle timeout.
+        It defaults to 30 minutes if the property value is not defined.
+        The current client authentication will expire if the client idles continuously for more than this value.
+        -->
+        <!-- defaults to: 30 minutes
+        <param>
+            <name>sessionTimeout</name>
+            <value>30</value>
+        </param>
+        -->
+
+        <!--
+          Use single KnoxLdapRealm to do authentication and ldap group look up
+        -->
+        <param>
+            <name>main.ldapRealm</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapRealm</value>
+        </param>
+        <param>
+            <name>main.ldapGroupContextFactory</name>
+            <value>org.apache.knox.gateway.shirorealm.KnoxLdapContextFactory</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory</name>
+            <value>$ldapGroupContextFactory</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            <name>main.ldapRealm.contextFactory.authenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.url</name>
+            <value>ldap://localhost:33389</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.userDnTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+
+        <param>
+            <name>main.ldapRealm.authorizationEnabled</name>
+            <!-- defaults to: false -->
+            <value>true</value>
+        </param>
+        <!-- defaults to: simple
+        <param>
+            <name>main.ldapRealm.contextFactory.systemAuthenticationMechanism</name>
+            <value>simple</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.searchBase</name>
+            <value>ou=groups,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- defaults to: groupOfNames
+        <param>
+            <name>main.ldapRealm.groupObjectClass</name>
+            <value>groupOfNames</value>
+        </param>
+        -->
+        <!-- defaults to: member
+        <param>
+            <name>main.ldapRealm.memberAttribute</name>
+            <value>member</value>
+        </param>
+        -->
+        <param>
+             <name>main.cacheManager</name>
+             <value>org.apache.shiro.cache.MemoryConstrainedCacheManager</value>
+        </param>
+        <param>
+            <name>main.securityManager.cacheManager</name>
+            <value>$cacheManager</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>uid={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <!-- The above element is the template for most LDAP servers.
+            For Active Directory, use the following instead and
+            remove the above configuration.
+        <param>
+            <name>main.ldapRealm.memberAttributeValueTemplate</name>
+            <value>cn={0},ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        -->
+        <param>
+            <name>main.ldapRealm.contextFactory.systemUsername</name>
+            <value>uid=guest,ou=people,dc=hadoop,dc=apache,dc=org</value>
+        </param>
+        <param>
+            <name>main.ldapRealm.contextFactory.systemPassword</name>
+            <value>${ALIAS=ldcSystemPassword}</value>
+        </param>
+
+        <param>
+            <name>urls./**</name> 
+            <value>authcBasic</value>
+        </param>
+
+    </provider>
+
+The configuration shown above would look up Static LDAP groups of the 
authenticated user and populate the group principals in the Java Subject 
corresponding to the authenticated user.
+
+If you want to look up Dynamic LDAP Groups instead of Static LDAP Groups, you 
would have to specify groupObjectClass and memberAttribute params as shown 
below:
+
+    <param>
+        <name>main.ldapRealm.groupObjectClass</name>
+        <value>groupOfUrls</value>
+    </param>
+    <param>
+        <name>main.ldapRealm.memberAttribute</name>
+        <value>memberUrl</value>
+    </param>
+
+### Template topology files and LDIF files to try out LDAP Group Look up ###
+
+Knox bundles some template topology files and LDIF files that you can use to try out and test LDAP Group Lookup and the associated authorization ACLs.
+All these template files are located under `{GATEWAY_HOME}/templates`.
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from the same directory ####
+
+* topology file: sandbox.knoxrealm1.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm1.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of the group "analyst" and the authorization provider states that the user should be a member of the group "analyst":
+
+    curl -i -v -k -u guest:guest-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/sam"}`, as sam is a member of the group "analyst" and the authorization provider states that the user should be a member of the group "analyst":
+
+    curl -i -v -k -u sam:sam-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+
+#### LDAP Static Group Lookup Templates, authentication and group lookup from different directories ####
+
+* topology file: sandbox.knoxrealm2.xml
+* ldif file: users.ldapgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealm2.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of the group "analyst" and the authorization provider states that the user should be a member of the group "analyst":
+
+    curl -i -v -k -u guest:guest-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/sam"}`, as sam is a member of the group "analyst" and the authorization provider states that the user should be a member of the group "analyst":
+
+    curl -i -v -k -u sam:sam-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+#### LDAP Dynamic Group Lookup Templates, authentication and dynamic group lookup from the same directory ####
+
+* topology file: sandbox.knoxrealmdg.xml
+* ldif file: users.ldapdynamicgroups.ldif
+
+To try this out
+
+    cd {GATEWAY_HOME}
+    cp templates/sandbox.knoxrealmdg.xml conf/topologies/sandbox.xml
+    cp templates/users.ldapdynamicgroups.ldif conf/users.ldif
+    java -jar bin/ldap.jar conf
+    java -Dsandbox.ldcSystemPassword=guest-password -jar bin/gateway.jar -persist-master
+
+Please note that users.ldapdynamicgroups.ldif also loads the schema necessary to create dynamic groups in Apache DS.
+
+The following call to WebHDFS should report `HTTP/1.1 401 Unauthorized`, as guest is not a member of the dynamic group "directors" and the authorization provider states that the user should be a member of the group "directors":
+
+    curl -i -v -k -u guest:guest-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+
+The following call to WebHDFS should report `{"Path":"/user/bob"}`, as bob is a member of the dynamic group "directors" and the authorization provider states that the user should be a member of the group "directors":
+
+    curl -i -v -k -u bob:bob-password -X GET https://localhost:8443/gateway/sandbox/webhdfs/v1?op=GETHOMEDIRECTORY
+

Added: knox/trunk/books/1.4.0/config_metrics.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_metrics.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_metrics.md (added)
+++ knox/trunk/books/1.4.0/config_metrics.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,49 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Metrics ###
+
+See the KIP for details on the implementation of metrics available in the 
gateway.
+
+[Metrics KIP](https://cwiki.apache.org/confluence/display/KNOX/KIP-2+Metrics)
+
+#### Metrics Configuration ####
+
+Metrics configuration can be done in `gateway-site.xml`.
+
+The initial configuration is mainly for turning metrics collection on or off, and for enabling reporters along with their required configuration.
+
+The two initial reporters implemented are JMX and Graphite.
+
+    gateway.metrics.enabled 
+
+Turns metrics collection on or off. The default is `true`.
+ 
+    gateway.jmx.metrics.reporting.enabled
+
+Turns the JMX reporter on or off. The default is `true`.
+
+    gateway.graphite.metrics.reporting.enabled
+
+Turns the Graphite reporter on or off. The default is `false`.
+
+    gateway.graphite.metrics.reporting.host
+    gateway.graphite.metrics.reporting.port
+    gateway.graphite.metrics.reporting.frequency
+
+The above set the host, port, and reporting frequency (in seconds) for the Graphite reporter.
+
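+Putting these properties together, a `gateway-site.xml` fragment enabling the Graphite reporter might look like the following sketch (the host, port, and frequency values are illustrative):

```xml
<property>
    <name>gateway.graphite.metrics.reporting.enabled</name>
    <value>true</value>
</property>
<property>
    <name>gateway.graphite.metrics.reporting.host</name>
    <value>graphite.example.com</value>
</property>
<property>
    <name>gateway.graphite.metrics.reporting.port</name>
    <value>2003</value>
</property>
<property>
    <name>gateway.graphite.metrics.reporting.frequency</name>
    <value>60</value>
</property>
```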

Added: knox/trunk/books/1.4.0/config_mutual_authentication_ssl.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_mutual_authentication_ssl.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_mutual_authentication_ssl.md (added)
+++ knox/trunk/books/1.4.0/config_mutual_authentication_ssl.md Tue Jul 23 
21:27:15 2019
@@ -0,0 +1,43 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Mutual Authentication with SSL ###
+
+To establish a stronger trust relationship between client and server, we 
provide mutual authentication with SSL via client certs. This is particularly 
useful in providing additional validation for Preauthenticated SSO with HTTP 
Headers. Rather than just IP address validation, connections will only be 
accepted by Knox from clients presenting trusted certificates.
+
+This behavior is configured for the entire gateway instance within the 
gateway-site.xml file. All topologies deployed within the gateway instance with 
mutual authentication enabled will require incoming connections to present 
trusted client certificates during the SSL handshake. Otherwise, connections 
will be refused.
+
+The following table describes the configuration elements related to mutual 
authentication and their defaults:
+
+| Configuration Element             | Description |
+|-----------------------------------|-------------|
+| gateway.client.auth.needed        | True\|False - indicates the need for client authentication. Default is False. |
+| gateway.truststore.path           | Fully qualified path to the trust store to use. Default is the keystore used to hold the Gateway's identity. See `gateway.tls.keystore.path`. |
+| gateway.truststore.type           | Keystore type of the trust store. Default is JKS. |
+| gateway.truststore.password.alias | Alias for the password to the trust store. |
+| gateway.trust.all.certs           | Allows all certificates to be trusted. Default is false. |
+
+When client authentication is enabled with only `gateway.client.auth.needed`, the keystore identified by `gateway.tls.keystore.path` is used. By default this is `{GATEWAY_HOME}/data/security/keystores/gateway.jks`.
+This is the identity keystore for the server, which can also be used as the truststore.
+To use a dedicated truststore, `gateway.truststore.path` may be set to the absolute path of the truststore file.
+The type of the truststore file should be set using `gateway.truststore.type`; otherwise, JKS is assumed.
+If the truststore password is different from the Gateway's master secret, it can be set using
+
+    knoxcli.sh create-alias {password-alias} --value {pwd}
+
+The password alias name (`{password-alias}`) is set using `gateway.truststore.password.alias`; otherwise, the alias name "gateway-truststore-password" is used.
+If no password is found using the provided (or default) alias name, the Gateway's master secret is used.
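+As a sketch, a `gateway-site.xml` fragment enabling mutual authentication with a dedicated truststore might look like the following (the truststore path is illustrative; the alias shown is the default name described above):

```xml
<property>
    <name>gateway.client.auth.needed</name>
    <value>true</value>
</property>
<property>
    <name>gateway.truststore.path</name>
    <value>/usr/local/knox/data/security/keystores/truststore.jks</value>
</property>
<property>
    <name>gateway.truststore.type</name>
    <value>JKS</value>
</property>
<property>
    <name>gateway.truststore.password.alias</name>
    <value>gateway-truststore-password</value>
</property>
```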

Added: knox/trunk/books/1.4.0/config_pac4j_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_pac4j_provider.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_pac4j_provider.md (added)
+++ knox/trunk/books/1.4.0/config_pac4j_provider.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,197 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Pac4j Provider - CAS / OAuth / SAML / OpenID Connect ###
+
+<p align="center">
+  <img src="https://pac4j.github.io/pac4j/img/logo-knox.png" width="300" />
+</p>
+
+[pac4j](https://github.com/pac4j/pac4j) is a Java security engine to 
authenticate users, get their profiles and manage their authorizations in order 
to secure Java web applications.
+
+It supports many authentication mechanisms for UI and web services and is 
implemented by many frameworks and tools.
+
+For Knox, it is used as a federation provider to support the OAuth, CAS, SAML and OpenID Connect protocols. It must be used in association with the KnoxSSO service for SSO, and optionally with the SSOCookieProvider for access to REST APIs.
+
+
+#### Configuration ####
+##### SSO topology #####
+
+To enable SSO for REST API access through the Knox gateway, you need to 
protect your Hadoop services with the SSOCookieProvider configured to use the 
KnoxSSO service (sandbox.xml topology):
+
+    <gateway>
+      <provider>
+        <role>webappsec</role>
+        <name>WebAppSec</name>
+        <enabled>true</enabled>
+        <param>
+          <name>cors.enabled</name>
+          <value>true</value>
+        </param>
+      </provider>
+      <provider>
+        <role>federation</role>
+        <name>SSOCookieProvider</name>
+        <enabled>true</enabled>
+        <param>
+          <name>sso.authentication.provider.url</name>
+          <value>https://127.0.0.1:8443/gateway/knoxsso/api/v1/websso</value>
+        </param>
+      </provider>
+      <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+      </provider>
+    </gateway>
+
+    <service>
+      <role>NAMENODE</role>
+      <url>hdfs://localhost:8020</url>
+    </service>
+
+    ...
+
+and protect the KnoxSSO service by the pac4j provider (knoxsso.xml topology):
+
+    <gateway>
+      <provider>
+        <role>federation</role>
+        <name>pac4j</name>
+        <enabled>true</enabled>
+        <param>
+          <name>pac4j.callbackUrl</name>
+          <value>https://127.0.0.1:8443/gateway/knoxsso/api/v1/websso</value>
+        </param>
+        <param>
+          <name>cas.loginUrl</name>
+          <value>https://casserverpac4j.herokuapp.com/login</value>
+        </param>
+      </provider>
+      <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+      </provider>
+    </gateway>
+    
+    <service>
+      <role>KNOXSSO</role>
+      <param>
+        <name>knoxsso.cookie.secure.only</name>
+        <value>true</value>
+      </param>
+      <param>
+        <name>knoxsso.token.ttl</name>
+        <value>100000</value>
+      </param>
+      <param>
+         <name>knoxsso.redirect.whitelist.regex</name>
+         
<value>^https?:\/\/(localhost|127\.0\.0\.1|0:0:0:0:0:0:0:1|::1):[0-9].*$</value>
+      </param>
+    </service>
+
+Notice that the pac4j callback URL is the KnoxSSO URL (`pac4j.callbackUrl` 
parameter). An additional `pac4j.cookie.domain.suffix` parameter allows you to 
define the domain suffix for the pac4j cookies.
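+For example, the cookie domain suffix could be set alongside the callback URL with a param like the following (the domain value is an illustrative placeholder):
+
+    <param>
+      <name>pac4j.cookie.domain.suffix</name>
+      <value>.example.com</value>
+    </param>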
+
+In this example, the pac4j provider is configured to authenticate users via a 
CAS server hosted at: https://casserverpac4j.herokuapp.com/login.
+
+##### Parameters #####
+
+You can define the identity provider client(s) to be used for authentication with the appropriate parameters, as described below.
+When configuring any pac4j identity provider client, the mandatory `clientName` parameter must be defined to indicate the order in which the providers should be engaged, with the first in the comma-separated list being the default. Consuming applications may select one of the configured clients with a query parameter called `client_name`. When no `client_name` is specified, the default (first) provider is selected.
+
+    <param>
+      <name>clientName</name>
+      <value>CLIENTNAME[,CLIENTNAME]</value>
+    </param>
+
+Valid client names are: `FacebookClient`, `TwitterClient`, `CasClient`, `SAML2Client` or `OidcClient`.
+
+For tests only, you can use a basic authentication where login equals password 
by defining the following configuration:
+
+    <param>
+      <name>clientName</name>
+      <value>testBasicAuth</value>
+    </param>
+
+NOTE: This is NOT a secure mechanism and must NOT be used in production 
deployments.
+
+Otherwise, you can use Facebook, Twitter, a CAS server, a SAML IdP or an 
OpenID Connect provider by using the following parameters:
+
+##### For OAuth support:
+
+Name | Value
+-----|------
+facebook.id     | Identifier of the OAuth Facebook application
+facebook.secret | Secret of the OAuth Facebook application
+facebook.scope  | Requested scope at Facebook login
+facebook.fields | Fields returned by Facebook
+twitter.id      | Identifier of the OAuth Twitter application
+twitter.secret  | Secret of the OAuth Twitter application
+
+##### For CAS support:
+
+Name | Value
+-----|------
+cas.loginUrl | Login URL of the CAS server
+cas.protocol | CAS protocol (`CAS10`, `CAS20`, `CAS20_PROXY`, `CAS30`, 
`CAS30_PROXY`, `SAML`)
+
+##### For SAML support:
+
+Name | Value
+-----|------
+saml.keystorePassword              | Password of the keystore (storepass)
+saml.privateKeyPassword            | Password for the private key (keypass)
+saml.keystorePath                  | Path of the keystore
+saml.identityProviderMetadataPath  | Path of the identity provider metadata
+saml.maximumAuthenticationLifetime | Maximum lifetime for authentication
+saml.serviceProviderEntityId       | Identifier of the service provider
+saml.serviceProviderMetadataPath   | Path of the service provider metadata
+
+> Get more details on the [pac4j 
wiki](https://github.com/pac4j/pac4j/wiki/Clients#saml-support).
+
+The SSO URL in your SAML 2 provider config will need to include a special 
query parameter that lets the pac4j provider know that the request is coming 
back from the provider rather than from a redirect from a KnoxSSO participating 
application. This query parameter is "pac4jCallback=true".
+
+This results in a URL that looks something like:
+
+    
https://hostname:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&client_name=SAML2Client
+
+This means that the SP Entity ID should also include this query parameter, as appropriate for your provider.
+Often something like the above URL is used for both the SSO URL and the SP Entity ID.
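+As a non-normative sketch, a SAML2Client could be configured in the pac4j provider with parameters like the following (the paths, passwords and entity ID are illustrative placeholders):
+
+    <param>
+      <name>clientName</name>
+      <value>SAML2Client</value>
+    </param>
+    <param>
+      <name>saml.identityProviderMetadataPath</name>
+      <value>/etc/knox/conf/idp-metadata.xml</value>
+    </param>
+    <param>
+      <name>saml.serviceProviderMetadataPath</name>
+      <value>/etc/knox/conf/sp-metadata.xml</value>
+    </param>
+    <param>
+      <name>saml.serviceProviderEntityId</name>
+      <value>https://hostname:8443/gateway/knoxsso/api/v1/websso?pac4jCallback=true&amp;client_name=SAML2Client</value>
+    </param>
+    <param>
+      <name>saml.keystorePath</name>
+      <value>/etc/knox/conf/samlKeystore.jks</value>
+    </param>
+    <param>
+      <name>saml.keystorePassword</name>
+      <value>keystore-password</value>
+    </param>
+    <param>
+      <name>saml.privateKeyPassword</name>
+      <value>key-password</value>
+    </param>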
+
+##### For OpenID Connect support:
+
+Name | Value
+-----|------
+oidc.id                    | Identifier of the OpenID Connect provider
+oidc.secret                | Secret of the OpenID Connect provider
+oidc.discoveryUri          | Discovery URI of the OpenID Connect provider
+oidc.useNonce              | Whether to use nonce during login process
+oidc.preferredJwsAlgorithm | Preferred JWS algorithm
+oidc.maxClockSkew          | Max clock skew during login process
+oidc.customParamKey1       | Key of the first custom parameter
+oidc.customParamValue1     | Value of the first custom parameter
+oidc.customParamKey2       | Key of the second custom parameter
+oidc.customParamValue2     | Value of the second custom parameter
+
+> Get more details on the [pac4j 
wiki](https://github.com/pac4j/pac4j/wiki/Clients#openid-connect-support).
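+As a non-normative sketch, an OidcClient could be configured with parameters like the following (the id, secret and discovery URI are illustrative placeholders):
+
+    <param>
+      <name>clientName</name>
+      <value>OidcClient</value>
+    </param>
+    <param>
+      <name>oidc.id</name>
+      <value>knox-client</value>
+    </param>
+    <param>
+      <name>oidc.secret</name>
+      <value>knox-client-secret</value>
+    </param>
+    <param>
+      <name>oidc.discoveryUri</name>
+      <value>https://idp.example.com/.well-known/openid-configuration</value>
+    </param>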
+
+You can even define several identity providers at the same time; the first is chosen by default unless a `client_name` parameter specifies one of the others (`FacebookClient`, `TwitterClient`, `CasClient`, `SAML2Client` or `OidcClient`).
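+For example, offering both SAML and OpenID Connect with SAML as the default might use a `clientName` like the following (the client-specific parameters are omitted here):
+
+    <param>
+      <name>clientName</name>
+      <value>SAML2Client,OidcClient</value>
+    </param>
+
+A request carrying the query parameter `client_name=OidcClient` would then select the OpenID Connect client instead of the SAML default.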
+
+##### UI invocation
+
+In a browser, when calling your Hadoop service (for example: `https://127.0.0.1:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS`), you are redirected to the identity provider for login. Then, after a successful authentication, you are redirected back to your originally requested URL and your KnoxSSO session is initialized.

Added: knox/trunk/books/1.4.0/config_pam_authn.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_pam_authn.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_pam_authn.md (added)
+++ knox/trunk/books/1.4.0/config_pam_authn.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,98 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### PAM based Authentication ###
+
+A large number of pluggable authentication modules are available on many Linux installations, and from vendors of authentication solutions, that are great to leverage for authenticating access to Hadoop through the Knox Gateway. In addition to the LDAP support described in this guide, the ShiroProvider also includes support for PAM based authentication on Unix based systems.
+
+This opens up the integration possibilities to many other readily available 
authentication mechanisms as well as other implementations for LDAP based 
authentication. More flexibility may be available through various PAM modules 
for group lookup, more complicated LDAP schemas or other areas where the 
KnoxLdapRealm is not sufficient.
+
+#### Configuration ####
+##### Overview #####
+The primary motivation for leveraging PAM based authentication is the ability to use the configuration provided by existing PAM modules that are available in a system's `/etc/pam.d/` directory. Therefore, the solution provided here is as simple as possible, so that the PAM module config itself can be the source of truth. What does need to be configured is the use of PAM, through the `main.pamRealm` parameter and the KnoxPamRealm classname, and the particular PAM service to use, through the `main.pamRealm.service` parameter; in the example below it is `login`.
+
+    <provider> 
+       <role>authentication</role> 
+       <name>ShiroProvider</name> 
+       <enabled>true</enabled> 
+       <param> 
+            <name>sessionTimeout</name> 
+            <value>30</value>
+        </param>                                              
+        <param>
+            <name>main.pamRealm</name> 
+            <value>org.apache.knox.gateway.shirorealm.KnoxPamRealm</value>
+        </param> 
+        <param>                                                    
+           <name>main.pamRealm.service</name> 
+           <value>login</value> 
+        </param>
+        <param>                                                    
+           <name>urls./**</name> 
+           <value>authcBasic</value> 
+       </param>
+    </provider>
+  
+
+As a non-normative example of a PAM config file, see the following `/etc/pam.d/login` from a macOS system:
+
+    # login: auth account password session
+    auth       optional       pam_krb5.so use_kcminit
+    auth       optional       pam_ntlm.so try_first_pass
+    auth       optional       pam_mount.so try_first_pass
+    auth       required       pam_opendirectory.so try_first_pass
+    account    required       pam_nologin.so
+    account    required       pam_opendirectory.so
+    password   required       pam_opendirectory.so
+    session    required       pam_launchd.so
+    session    required       pam_uwtmp.so
+    session    optional       pam_mount.so
+
+The first four fields are: service-name, module-type, control-flag and 
module-filename. The fifth and greater fields are for optional arguments that 
are specific to the individual authentication modules.
+
+The second field in the configuration file is the module-type; it indicates which of the four PAM management services the corresponding module will provide to the application. Our sample configuration file refers to all four groups:
+
+* auth: identifies the PAMs that are invoked when the application calls 
pam_authenticate() and pam_setcred().
+* account: maps to the pam_acct_mgmt() function.
+* session: indicates the mapping for the pam_open_session() and 
pam_close_session() calls.
+* password: refers to the pam_chauthtok() function.
+
+Generally, you only need to supply mappings for the functions that are needed 
by a specific application. For example, the standard password changing 
application, passwd, only requires a password group entry; any other entries 
are ignored.
+
+The third field indicates what action is to be taken based on the success or 
failure of the corresponding module. Choices for tokens to fill this field are:
+
+* requisite: Failure instantly returns control to the application indicating 
the nature of the first module failure.
+* required: All these modules are required to succeed for libpam to return 
success to the application.
+* sufficient: Given that all preceding modules have succeeded, the success of 
this module leads to an immediate and successful return to the application 
(failure of this module is ignored).
+* optional: The success or failure of this module is generally not recorded.
+
+The fourth field contains the name of the loadable module, pam_*.so. For the 
sake of readability, the full pathname of each module is not given. Before 
Linux-PAM-0.56 was released, there was no support for a default 
authentication-module directory. If you have an earlier version of Linux-PAM 
installed, you will have to specify the full path for each of the modules. Your 
distribution most likely placed these modules exclusively in one of the 
following directories: /lib/security/ or /usr/lib/security/.
+
+Also, find below a non-normative example of a PAM config file (`/etc/pam.d/login`) for Ubuntu:
+
+    #%PAM-1.0
+    
+    auth       required     pam_sepermit.so
+    # pam_selinux.so close should be the first session rule
+    session    required     pam_selinux.so close
+    session    required     pam_loginuid.so
+    # pam_selinux.so open should only be followed by sessions to be executed 
in the user context
+    session    required     pam_selinux.so open env_params
+    session    optional     pam_keyinit.so force revoke
+    
+    session    required     pam_env.so user_readenv=1 
envfile=/etc/default/locale
+    @include password-auth

Added: knox/trunk/books/1.4.0/config_preauth_sso_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_preauth_sso_provider.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_preauth_sso_provider.md (added)
+++ knox/trunk/books/1.4.0/config_preauth_sso_provider.md Tue Jul 23 21:27:15 
2019
@@ -0,0 +1,87 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Preauthenticated SSO Provider ###
+
+A number of SSO solutions provide mechanisms for federating an authenticated 
identity across applications. These mechanisms are at times simple HTTP Header 
type tokens that can be used to propagate the identity across process 
boundaries.
+
+Knox Gateway needs a pluggable mechanism for consuming these tokens and 
federating the asserted identity through an interaction with the Hadoop 
cluster. 
+
+**CAUTION: The use of this provider requires network security and identity provider configuration and deployment that do not allow requests directly to the Knox gateway. Otherwise, this provider will leave the gateway exposed to identity spoofing.**
+
+#### Configuration ####
+##### Overview #####
+This provider was designed for use with identity solutions such as those provided by CA's SiteMinder and IBM's Tivoli Access Manager. While direct testing with these products has not been done, there has been extensive unit and functional testing to ensure that it should work with such providers.
+
+The HeaderPreAuth provider is configured within the topology file and has a 
minimal configuration that assumes SM_USER for CA SiteMinder. The following 
example is the bare minimum configuration for SiteMinder (with no IP address 
validation).
+
+    <provider>
+        <role>federation</role>
+        <name>HeaderPreAuth</name>
+        <enabled>true</enabled>
+    </provider>
+
+The following table describes the configuration options for the web app 
security provider:
+
+##### Descriptions #####
+
+Name | Description | Default
+---------|-----------|--------
+preauth.validation.method   | Optional parameter that indicates the type(s) of trust validation to perform on incoming requests. One or more comma-separated validators may be defined in this property. If there are multiple validators, Apache Knox invokes them in the configured sequence. This works like a short-circuit AND operation: if any validator fails, Knox performs no further validation and immediately returns an overall failure. Possible values are: null, preauth.default.validation, preauth.ip.validation, or a custom validator (details described in [Custom Validator](dev-guide.html#Validator)). Failure results in a 403 forbidden HTTP status response. | null - which means 'preauth.default.validation', i.e. no validation will be performed, on the assumption that the network security and external authentication system are sufficient.
+preauth.ip.addresses        | Optional parameter that indicates the list of trusted IP addresses. When preauth.ip.validation is indicated as the validation method, this parameter must be provided to indicate the trusted IP address set. Wildcarded IPs may be used to indicate subnet level trust, e.g. 127.0.* | null - which means that no validation will be performed.
+preauth.custom.header       | Required parameter for indicating a custom header to use for extracting the preauthenticated principal. The value extracted from this header is used as the PrimaryPrincipal within the established Subject. An incoming request that is missing the configured header will be refused with a 401 unauthorized HTTP status. | SM_USER for the SiteMinder use case
+preauth.custom.group.header | Optional parameter for indicating an HTTP header name that contains a comma separated list of groups. These are added to the authenticated Subject as group principals. A missing group header will result in no groups being extracted from the incoming request, and a log entry, but processing will continue. | null - which means that no group principals will be extracted from the request and added to the established Subject.
+
+NOTE: Mutual authentication can be used to establish a strong trust 
relationship between clients and servers while using the Preauthenticated SSO 
provider. See the configuration for Mutual Authentication with SSL in this 
document.
+
+##### Configuration for SiteMinder
+The following is an example of a configuration of the preauthenticated SSO 
provider that leverages the default SM_USER header name - assuming use with CA 
SiteMinder. It further configures the validation based on the IP address from 
the incoming request.
+
+    <provider>
+        <role>federation</role>
+        <name>HeaderPreAuth</name>
+        <enabled>true</enabled>
+        
<param><name>preauth.validation.method</name><value>preauth.ip.validation</value></param>
+        
<param><name>preauth.ip.addresses</name><value>127.0.0.2,127.0.0.1</value></param>
+    </provider>
+
+##### REST Invocation for SiteMinder
+The following curl command can be used to request a directory listing from 
HDFS while passing in the expected header SM_USER.
+
+    curl -k -i --header "SM_USER: guest" -v 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+Omitting the `--header "SM_USER: guest"` above will result in a rejected 
request.
+
+##### Configuration for IBM Tivoli AM
+As an example for configuring the preauthenticated SSO provider for another 
SSO provider, the following illustrates the values used for IBM's Tivoli Access 
Manager:
+
+    <provider>
+        <role>federation</role>
+        <name>HeaderPreAuth</name>
+        <enabled>true</enabled>
+        <param><name>preauth.custom.header</name><value>iv_user</value></param>
+        
<param><name>preauth.custom.group.header</name><value>iv_group</value></param>
+        
<param><name>preauth.validation.method</name><value>preauth.ip.validation</value></param>
+        
<param><name>preauth.ip.addresses</name><value>127.0.0.2,127.0.0.1</value></param>
+    </provider>
+
+##### REST Invocation for Tivoli AM
+The following curl command can be used to request a directory listing from 
HDFS while passing in the expected headers of iv_user and iv_group. Note that 
the iv_group value in this command matches the expected ACL for webhdfs in the 
above topology file. Changing this from "admin" to "admin2" should result in a 
401 unauthorized response.
+
+    curl -k -i --header "iv_user: guest" --header "iv_group: admin" -v 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+Omitting the `--header "iv_user: guest"` above will result in a rejected 
request.

Added: knox/trunk/books/1.4.0/config_sandbox.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_sandbox.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_sandbox.md (added)
+++ knox/trunk/books/1.4.0/config_sandbox.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,51 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## Sandbox Configuration ##
+
+### Sandbox 2.x Configuration ###
+
+TODO
+
+### Sandbox 1.x Configuration ###
+
+TODO - Update this section to use hostmap if that simplifies things.
+
+This version of the Apache Knox Gateway is tested against [Hortonworks Sandbox 
1.x][sandbox]
+
+Currently there is an issue with Sandbox that prevents it from being easily 
used with the gateway.
+In order to correct the issue, you can use the commands below to log in to the Sandbox VM and modify the configuration.
+This assumes that the name `sandbox` is set up to resolve to the Sandbox VM.
+It may be necessary to use the IP address of the Sandbox VM instead.
+*This is frequently but not always `192.168.56.101`.*
+
+    ssh root@sandbox
+    cp /usr/lib/hadoop/conf/hdfs-site.xml 
/usr/lib/hadoop/conf/hdfs-site.xml.orig
+    sed -e s/localhost/sandbox/ /usr/lib/hadoop/conf/hdfs-site.xml.orig > 
/usr/lib/hadoop/conf/hdfs-site.xml
+    shutdown -r now
+
+In addition, to make it easy to follow along with the samples for the gateway, you can configure your local system to resolve the address of the Sandbox by the names `vm` and `sandbox`.
+The IP address that is shown below should be that of the Sandbox VM as it is 
known on your system.
+*This will likely, but not always, be `192.168.56.101`.*
+
+On Linux or Macintosh systems add a line like this to the end of the file 
`/etc/hosts` on your local machine, *not the Sandbox VM*.
+_Note: The character between the 192.168.56.101 and vm below is a *tab* 
character._
+
+    192.168.56.101     vm sandbox
+
+On Windows systems a similar mechanism can be used. On recent versions of Windows the file that should be modified is `%systemroot%\system32\drivers\etc\hosts`.

Added: knox/trunk/books/1.4.0/config_sso_cookie_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_sso_cookie_provider.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_sso_cookie_provider.md (added)
+++ knox/trunk/books/1.4.0/config_sso_cookie_provider.md Tue Jul 23 21:27:15 
2019
@@ -0,0 +1,106 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### SSO Cookie Provider ###
+
+#### Overview ####
+The SSOCookieProvider enables the federation of the authentication event that occurred through KnoxSSO. KnoxSSO is a typical SP-initiated WebSSO mechanism that sets a cookie that is presented by browsers to participating applications and cryptographically verified.
+
+Knox Gateway needs a pluggable mechanism for consuming these cookies and 
federating the KnoxSSO authentication event as an asserted identity in its 
interaction with the Hadoop cluster for REST API invocations. This provider is 
useful when an application that is integrated with KnoxSSO for authentication 
also consumes REST APIs through the Knox Gateway.
+
+The WebSSO flow behaves like this:
+
+* The SSOCookieProvider checks for the `hadoop-jwt` cookie and, in its absence, redirects to the configured SSO provider URL (the KnoxSSO endpoint)
+* The configured provider on the KnoxSSO endpoint challenges the user in a provider-specific way (presents a form, redirects to a SAML IdP, etc.)
+* The authentication provider on KnoxSSO validates the identity of the user through credentials/tokens
+* The WebSSO service exchanges the normalized Java Subject into a JWT token and sets it on the response as a cookie named `hadoop-jwt`
+* The WebSSO service then redirects the user agent back to the originally requested URL; subsequent invocations of the requested Knox service will find the cookie in the incoming request and will not need to engage the WebSSO service again until it expires
+
+#### Configuration ####
+##### sandbox.xml Topology Example
+Configuring one of the cluster topologies to use the SSOCookieProvider instead of the out-of-the-box ShiroProvider would look something like the following:
+
+```
+<?xml version="1.0" encoding="utf-8"?>
+<topology>
+  <gateway>
+    <provider>
+        <role>federation</role>
+        <name>SSOCookieProvider</name>
+        <enabled>true</enabled>
+        <param>
+            <name>sso.authentication.provider.url</name>
+            <value>https://localhost:9443/gateway/idp/api/v1/websso</value>
+        </param>
+    </provider>
+    <provider>
+        <role>identity-assertion</role>
+        <name>Default</name>
+        <enabled>true</enabled>
+    </provider>
+  </gateway>    
+  <service>
+      <role>WEBHDFS</role>
+      <url>http://localhost:50070/webhdfs</url>
+  </service>
+  <service>
+      <role>WEBHCAT</role>
+      <url>http://localhost:50111/templeton</url>
+  </service>
+</topology>
+```
+
+The following table describes the configuration options for the SSO cookie provider:
+
+##### Descriptions #####
+
+Name | Description | Default
+---------|-----------|---------
+sso.authentication.provider.url | Required parameter that indicates the 
location of the KnoxSSO endpoint and where to redirect the useragent when no 
SSO cookie is found in the incoming request. | N/A
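+For non-browser testing, a previously acquired KnoxSSO token can be presented to such a topology explicitly as the `hadoop-jwt` cookie; the token value below is a placeholder:
+
+    curl -ik --cookie "hadoop-jwt=<token>" \
+        'https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS'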
+
+### JWT Provider ###
+
+#### Overview ####
+The JWT federation provider accepts JWT tokens as Bearer tokens within the 
Authorization header of the incoming request. Upon successfully extracting and 
verifying the token, the request is then processed on behalf of the user 
represented by the JWT token.
+
+This provider is closely related to the Knox Token Service and is essentially 
the provider that is used to consume the tokens issued by the Knox Token 
Service.
+
+Typical deployments have the KnoxToken service defined in a topology, such as `sandbox.xml`, that authenticates users based on username and password, as with the ShiroProvider. They also have a topology dedicated to clients that wish to use Knox Tokens to access Hadoop resources through Knox.
+
+The following provider configuration can be used within such a topology.
+
+    <provider>
+       <role>federation</role>
+       <name>JWTProvider</name>
+       <enabled>true</enabled>
+       <param>
+           <name>knox.token.audiences</name>
+           <value>tokenbased</value>
+       </param>
+    </provider>
+
+The `knox.token.audiences` parameter above indicates that any token in an incoming request must contain an audience claim called "tokenbased". In this case, the idea is that the issuing KnoxToken service will be configured to include such an audience claim and that the resulting token is valid for use in the topology that contains configuration like the above. This would generally be the name of the topology, but you can standardize on anything.
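+As an illustrative flow (the topology names follow the examples above and are assumptions about a particular deployment), a client might first acquire a token from a KnoxToken service deployed in a password-protected topology, and then present it as a Bearer token to the token-based topology:
+
+    # acquire a token (KnoxToken service in the sandbox topology)
+    curl -iku guest:guest-password 'https://localhost:8443/gateway/sandbox/knoxtoken/api/v1/token'
+
+    # use the returned access_token against the JWTProvider-protected topology
+    curl -ik -H "Authorization: Bearer <access_token>" \
+        'https://localhost:8443/gateway/tokenbased/webhdfs/v1/tmp?op=LISTSTATUS'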
+
+The following table describes the configuration options for the JWT federation 
provider:
+
+##### Descriptions #####
+
+Name | Description | Default
+---------|-----------|--------
+knox.token.audiences | Optional parameter. This parameter allows the administrator to constrain the use of tokens on this endpoint to those that carry at least one of the configured audience claims. These claims have associated configuration within the KnoxToken service as well. This provides a way to ensure that a token issued based on authentication to a particular LDAP server or other IdP is accepted, but not others. | N/A
+
+See the documentation for the Knox Token service for related details.

Added: 
knox/trunk/books/1.4.0/config_tls_client_certificate_authentication_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_tls_client_certificate_authentication_provider.md?rev=1863668&view=auto
==============================================================================
--- 
knox/trunk/books/1.4.0/config_tls_client_certificate_authentication_provider.md 
(added)
+++ 
knox/trunk/books/1.4.0/config_tls_client_certificate_authentication_provider.md 
Tue Jul 23 21:27:15 2019
@@ -0,0 +1,31 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+## TLS Client Certificate Provider ##
+
+The TLS client certificate authentication provider establishes the user identity based on the client-provided TLS certificate. The user will be the DN from the certificate. This provider requires that the gateway is configured to request or require client authentication with either `gateway.client.auth.wanted` or `gateway.client.auth.needed` ( #[Mutual Authentication with SSL] ).
+
+### Configuration ###
+
+```xml
+<provider>
+    <role>authentication</role>
+    <name>ClientCert</name>
+    <enabled>true</enabled>
+</provider>
+```
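
For a quick sketch of how a client might exercise this provider, a throwaway key and self-signed certificate can be generated with `openssl`. The file names and subject below are illustrative only; in a real deployment the client certificate must be issued by a CA that the gateway's configured truststore trusts:

```shell
# Generate an illustrative client key and self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout client.key -out client.pem \
    -subj "/CN=guest/OU=example/O=hadoop"

# The provider establishes the user as the certificate's DN; inspect it.
openssl x509 -in client.pem -noout -subject
```

The certificate can then be presented on requests through the gateway, e.g. `curl -k --cert client.pem --key client.key https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS` (topology name illustrative).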
+

Added: knox/trunk/books/1.4.0/config_webappsec_provider.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/config_webappsec_provider.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/config_webappsec_provider.md (added)
+++ knox/trunk/books/1.4.0/config_webappsec_provider.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,132 @@
+<!---
+   Licensed to the Apache Software Foundation (ASF) under one or more
+   contributor license agreements.  See the NOTICE file distributed with
+   this work for additional information regarding copyright ownership.
+   The ASF licenses this file to You under the Apache License, Version 2.0
+   (the "License"); you may not use this file except in compliance with
+   the License.  You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
+--->
+
+### Web App Security Provider ###
+Knox is a Web API (REST) Gateway for Hadoop. The fact that REST interactions 
are HTTP based means that they are vulnerable to a number of web application 
security vulnerabilities. This project introduces a web application security 
provider for plugging in various protection filters.
+
+There are several aspects of web application security that are handled now: Cross Site Request Forgery (CSRF), Cross Origin Resource Sharing (CORS), X-Frame-Options, X-Content-Type-Options and HTTP Strict-Transport-Security. Others will be added in future releases.
+
+#### CSRF
+Cross site request forgery (CSRF) attacks attempt to force an authenticated user to execute functionality without their knowledge, by presenting them with a link or image that, when clicked, invokes a request to another site with which the user may have already established an active session.
+
+CSRF is entirely a browser-based attack. Some background knowledge of how 
browsers work enables us to provide a filter that will prevent CSRF attacks. 
HTTP requests from a web browser performed via form, image, iframe, etc. are 
unable to set custom HTTP headers. The only way to create an HTTP request from a 
browser with a custom HTTP header is to use a technology such as JavaScript 
XMLHttpRequest or Flash. These technologies can set custom HTTP headers but 
have security policies built in to prevent web sites from sending requests to 
each other 
+unless specifically allowed by policy. 
+
+This means that a website www.bad.com cannot send a request to http://bank.example.com with the custom header X-XSRF-Header unless it uses a technology such as XMLHttpRequest. That technology would prevent such a request from being made unless the bank.example.com domain specifically allowed it. This results in a REST endpoint that can only be called via XMLHttpRequest (or similar technology).
+
+NOTE: by enabling this protection within the topology, this custom header will 
be required for *all* clients that interact with it - not just browsers.
+
+#### CORS
+For security reasons, browsers restrict cross-origin HTTP requests initiated 
from within scripts. For example, XMLHttpRequest follows the same-origin 
policy. So, a web application using XMLHttpRequest could only make HTTP 
requests to its own domain. To improve web applications, developers asked 
browser vendors to allow XMLHttpRequest to make cross-domain requests.
+
+Cross Origin Resource Sharing is a way to explicitly alter the same-origin 
policy for a given application or API. In order to allow for applications to 
make cross domain requests through Apache Knox, we need to configure the CORS 
filter of the WebAppSec provider.
+
+#### HTTP Strict-Transport-Security - HSTS
+HTTP Strict Transport Security (HSTS) is a web security policy mechanism which 
helps to protect websites against protocol downgrade attacks and cookie 
hijacking. It allows web servers to declare that web browsers (or other 
complying user agents) should only interact with it using secure HTTPS 
connections and never via the insecure HTTP protocol.
+
+
+#### Configuration ####
+##### Overview #####
+As with all providers in the Knox gateway, the web app security provider is 
configured through provider parameters. Unlike many other providers, the web 
app security provider may actually host multiple vulnerability/security 
filters. Currently there are implementations for CSRF, CORS, X-Frame-Options, X-Content-Type-Options and HTTP Strict-Transport-Security, but others might follow, and you may be interested in creating your own.
+
+Because of this one-to-many provider/filter relationship, there is an extra 
configuration element for this provider per filter. As you can see in the 
sample below, the actual filter configuration is defined entirely within the 
parameters of the WebAppSec provider.
+
+    <provider>
+        <role>webappsec</role>
+        <name>WebAppSec</name>
+        <enabled>true</enabled>
+        <param><name>csrf.enabled</name><value>true</value></param>
+        
<param><name>csrf.customHeader</name><value>X-XSRF-Header</value></param>
+        
<param><name>csrf.methodsToIgnore</name><value>GET,OPTIONS,HEAD</value></param>
+        <param><name>cors.enabled</name><value>true</value></param>
+        <param><name>xframe.options.enabled</name><value>true</value></param>
+        <param><name>strict.transport.enabled</name><value>true</value></param>
+    </provider>
+
+#### Descriptions ####
+The following tables describes the configuration options for the web app 
security provider:
+
+##### CSRF
+
+###### Config
+
+Name                 | Description | Default
+---------------------|-------------|--------
+csrf.enabled         | This parameter enables the CSRF protection capabilities 
| false  
+csrf.customHeader    | This is an optional parameter that indicates the name 
of the header to be used in order to determine that the request is from a 
trusted source. It defaults to the header name described by the NSA in its 
guidelines for dealing with CSRF in REST. | X-XSRF-Header
+csrf.methodsToIgnore | This is also an optional parameter that enumerates the HTTP methods to allow through without the custom HTTP header. This is useful for allowing things like GET requests from the URL bar of a browser, but it assumes that the GET request adheres to REST principles in terms of being idempotent. If this cannot be assumed then it would be wise to not include GET in the list of methods to ignore. |  GET,OPTIONS,HEAD
+
+###### REST Invocation
+The following curl command can be used to request a directory listing from 
HDFS while passing in the expected header X-XSRF-Header.
+
+    curl -k -i --header "X-XSRF-Header: valid" -v -u guest:guest-password 
https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS
+
+Omitting the `--header "X-XSRF-Header: valid"` above should result in an HTTP 
400 bad_request.
+
+Disabling the provider will then allow a request that is missing the header 
through. 
+
+##### CORS
+
+###### Config
+
+Name                         | Description | Default
+-----------------------------|-------------|---------
+cors.enabled                 | This parameter enables the CORS 
capabilities|false
+cors.allowGenericHttpRequests| {true\|false} defaults to true. If true, 
generic HTTP requests will be allowed to pass through the filter, else only 
valid and accepted CORS requests will be allowed (strict CORS filtering).|true
+cors.allowOrigin             | {"\*"\|origin-list} defaults to "\*". 
Whitespace-separated list of origins that the CORS filter must allow. Requests 
from origins not included here will be refused with an HTTP 403 "Forbidden" 
response. If set to \* (asterisk) any origin will be allowed.|"\*"
+cors.allowSubdomains         | {true\|false} defaults to false. If true, the 
CORS filter will allow requests from any origin which is a subdomain origin of 
the allowed origins. A subdomain is matched by comparing its scheme and suffix 
(host name / IP address and optional port number).|false
+cors.supportedMethods        | {method-list} defaults to GET, POST, HEAD, 
OPTIONS. List of the supported HTTP methods. These are advertised through the 
Access-Control-Allow-Methods header and must also be implemented by the actual 
CORS web service. Requests for methods not included here will be refused by the 
CORS filter with an HTTP 405 "Method not allowed" response.| GET, POST, HEAD, 
OPTIONS
+cors.supportedHeaders        | {"\*"\|header-list} defaults to \*. The names 
of the supported author request headers. These are advertised through the 
Access-Control-Allow-Headers header. If the configuration property value is set 
to \* (asterisk) any author request header will be allowed. The CORS Filter 
implements this by simply echoing the requested value back to the browser.|\*
+cors.exposedHeaders          | {header-list} defaults to empty list. List of 
the response headers other than simple response headers that the browser should 
expose to the author of the cross-domain request through the 
XMLHttpRequest.getResponseHeader() method. The CORS filter supplies this 
information through the Access-Control-Expose-Headers header.| empty
+cors.supportsCredentials     | {true\|false} defaults to true. Indicates 
whether user credentials, such as cookies, HTTP authentication or client-side 
certificates, are supported. The CORS filter uses this value in constructing 
the Access-Control-Allow-Credentials header.|true
+cors.maxAge                  | {int} defaults to -1 (unspecified). Indicates 
how long the results of a preflight request can be cached by the web browser, 
in seconds. If -1 unspecified. This information is passed to the browser via 
the Access-Control-Max-Age header.| -1
+cors.tagRequests             | {true\|false} defaults to false (no tagging). 
Enables HTTP servlet request tagging to provide CORS information to downstream 
handlers (filters and/or servlets).| false
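
As a sketch, a topology that only accepts strictly validated CORS requests from a single origin might configure the filter as follows; the origin value is illustrative:

    <provider>
        <role>webappsec</role>
        <name>WebAppSec</name>
        <enabled>true</enabled>
        <param><name>cors.enabled</name><value>true</value></param>
        <param><name>cors.allowGenericHttpRequests</name><value>false</value></param>
        <param><name>cors.allowOrigin</name><value>https://app.example.com</value></param>
        <param><name>cors.supportedMethods</name><value>GET,POST,HEAD,OPTIONS</value></param>
    </provider>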
+
+##### X-Frame-Options
+
+Cross Frame Scripting and Clickjacking are attacks that can be prevented by controlling the ability of a third party to embed an application or resource within a Frame, IFrame or Object HTML element. This can be done by adding the X-Frame-Options HTTP header to responses.
+
+###### Config
+
+Name                   | Description | Default
+-----------------------|-------------|---------
+xframe-options.enabled | This parameter enables the X-Frame-Options 
capabilities|false
+xframe-options.value   | This parameter specifies a particular value for the 
X-Frame-Options header. Most often the default value of DENY will be most 
appropriate. You can also use SAMEORIGIN or ALLOW-FROM uri|DENY
+
+##### X-Content-Type-Options
+
+Browser MIME content type sniffing can be exploited for malicious purposes. 
Adding the X-Content-Type-Options HTTP header to responses directs the browser 
to honor the type specified in the Content-Type header, rather than trying to 
determine the type from the content itself. Most modern browsers support this.
+
+###### Config
+
+Name                         | Description | Default
+-----------------------------|-------------|---------
+xcontent-type.options.enabled | This parameter enables the X-Content-Type-Options header inclusion|false
+xcontent-type.options        | This parameter specifies a particular value for the X-Content-Type-Options header. The default value is really the only meaningful value|nosniff
+
+##### HTTP Strict Transport Security
+
+Web applications can be protected from protocol downgrade attacks and cookie hijacking by adding the HTTP Strict-Transport-Security response header.
+
+###### Config
+
+Name                     | Description | Default
+-------------------------|-------------|---------
+strict.transport.enabled | This parameter enables the HTTP 
Strict-Transport-Security response header|false
+strict.transport         | This parameter specifies a particular value for the 
HTTP Strict-Transport-Security header. Default value is max-age=31536000. You 
can also use `max-age=<expire-time>` or `max-age=<expire-time>; 
includeSubDomains` or `max-age=<expire-time>;preload`|max-age=31536000
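
Once enabled, the header can be verified by inspecting the response headers of any request through the gateway; the topology and credentials below are illustrative:

    curl -k -i -u guest:guest-password https://localhost:8443/gateway/sandbox/webhdfs/v1/tmp?op=LISTSTATUS

The response should include a `Strict-Transport-Security: max-age=31536000` header, or whatever value `strict.transport` has been set to.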
+

Added: knox/trunk/books/1.4.0/dev-guide/admin-ui.md
URL: 
http://svn.apache.org/viewvc/knox/trunk/books/1.4.0/dev-guide/admin-ui.md?rev=1863668&view=auto
==============================================================================
--- knox/trunk/books/1.4.0/dev-guide/admin-ui.md (added)
+++ knox/trunk/books/1.4.0/dev-guide/admin-ui.md Tue Jul 23 21:27:15 2019
@@ -0,0 +1,52 @@
+### Introduction
+
+The Admin UI is a work in progress. It started out as a simple web interface for Admin API functions, but will hopefully grow to also provide visibility into the gateway in terms of logs and metrics.
+
+### Source and Binaries
+
+The Admin UI application follows the architecture of a hosted application in 
Knox. To that end it needs to be 
+packaged up in the gateway-applications module in the source tree so that in 
the installation it can wind up here
+
+`<GATEWAY_HOME>/data/applications/admin-ui`
+
+However, since the application is built using Angular and various node modules, the source tree is not something we want to place into the gateway-applications module. Instead, we place the production 'binaries' in gateway-applications and keep the source in a module called 'gateway-admin-ui'.
+ 
+To work with the Angular application you need to install some prerequisite tools.
+
+The main tool needed is the [angular cli](https://github.com/angular/angular-cli#installation); installing it will also pull in the dependencies that should fulfill any other requirements ([Prerequisites](https://github.com/angular/angular-cli#prerequisites)).
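
A typical workflow, assuming Node.js and npm are already installed, might look like the following; the module path is illustrative:

```
npm install -g @angular/cli
cd gateway-admin-ui
npm install
ng build --prod
```

The production bundles produced by the build are what get placed into the gateway-applications module described above.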
+ 
+### Manager Topology
+
+The Admin UI is deployed to a fixed topology. The topology file can be found 
under
+
+`<GATEWAY_HOME>/conf/topologies/manager.xml`
+
+The topology hosts an instance of the Admin API for the UI to use. The reason for this is that the existing Admin API has a different security model from the one required by the Admin UI. The key components of this topology are:
+ 
+```xml
+<provider>
+    <role>webappsec</role>
+    <name>WebAppSec</name>
+    <enabled>true</enabled>
+    <param><name>csrf.enabled</name><value>true</value></param>
+    <param><name>csrf.customHeader</name><value>X-XSRF-Header</value></param>
+    
<param><name>csrf.methodsToIgnore</name><value>GET,OPTIONS,HEAD</value></param>
+    <param><name>xframe-options.enabled</name><value>true</value></param>
+    <param><name>strict.transport.enabled</name><value>true</value></param>
+</provider>
+```
+ 
+and 
+ 
+```xml
+<application>
+    <role>admin-ui</role>
+</application>
+```
+

