http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/ha.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/ha.xml b/docs/user-manual/en/ha.xml
new file mode 100644
index 0000000..f4b2d2b
--- /dev/null
+++ b/docs/user-manual/en/ha.xml
@@ -0,0 +1,539 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="ha">
+    <title>High Availability and Failover</title>
+
+    <para>We define high availability as the <emphasis>ability for the system 
to continue
+       functioning after failure of one or more of the 
servers</emphasis>.</para>
+
+    <para>A part of high availability is <emphasis>failover</emphasis> which we define as the
+       <emphasis>ability for client connections to migrate from one server to another in the event
+          of server failure so client applications can continue to operate</emphasis>.</para>
+
+    <section>
+        <title>Live - Backup Groups</title>
+
+        <para>HornetQ allows servers to be linked together as <emphasis>live - backup</emphasis>
+           groups where each live server can have one or more backup servers. A backup server is
+           owned by only one live server. Backup servers are not operational until failover occurs;
+           however, one chosen backup, which will be in passive mode, announces its status and
+           waits to take over the live server's work.</para>
+
+        <para>Before failover, only the live server serves the HornetQ clients while the backup
+           servers remain passive or wait to become the passive backup. When a live server crashes
+           or is brought down in the correct mode, the backup server currently in passive mode will
+           become live and another backup server will become passive. If a live server restarts
+           after a failover then it will have priority and be the next server to become live when
+           the current live server goes down; if the current live server is configured to allow
+           automatic failback then it will detect the live server coming back up and automatically
+           stop.</para>
+
+        <section id="ha.mode">
+            <title>HA modes</title>
+            <para>HornetQ supports two different strategies for backing up a server:
+               <emphasis>shared store</emphasis> and <emphasis>replication</emphasis>.</para>
+            <note>
+                <para>Only persistent message data will survive failover. Any 
non persistent message
+                   data will not be available after failover.</para>
+            </note>
+        </section>
+
+        <section id="ha.mode.replicated">
+            <title>Data Replication</title>
+            <para>Support for network-based data replication was added in 
version 2.3.</para>
+            <para>When using replication, the live and the backup servers do not share the same
+               data directories; all data synchronization is done over the network. Therefore all
+               (persistent) data received by the live server will be duplicated to the
+               backup.</para>
+            <graphic fileref="images/ha-replicated-store.png" align="center"/>
+            <para>Notice that upon start-up the backup server will first need 
to synchronize all
+               existing data from the live server before becoming capable of 
replacing the live
+               server should it fail. So unlike when using shared storage, a 
replicating backup will
+               not be a fully operational backup right after start-up, but 
only after it finishes
+               synchronizing the data with its live server. The time it will 
take for this to happen
+               will depend on the amount of data to be synchronized and the 
connection speed.</para>
+
+            <note>
+                <para>Synchronization occurs in parallel with current network 
traffic so this won't cause any
+                  blocking on current clients.</para>
+            </note>
+            <para>Replication will create a copy of the data at the backup. One issue to be aware
+               of is that, in case of a successful fail-over, the backup's data will be newer than
+               the data in the live server's storage. If you configure your live server to perform a
+               <xref linkend="ha.allow-fail-back">'fail-back'</xref> when restarted, it will synchronize
+               its data with the backup's. If both servers are shut down, the administrator will have
+               to determine which one has the latest data.</para>
+
+            <para>The replicating live and backup pair must be part of a 
cluster.  The Cluster
+               Connection also defines how backup servers will find the remote 
live servers to pair
+               with.  Refer to <xref linkend="clusters"/> for details on how 
this is done, and how
+               to configure a cluster connection. Notice that:</para>
+
+            <itemizedlist>
+                <listitem>
+                    <para>Both live and backup servers must be part of the 
same cluster.  Notice
+                       that even a simple live/backup replicating pair will 
require a cluster configuration.</para>
+                </listitem>
+                <listitem>
+                    <para>Their cluster user and password must match.</para>
+                </listitem>
+            </itemizedlist>
+
+            <para>Within a cluster, there are two ways that a backup server will locate a live
+               server to replicate from; these are:</para>
+
+            <itemizedlist>
+                <listitem>
+                    <para><literal>specifying a node group</literal>. You can specify a group of
+                       live servers that a backup server can connect to. This is done by configuring
+                       <literal>backup-group-name</literal> in the main
+                       <literal>hornetq-configuration.xml</literal>. A backup server will only
+                       connect to a live server that shares the same node group name.</para>
+                </listitem>
+                <listitem>
+                   <para><literal>connecting to any live</literal>. Simply put, not configuring
+                      <literal>backup-group-name</literal> will allow a backup server to connect
+                      to any live server.</para>
+                </listitem>
+            </itemizedlist>
+            <note>
+                <para>A <literal>backup-group-name</literal> example: suppose 
you have 5 live servers and 6 backup
+                   servers:</para>
+                <itemizedlist>
+                    <listitem>
+                        <para><literal>live1</literal>, 
<literal>live2</literal>, <literal>live3</literal>: with
+                           <literal>backup-group-name=fish</literal></para>
+                    </listitem>
+                    <listitem>
+                       <para><literal>live4</literal>, 
<literal>live5</literal>: with <literal>backup-group-name=bird</literal></para>
+                    </listitem>
+                    <listitem>
+                       <para><literal>backup1</literal>, 
<literal>backup2</literal>, <literal>backup3</literal>,
+                          <literal>backup4</literal>: with 
<literal>backup-group-name=fish</literal></para>
+                    </listitem>
+                    <listitem>
+                       <para><literal>backup5</literal>, 
<literal>backup6</literal>: with
+                          <literal>backup-group-name=bird</literal></para>
+                    </listitem>
+                </itemizedlist>
+                <para>After joining the cluster the backups with <literal>backup-group-name=fish</literal> will
+                   search for live servers with <literal>backup-group-name=fish</literal> to pair with. Since there
+                   is one backup too many, the <literal>fish</literal> group will be left with one
+                   spare backup.</para>
+                <para>The 2 backups with 
<literal>backup-group-name=bird</literal> (<literal>backup5</literal> and
+                   <literal>backup6</literal>) will pair with live servers 
<literal>live4</literal> and
+                   <literal>live5</literal>.</para>
+            </note>
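+            <para>As an illustrative sketch of the grouping above (the group values are just those
+               from the example), each server declares its group in its
+               <literal>hornetq-configuration.xml</literal> with the
+               <literal>backup-group-name</literal> element:</para>
+            <programlisting>
+&lt;!-- on live1, live2, live3 and backup1..backup4 -->
+&lt;backup-group-name>fish&lt;/backup-group-name>
+
+&lt;!-- on live4, live5, backup5 and backup6 -->
+&lt;backup-group-name>bird&lt;/backup-group-name>
+            </programlisting>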
+            <para>The backup will search for any live server that it is 
configured to connect to. It then tries to
+               replicate with each live server in turn until it finds a live 
server that has no current backup
+               configured. If no live server is available it will wait until the cluster topology
+               changes and repeat the process.</para>
+            <note>
+               <para>This is an important distinction from a shared-store 
backup, as in that case if
+                  the backup starts and does not find its live server, the 
server will just activate
+                  and start to serve client requests. In the replication case, 
the backup just keeps
+                  waiting for a live server to pair with. Notice that in 
replication the backup server
+                  does not know whether any data it might have is up to date, 
so it really cannot
+                  decide to activate automatically. To activate a replicating backup server using
+                  the data it has, the administrator must change its configuration to make a live
+                  server of it, that is, change <literal>backup=true</literal> to
+                  <literal>backup=false</literal>.</para>
+            </note>
+
+            <para>Much like in the shared-store case, when the live server 
stops or crashes,
+               its replicating backup will become active and take over its 
duties. Specifically,
+               the backup will become active when it loses connection to its live server. This can
+               be problematic because it can also happen as a result of a temporary network
+               problem. In order to address this issue, the backup will try to determine whether it
+               can still connect to the other servers in the cluster. If it can connect to more
+               than half the servers, it will become active; if more than half the servers also
+               disappeared with the live, the backup will wait and try reconnecting with the live.
+               This avoids a split brain situation.</para>
+
+            <section>
+                <title>Configuration</title>
+
+                <para>To configure the live and backup servers to be a 
replicating pair, configure
+                   both servers' <literal>hornetq-configuration.xml</literal> 
to have:</para>
+
+                <programlisting>
+&lt;!-- FOR BOTH LIVE AND BACKUP SERVERS -->
+&lt;shared-store>false&lt;/shared-store>
+.
+.
+&lt;cluster-connections>
+   &lt;cluster-connection name="my-cluster">
+      ...
+   &lt;/cluster-connection>
+&lt;/cluster-connections>
+                </programlisting>
+
+                <para>The backup server must also be configured as a 
backup.</para>
+
+                <programlisting>
+&lt;backup>true&lt;/backup>
+</programlisting>
+            </section>
+        </section>
+
+        <section id="ha.mode.shared">
+            <title>Shared Store</title>
+            <para>When using a shared store, both live and backup servers 
share the
+               <emphasis>same</emphasis> entire data directory using a shared 
file system.
+               This includes the paging directory, journal directory, large messages directory
+               and bindings journal.</para>
+            <para>When failover occurs and a backup server takes over, it will 
load the
+               persistent storage from the shared file system and clients can 
connect to
+               it.</para>
+            <para>This style of high availability differs from data 
replication in that it
+               requires a shared file system which is accessible by both the 
live and backup
+               nodes. Typically this will be some kind of high performance 
Storage Area Network
+               (SAN). We do not recommend you use Network Attached Storage (NAS), e.g. NFS
+               mounts, to store any shared journal (NFS is slow).</para>
+            <para>The advantage of shared-store high availability is that no replication occurs
+               between the live and backup nodes; this means it does not suffer any performance
+               penalties due to the overhead of replication during normal operation.</para>
+            <para>The disadvantage of shared-store high availability is that it requires a shared
+               file system, and when the backup server activates it needs to load the journal from
+               the shared store, which can take some time depending on the amount of data in the
+               store.</para>
+            <para>If you require the highest performance during normal operation, have access to
+               a fast SAN, and can live with a slightly slower failover (depending on the amount of
+               data), we recommend shared store high availability.</para>
+            <graphic fileref="images/ha-shared-store.png" align="center"/>
+
+            <section id="ha/mode.shared.configuration">
+                <title>Configuration</title>
+                <para>To configure the live and backup servers to share their store, configure
+                   each server's <literal>hornetq-configuration.xml</literal> as follows:</para>
+                <programlisting>
+&lt;shared-store>true&lt;/shared-store>
+                </programlisting>
+                <para>Additionally, each backup server must be flagged 
explicitly as a backup:</para>
+                <programlisting>
+&lt;backup>true&lt;/backup></programlisting>
+                <para>In order for live - backup groups to operate properly with a shared store,
+                   both servers must have the location of the journal directory configured to point
+                   to the <emphasis>same shared location</emphasis> (as explained in
+                   <xref linkend="configuring.message.journal"/>).</para>
+                <note>
+                    <para>todo write something about GFS</para>
+                </note>
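+                <para>As a minimal sketch (the mount point <literal>/mnt/shared/data</literal> is
+                   only an example path), both the live and the backup server would point their
+                   data directories at the same shared mount:</para>
+                <programlisting>
+&lt;!-- identical on the live and the backup server -->
+&lt;bindings-directory>/mnt/shared/data/bindings&lt;/bindings-directory>
+&lt;journal-directory>/mnt/shared/data/journal&lt;/journal-directory>
+&lt;large-messages-directory>/mnt/shared/data/largemessages&lt;/large-messages-directory>
+&lt;paging-directory>/mnt/shared/data/paging&lt;/paging-directory>
+                </programlisting>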
+                <para>Also each node, live and backups, will need to have a cluster connection
+                   defined even if not part of a cluster. The cluster connection info defines how
+                   backup servers announce their presence to their live server or any other nodes
+                   in the cluster. Refer to <xref linkend="clusters"/> for details on how this is
+                   done.</para>
+            </section>
+        </section>
+        <section id="ha.allow-fail-back">
+            <title>Failing Back to the Live Server</title>
+            <para>After a live server has failed and a backup has taken over its duties, you may
+               want to restart the live server and have clients fail back.</para>
+            <para>In case of "shared disk", simply restart the original live server and kill the
+               new live server. You can do this by killing the process itself or just waiting for
+               the server to crash naturally.</para>
+            <para>In case of a replicating live server that has been replaced by a remote backup
+               you will need to also set <link linkend="hq.check-for-live-server">check-for-live-server</link>.
+               This option is necessary because a starting server cannot know whether there is a
+               (remote) server running in its place, so with this option set, the server will check
+               the cluster for another server using its node-ID and if it finds one it will try to
+               initiate a fail-back. This option only applies to live servers that are restarting;
+               it is ignored by backup servers.</para>
+            <para>It is also possible to cause failover to occur on normal server shutdown. To
+               enable this, set the following property to true in the
+               <literal>hornetq-configuration.xml</literal> configuration file like so:</para>
+            <programlisting>
+&lt;failover-on-shutdown>true&lt;/failover-on-shutdown></programlisting>
+            <para>By default this is set to false; if by some chance you have left this set to
+               false but still want to stop the server normally and cause failover, then you can do
+               this by using the management API as explained in
+               <xref linkend="management.core.server"/>.</para>
+            <para>You can also force the running live server to shut down when the old live server
+               comes back up, allowing the original live server to take over automatically, by
+               setting the following property in the <literal>hornetq-configuration.xml</literal>
+               configuration file:</para>
+            <programlisting>
+&lt;allow-failback>true&lt;/allow-failback></programlisting>
+            <para id="hq.check-for-live-server">In replication HA mode you 
need to set an extra property <literal>check-for-live-server</literal>
+               to <literal>true</literal>. If set to true, during start-up a 
live server will first search the cluster for another server using its nodeID. 
If it finds one, it will contact this server and try to "fail-back". Since this 
is a remote replication scenario, the "starting live" will have to synchronize 
its data with the server running with its ID, once they are in sync, it will 
request the other server (which it assumes it is a back that has assumed its 
duties) to shutdown for it to take over. This is necessary because otherwise 
the live server has no means to know whether there was a fail-over or not, and 
if there was if the server that took its duties is still running or not. To 
configure this option at your <literal>hornetq-configuration.xml</literal> 
configuration file as follows:</para>
+            <programlisting>
+&lt;check-for-live-server>true&lt;/check-for-live-server></programlisting>
+        </section>
+        <section id="ha.colocated">
+            <title>Colocated Backup Servers</title>
+            <para>It is also possible when running standalone to colocate backup servers in the
+                same JVM as another live server. The colocated backup will become a backup for
+                another live server in the cluster, but not the one it shares the JVM with. To
+                configure a colocated backup server simply add the following to the
+                <literal>hornetq-configuration.xml</literal> file:</para>
+            <programlisting>
+&lt;backup-servers>
+    &lt;backup-server name="backup2" inherit-configuration="true" 
port-offset="1000">
+        &lt;configuration>
+            
&lt;bindings-directory>target/server1/data/messaging/bindings&lt;/bindings-directory>
+            
&lt;journal-directory>target/server1/data/messaging/journal&lt;/journal-directory>
+            
&lt;large-messages-directory>target/server1/data/messaging/largemessages&lt;/large-messages-directory>
+            
&lt;paging-directory>target/server1/data/messaging/paging&lt;/paging-directory>
+        &lt;/configuration>
+    &lt;/backup-server>
+&lt;/backup-servers>
+            </programlisting>
+            <para>You will notice three attributes on the <literal>backup-server</literal> element:
+                <literal>name</literal>, which is a unique name used to identify the backup server;
+                <literal>inherit-configuration</literal>, which if set to true means the server will
+                inherit the configuration of its parent server; and <literal>port-offset</literal>,
+                which is the amount by which the port of any Netty connectors or acceptors will be
+                increased if the configuration is inherited.</para>
+            <para>It is also possible to configure the backup server in the normal way; in this
+                example you will notice we have changed the journal directories.</para>
+        </section>
+    </section>
+    <section id="failover">
+        <title>Failover Modes</title>
+        <para>HornetQ defines two types of client failover:</para>
+        <itemizedlist>
+            <listitem>
+                <para>Automatic client failover</para>
+            </listitem>
+            <listitem>
+                <para>Application-level client failover</para>
+            </listitem>
+        </itemizedlist>
+        <para>HornetQ also provides 100% transparent automatic reattachment of 
connections to the
+            same server (e.g. in case of transient network problems). This is 
similar to failover,
+            except it is reconnecting to the same server and is discussed in
+            <xref linkend="client-reconnection"/>.</para>
+        <para>During failover, if the client has consumers on any non 
persistent or temporary
+            queues, those queues will be automatically recreated during 
failover on the backup node,
+            since the backup node will not have any knowledge of non 
persistent queues.</para>
+        <section id="ha.automatic.failover">
+            <title>Automatic Client Failover</title>
+            <para>HornetQ clients can be configured to receive knowledge of all live and backup
+                servers, so that in the event of a failure of the client - live server connection, the
+                client will detect this and reconnect to the backup server.
The backup server will
+                then automatically recreate any sessions and consumers that 
existed on each
+                connection before failover, thus saving the user from having 
to hand-code manual
+                reconnection logic.</para>
+            <para>HornetQ clients detect connection failure when they have not received packets from
+                the server within the time given by <literal>client-failure-check-period</literal>
+                as explained in section <xref linkend="connection-ttl"/>. If the client does not
+                receive data in good time, it will assume the connection has failed and attempt
+                failover. Also if the socket is closed by the OS, usually when the server process is
+                killed rather than the machine itself crashing, then the client will fail over
+                straight away.</para>
+            <para>HornetQ clients can be configured to discover the list of live-backup server groups in a
+                number of different ways. They can be configured explicitly, or (probably the most
+                common approach) they can use <emphasis>server discovery</emphasis> to discover the
+                list automatically. For full details on how to configure
+                server discovery, please see <xref linkend="clusters"/>.
+                Alternatively, the clients can explicitly connect to a specific server and download
+                the current servers and backups; see <xref linkend="clusters"/>.</para>
+            <para>To enable automatic client failover, the client must be 
configured to allow
+                non-zero reconnection attempts (as explained in <xref 
linkend="client-reconnection"
+                />).</para>
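+            <para>As a minimal sketch (the factory name and the values are only examples; the
+                elements are the reconnection settings described in
+                <xref linkend="client-reconnection"/>), a JMS connection factory in
+                <literal>hornetq-jms.xml</literal> could allow unlimited reconnection attempts
+                like so:</para>
+            <programlisting>
+&lt;connection-factory name="ConnectionFactory">
+   ...
+   &lt;retry-interval>1000&lt;/retry-interval>
+   &lt;reconnect-attempts>-1&lt;/reconnect-attempts>
+&lt;/connection-factory>
+            </programlisting>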
+            <para>By default failover will only occur after at least one 
connection has been made to
+                the live server. In other words, by default, failover will not 
occur if the client
+                fails to make an initial connection to the live server - in 
this case it will simply
+                retry connecting to the live server according to the 
reconnect-attempts property and
+                fail after this number of attempts.</para>
+            <section>
+                <title>Failing over on the Initial Connection</title>
+                <para>
+                    Since the client does not learn about the full topology until after the first
+                    connection is made, there is a window where it does not know about the backup. If a failure happens at
+                    this point the client can only try reconnecting to the original live server. To configure
+                    how many attempts the client will make you can set the property <literal>initialConnectAttempts</literal>
+                    on the <literal>ClientSessionFactoryImpl</literal> or <literal>HornetQConnectionFactory</literal>, or
+                    <literal>initial-connect-attempts</literal> in XML. The default for this is <literal>0</literal>, that
+                    is, try only once. Once that number of attempts has been made an exception will be thrown.
+                </para>
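+                <para>A small sketch of both forms (the value <literal>3</literal> is only an
+                    example, and the Java setter name is assumed from the property name given
+                    above):</para>
+                <programlisting>
+// assumed JavaBean-style setter for the initialConnectAttempts property
+connectionFactory.setInitialConnectAttempts(3);</programlisting>
+                <programlisting>
+&lt;initial-connect-attempts>3&lt;/initial-connect-attempts></programlisting>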
+            </section>
+            <para>For examples of automatic failover with transacted and 
non-transacted JMS
+                sessions, please see <xref 
linkend="examples.transaction-failover"/> and <xref
+                    linkend="examples.non-transaction-failover"/>.</para>
+            <section id="ha.automatic.failover.noteonreplication">
+                <title>A Note on Server Replication</title>
+                <para>HornetQ does not replicate full server state between 
live and backup servers.
+                    When the new session is automatically recreated on the 
backup it won't have any
+                    knowledge of messages already sent or acknowledged in that 
session. Any
+                    in-flight sends or acknowledgements at the time of 
failover might also be
+                    lost.</para>
+                <para>By replicating full server state, theoretically we could 
provide a 100%
+                    transparent seamless failover, which would avoid any lost 
messages or
+                    acknowledgements, however this comes at a great cost: 
replicating the full
+                    server state (including the queues, session, etc.). This 
would require
+                    replication of the entire server state machine; every 
operation on the live
+                    server would have to replicated on the replica server(s) 
in the exact same
+                    global order to ensure a consistent replica state. This is 
extremely hard to do
+                    in a performant and scalable way, especially when one 
considers that multiple
+                    threads are changing the live server state 
concurrently.</para>
+                <para>It is possible to provide full state machine replication 
using techniques such
+                    as <emphasis role="italic">virtual synchrony</emphasis>, 
but this does not scale
+                    well and effectively serializes all operations to a single 
thread, dramatically
+                    reducing concurrency.</para>
+                <para>Other techniques for multi-threaded active replication 
exist such as
+                    replicating lock states or replicating thread scheduling 
but this is very hard
+                    to achieve at a Java level.</para>
+                <para>Consequently it was decided that it was not worth massively reducing performance
+                    and concurrency for the sake of 100% transparent failover. 
Even without 100%
+                    transparent failover, it is simple to guarantee <emphasis 
role="italic">once and
+                        only once</emphasis> delivery, even in the case of 
failure, by using a
+                    combination of duplicate detection and retrying of 
transactions. However this is
+                    not 100% transparent to the client code.</para>
+            </section>
+            <section id="ha.automatic.failover.blockingcalls">
+                <title>Handling Blocking Calls During Failover</title>
+                <para>If the client code is in a blocking call to the server, 
waiting for a response
+                    to continue its execution, when failover occurs, the new 
session will not have
+                    any knowledge of the call that was in progress. This call 
might otherwise hang
+                    forever, waiting for a response that will never come.</para>
+                <para>To prevent this, HornetQ will unblock any blocking calls 
that were in progress
+                    at the time of failover by making them throw a <literal
+                        >javax.jms.JMSException</literal> (if using JMS), or a 
<literal
+                        >HornetQException</literal> with error code <literal
+                        >HornetQException.UNBLOCKED</literal>. It is up to the 
client code to catch
+                    this exception and retry any operations if desired.</para>
+                <para>If the method being unblocked is a call to commit(), or 
prepare(), then the
+                    transaction will be automatically rolled back and HornetQ 
will throw a <literal
+                        >javax.jms.TransactionRolledBackException</literal> 
(if using JMS), or a
+                        <literal>HornetQException</literal> with error code 
<literal
+                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> if 
using the core
+                    API.</para>
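+                <para>A minimal JMS-side sketch of this pattern (the
+                    <literal>sendWithRetry</literal> helper and its retry policy are only
+                    illustrative, not part of the HornetQ API; a real application decides itself
+                    whether and how to retry):</para>
+                <programlisting>
+// Hypothetical helper: retry a blocking send once if the call was unblocked by failover.
+void sendWithRetry(javax.jms.MessageProducer producer, javax.jms.Message message)
+   throws javax.jms.JMSException
+{
+   try
+   {
+      producer.send(message);
+   }
+   catch (javax.jms.JMSException e)
+   {
+      // The blocking call was unblocked by failover; whether the first send reached
+      // the server is unknown, so the application chooses how to retry.
+      producer.send(message);
+   }
+}</programlisting>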
+            </section>
+            <section id="ha.automatic.failover.transactions">
+                <title>Handling Failover With Transactions</title>
+                <para>If the session is transactional and messages have 
already been sent or
+                    acknowledged in the current transaction, then the server 
cannot be sure that
+                    messages sent or acknowledgements have not been lost 
during the failover.</para>
+                <para>Consequently the transaction will be marked as 
rollback-only, and any
+                    subsequent attempt to commit it will throw a <literal
+                        >javax.jms.TransactionRolledBackException</literal> 
(if using JMS), or a
+                        <literal>HornetQException</literal> with error code 
<literal
+                        >HornetQException.TRANSACTION_ROLLED_BACK</literal> if 
using the core
+                    API.</para>
+               <warning>
+                  <title>2 phase commit</title>
+                  <para>
+                     The caveat to this rule is when XA is used either via JMS 
or through the core API.
+                     If 2 phase commit is used and prepare has already been 
called then rolling back could
+                     cause a <literal>HeuristicMixedException</literal>. 
Because of this the commit will throw
+                     an <literal>XAException.XA_RETRY</literal> exception. This 
informs the Transaction Manager
+                     that it should retry the commit at some later point in 
time, a side effect of this is
+                     that any non persistent messages will be lost. To avoid 
this use persistent
+                     messages when using XA. With acknowledgements this is not 
an issue since they are
+                     flushed to the server before prepare gets called.
+                  </para>
+               </warning>
+                <para>It is up to the user to catch the exception, and perform 
any client side local
+                    rollback code as necessary. There is no need to manually 
rollback the session -
+                    it is already rolled back. The user can then just retry 
the transactional
+                    operations again on the same session.</para>
+                <para>HornetQ ships with a fully functioning example 
demonstrating how to do this,
+                    please see <xref 
linkend="examples.transaction-failover"/></para>
+                <para>If failover occurs when a commit call is being executed, 
the server, as
+                    previously described, will unblock the call to prevent a 
hang, since no response
+                    will come back. In this case it is not easy for the client 
to determine whether
+                    the transaction commit was actually processed on the live 
server before failure
+                    occurred.</para>
+               <note>
+                  <para>
+                     If XA is being used either via JMS or through the core 
API then an <literal>XAException.XA_RETRY</literal>
+                     is thrown. This is to inform Transaction Managers that a 
retry should occur at some point. At
+                     some later point in time the Transaction Manager will 
retry the commit. If the original
+                     commit has not occurred then it will still exist and be committed; if it does
+                     not exist then it is assumed to have been committed, although the transaction
+                     manager may log a warning.
+                  </para>
+               </note>
+                <para>To remedy this, the client can simply enable duplicate 
detection (<xref
+                        linkend="duplicate-detection"/>) in the transaction, 
and retry the
+                    transaction operations again after the call is unblocked. 
If the transaction had
+                    indeed been committed on the live server successfully 
before failover, then when
+                    the transaction is retried, duplicate detection will 
ensure that any durable
+                    messages resent in the transaction will be ignored on the 
server to prevent them
+                    getting sent more than once.</para>
+                <note>
+                    <para>By catching the rollback exceptions and retrying, 
catching unblocked calls
+                        and enabling duplicate detection, once and only once 
delivery guarantees for
+                        messages can be provided in the case of failure, 
guaranteeing 100% no loss
+                        or duplication of messages.</para>
+                </note>
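+                <para>A rough sketch of the retry loop (<literal>session</literal> and
+                    <literal>producer</literal> are an existing transacted JMS session and producer,
+                    <literal>uniqueId</literal> is an application-generated identifier, and the
+                    duplicate-id property name is assumed to be the one described in
+                    <xref linkend="duplicate-detection"/>):</para>
+                <programlisting>
+boolean committed = false;
+while (!committed)
+{
+   try
+   {
+      javax.jms.TextMessage message = session.createTextMessage("order");
+      // Assumed duplicate-detection property name; see the duplicate detection chapter.
+      message.setStringProperty("_HQ_DUPL_ID", uniqueId);
+      producer.send(message);
+      session.commit();
+      committed = true;
+   }
+   catch (javax.jms.TransactionRolledBackException e)
+   {
+      // The transaction was rolled back by failover; simply retry it.
+      // Duplicate detection ensures the message is not processed twice.
+   }
+}</programlisting>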
+            </section>
+            <section id="ha.automatic.failover.nontransactional">
+                <title>Handling Failover With Non Transactional 
Sessions</title>
+                <para>If the session is non transactional, messages or 
acknowledgements can be lost
+                    in the event of failover.</para>
+                <para>If you wish to provide <emphasis role="italic">once and only once</emphasis>
+                    delivery guarantees for non transacted sessions too, enable duplicate
+                    detection and catch unblocked exceptions as described in
+                    <xref linkend="ha.automatic.failover.blockingcalls"/>.</para>
+            </section>
+        </section>
+        <section>
+            <title>Getting Notified of Connection Failure</title>
+            <para>JMS provides a standard mechanism for getting notified asynchronously of
+                connection failure: <literal>javax.jms.ExceptionListener</literal>. Please consult
+                the JMS javadoc or any good JMS tutorial for more information 
on how to use
+                this.</para>
+            <para>The HornetQ core API also provides a similar feature in the form of the class
+                <literal>org.hornetq.core.client.SessionFailureListener</literal>.</para>
+            <para>Any ExceptionListener or SessionFailureListener instance will always be called by
+                HornetQ in the event of connection failure, <emphasis role="bold"
+                    >irrespective</emphasis> of whether the connection was successfully failed over,
+                reconnected or reattached. However, you can find out if reconnect or reattach has
+                happened either from the <literal>failedOver</literal> flag passed in on the
+                <literal>connectionFailed</literal> method of <literal>SessionFailureListener</literal>,
+                or by inspecting the error code on the <literal>javax.jms.JMSException</literal>,
+                which will be one of the following:</para>
+           <table frame="topbot" border="2">
+              <title>JMSException error codes</title>
+              <tgroup cols="2">
+                 <colspec colname="c1" colnum="1"/>
+                 <colspec colname="c2" colnum="2"/>
+                 <thead>
+                    <row>
+                       <entry>Error code</entry>
+                       <entry>Description</entry>
+                    </row>
+                 </thead>
+                 <tbody>
+                    <row>
+                       <entry>FAILOVER</entry>
+                       <entry>
+                          Failover has occurred and we have successfully 
reattached or reconnected.
+                       </entry>
+                    </row>
+                    <row>
+                       <entry>DISCONNECT</entry>
+                       <entry>
+                          No failover has occurred and we are disconnected.
+                       </entry>
+                    </row>
+                 </tbody>
+              </tgroup>
+           </table>
+        </section>
+        <section>
+            <title>Application-Level Failover</title>
+            <para>In some cases you may not want automatic client failover, and prefer to handle any
+                connection failure yourself, coding your own manual reconnection logic in your
+                own failure handler. We define this as <emphasis>application-level</emphasis>
+                failover, since the failover is handled at the user application level.</para>
+            <para>To implement application-level failover, if you're using JMS 
then you need to set
+                an <literal>ExceptionListener</literal> class on the JMS 
connection. The
+                <literal>ExceptionListener</literal> will be called by HornetQ 
in the event that
+                connection failure is detected. In your <literal>ExceptionListener</literal>, you
+                would close your old JMS connections, potentially look up new connection factory
+                instances from JNDI and create new connections. In this case you may well be using
+                <ulink url="http://www.jboss.org/community/wiki/JBossHAJNDIImpl">HA-JNDI</ulink>
+                to ensure that the new connection factory is looked up from a different server.</para>
+            <para>For a working example of application-level failover, please 
see
+                <xref linkend="application-level-failover"/>.</para>
+            <para>If you are using the core API, then the procedure is very 
similar: you would set a
+                    <literal>FailureListener</literal> on the core 
<literal>ClientSession</literal>
+                instances.</para>
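+            <para>A bare-bones sketch of the JMS side (<literal>connection</literal> is an existing
+                JMS connection; the reconnection logic is left as comments because it is entirely
+                application specific):</para>
+            <programlisting>
+connection.setExceptionListener(new javax.jms.ExceptionListener()
+{
+   public void onException(javax.jms.JMSException exception)
+   {
+      // Called by HornetQ when connection failure is detected.
+      // Close the old connection here, look up a new connection factory
+      // (e.g. from HA-JNDI) and re-create connections, sessions and consumers.
+   }
+});</programlisting>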
+        </section>
+    </section>
+</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/architecture1.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture1.jpg 
b/docs/user-manual/en/images/architecture1.jpg
new file mode 100644
index 0000000..cb1161f
Binary files /dev/null and b/docs/user-manual/en/images/architecture1.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/architecture2.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture2.jpg 
b/docs/user-manual/en/images/architecture2.jpg
new file mode 100644
index 0000000..274f578
Binary files /dev/null and b/docs/user-manual/en/images/architecture2.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/architecture3.jpg
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/architecture3.jpg 
b/docs/user-manual/en/images/architecture3.jpg
new file mode 100644
index 0000000..3c1dfd5
Binary files /dev/null and b/docs/user-manual/en/images/architecture3.jpg differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/console1.png
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/console1.png 
b/docs/user-manual/en/images/console1.png
new file mode 100644
index 0000000..19b6cbd
Binary files /dev/null and b/docs/user-manual/en/images/console1.png differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/ha-replicated-store.png
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/ha-replicated-store.png 
b/docs/user-manual/en/images/ha-replicated-store.png
new file mode 100644
index 0000000..9065dfe
Binary files /dev/null and b/docs/user-manual/en/images/ha-replicated-store.png 
differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/ha-shared-store.png
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/ha-shared-store.png 
b/docs/user-manual/en/images/ha-shared-store.png
new file mode 100644
index 0000000..0be2766
Binary files /dev/null and b/docs/user-manual/en/images/ha-shared-store.png 
differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/hornetQ-banner_final.png
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/hornetQ-banner_final.png 
b/docs/user-manual/en/images/hornetQ-banner_final.png
new file mode 100644
index 0000000..6388dff
Binary files /dev/null and 
b/docs/user-manual/en/images/hornetQ-banner_final.png differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/images/hornetQ_logo_600px.png
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/images/hornetQ_logo_600px.png 
b/docs/user-manual/en/images/hornetQ_logo_600px.png
new file mode 100644
index 0000000..b71f4ba
Binary files /dev/null and b/docs/user-manual/en/images/hornetQ_logo_600px.png 
differ

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/intercepting-operations.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/intercepting-operations.xml 
b/docs/user-manual/en/intercepting-operations.xml
new file mode 100644
index 0000000..60db274
--- /dev/null
+++ b/docs/user-manual/en/intercepting-operations.xml
@@ -0,0 +1,100 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+
+<chapter id="intercepting-operations">
+   <title>Intercepting Operations</title>
+   <para>HornetQ supports <emphasis>interceptors</emphasis> to intercept 
packets entering
+       and exiting the server. Incoming and outgoing interceptors are called for any packet
+       entering or exiting the server respectively. This allows custom code to 
be executed,
+       e.g. for auditing packets, filtering or other reasons. Interceptors can 
change the
+       packets they intercept. This makes interceptors powerful, but also 
potentially
+       dangerous.</para>
+   <section>
+      <title>Implementing The Interceptors</title>
+      <para>An interceptor must implement the <literal>Interceptor</literal> interface:</para>
+      <programlisting>
+package org.hornetq.api.core.interceptor;
+
+public interface Interceptor
+{   
+   boolean intercept(Packet packet, RemotingConnection connection) throws 
HornetQException;
+}</programlisting>
+      <para>The returned boolean value is important:</para>
+      <itemizedlist>
+         <listitem>
+            <para>if <literal>true</literal> is returned, the process 
continues normally</para>
+         </listitem>
+         <listitem>
+            <para>if <literal>false</literal> is returned, the process is 
aborted, no other interceptors
+                will be called and the packet will not be processed further by 
the server.</para>
+         </listitem>
+      </itemizedlist>
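+      <para>As a simple sketch (the class name is only illustrative and imports are omitted for
+         brevity), an interceptor that lets every packet through while logging it could look like
+         this:</para>
+      <programlisting>
+package org.hornetq.jms.example;
+
+public class SimpleLoggingInterceptor implements Interceptor
+{
+   public boolean intercept(Packet packet, RemotingConnection connection) throws HornetQException
+   {
+      System.out.println("Intercepted packet " + packet + " on connection " + connection);
+      // Returning true lets the packet continue to any remaining interceptors
+      // and on to normal processing; returning false would abort it.
+      return true;
+   }
+}</programlisting>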
+   </section>
+   <section>
+      <title>Configuring The Interceptors</title>
+      <para>Both incoming and outgoing interceptors are configured in
+          <literal>hornetq-configuration.xml</literal>:</para>
+      <programlisting>
+&lt;remoting-incoming-interceptors>
+   &lt;class-name>org.hornetq.jms.example.LoginInterceptor&lt;/class-name>
+   
&lt;class-name>org.hornetq.jms.example.AdditionalPropertyInterceptor&lt;/class-name>
+&lt;/remoting-incoming-interceptors></programlisting>
+      <programlisting>
+&lt;remoting-outgoing-interceptors>
+   &lt;class-name>org.hornetq.jms.example.LogoutInterceptor&lt;/class-name>
+   
&lt;class-name>org.hornetq.jms.example.AdditionalPropertyInterceptor&lt;/class-name>
+&lt;/remoting-outgoing-interceptors></programlisting>
+      <para>The interceptor classes (and their dependencies) must be added to 
the server classpath
+         to be properly instantiated and called.</para>
+   </section>
+   <section>
+      <title>Interceptors on the Client Side</title>
+      <para>The interceptors can also be run on the client side to intercept 
packets either sent by the
+         client to the server or by the server to the client. This is done by 
adding the interceptor to
+         the <code>ServerLocator</code> with the 
<code>addIncomingInterceptor(Interceptor)</code> or
+         <code>addOutgoingInterceptor(Interceptor)</code> methods.</para>
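+      <para>A brief sketch (how the <code>ServerLocator</code> is obtained is covered elsewhere and
+         assumed here; the interceptor class is the illustrative one from the previous
+         section):</para>
+      <programlisting>
+// locator is an org.hornetq.api.core.client.ServerLocator obtained in the usual way
+locator.addIncomingInterceptor(new SimpleLoggingInterceptor());
+locator.addOutgoingInterceptor(new SimpleLoggingInterceptor());</programlisting>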
+      <para>As noted above, if an interceptor returns <literal>false</literal> 
then the sending of the
+         packet is aborted, which means that no other interceptors will be called and the packet
+         will not be processed further by the client. Typically this process happens
transparently to the client
+         (i.e. it has no idea if a packet was aborted or not). However, in the 
case of an outgoing packet
+         that is sent in a <literal>blocking</literal> fashion a 
<literal>HornetQException</literal> will
+         be thrown to the caller. The exception is thrown because blocking 
sends provide reliability and
+         it is considered an error for them not to succeed. 
<literal>Blocking</literal> sends occur when,
+         for example, an application invokes 
<literal>setBlockOnNonDurableSend(true)</literal> or
+         <literal>setBlockOnDurableSend(true)</literal> on its 
<literal>ServerLocator</literal> or if an
+         application is using a JMS connection factory retrieved from JNDI 
that has either
+         <literal>block-on-durable-send</literal> or 
<literal>block-on-non-durable-send</literal>
+         set to <literal>true</literal>. Blocking is also used for packets 
dealing with transactions (e.g.
+         commit, roll-back, etc.). The <literal>HornetQException</literal> 
thrown will contain the name
+         of the interceptor that returned false.</para>
+      <para>As on the server, the client interceptor classes (and their 
dependencies) must be added to the classpath
+         to be properly instantiated and invoked.</para>
+   </section>
+   <section>
+      <title>Example</title>
+      <para>See <xref linkend="examples.interceptor" /> for an example which
+         shows how to use interceptors to add properties to a message on the 
server.</para>
+   </section>
+</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/8ecd255f/docs/user-manual/en/interoperability.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/interoperability.xml 
b/docs/user-manual/en/interoperability.xml
new file mode 100644
index 0000000..e4261d7
--- /dev/null
+++ b/docs/user-manual/en/interoperability.xml
@@ -0,0 +1,288 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- 
============================================================================= 
-->
+<!-- Copyright © 2009 Red Hat, Inc. and others.                               
     -->
+<!--                                                                           
    -->
+<!-- The text of and illustrations in this document are licensed by Red Hat 
under  -->
+<!-- a Creative Commons Attribution–Share Alike 3.0 Unported license 
("CC-BY-SA"). -->
+<!--                                                                           
    -->
+<!-- An explanation of CC-BY-SA is available at                                
    -->
+<!--                                                                           
    -->
+<!--            http://creativecommons.org/licenses/by-sa/3.0/.                
    -->
+<!--                                                                           
    -->
+<!-- In accordance with CC-BY-SA, if you distribute this document or an 
adaptation -->
+<!-- of it, you must provide the URL for the original version.                 
    -->
+<!--                                                                           
    -->
+<!-- Red Hat, as the licensor of this document, waives the right to enforce,   
    -->
+<!-- and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent    
    -->
+<!-- permitted by applicable law.                                              
    -->
+<!-- 
============================================================================= 
-->
+
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" 
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd"; [
+<!ENTITY % BOOK_ENTITIES SYSTEM "HornetQ_User_Manual.ent">
+%BOOK_ENTITIES;
+]>
+<chapter id="interoperability">
+    <title>Interoperability</title>
+    <section id="stomp">
+        <title>Stomp</title>
+        <para><ulink url="http://stomp.github.com/";>Stomp</ulink> is a 
text-orientated wire protocol that allows
+            Stomp clients to communicate with Stomp Brokers. HornetQ now 
supports Stomp 1.0, 1.1 and 1.2.</para>
+        <para>Stomp clients are available for
+        several languages and platforms making it a good choice for 
interoperability.</para>
+        <section id="stomp.native">
+          <title>Native Stomp support</title>
+          <para>HornetQ provides native support for Stomp. To be able to send 
and receive Stomp messages,
+            you must configure a <literal>NettyAcceptor</literal> with a 
<literal>protocols</literal>
+            parameter set to have <literal>stomp</literal>:</para>
+<programlisting>
+&lt;acceptor name="stomp-acceptor">
+   
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols"  value="STOMP"/>
+   &lt;param key="port"  value="61613"/>
+&lt;/acceptor></programlisting>
+          <para>With this configuration, HornetQ will accept Stomp connections on
+            port <literal>61613</literal> (which is the default Stomp port).</para>
+          <para>See the <literal>stomp</literal> example which shows how to 
configure a HornetQ server with Stomp.</para>
+          <section>
+            <title>Limitations</title>
+            <para>Message acknowledgements are not transactional. The ACK frame cannot be part of a transaction
+              (it will be ignored if its <literal>transaction</literal> header 
is set).</para>
+          </section>
+          <section>
+            <title>Stomp 1.1/1.2 Notes</title>
+            <section>
+                               <title>Virtual Hosting</title>
+                <para>HornetQ currently doesn't support virtual hosting, which means the 'host' header
+                in the CONNECT frame will be ignored.</para>
+            </section>
+            <section>
+                               <title>Heart-beating</title>
+                <para>HornetQ specifies a minimum value for both client and server heart-beat
+                intervals. The minimum interval for both client and server heart-beats is 500
+                milliseconds. That means if a client sends a CONNECT frame with heart-beat values
+                lower than 500, the server will default the value to 500 milliseconds regardless of
+                the values of the 'heart-beat' header in the frame.</para>
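+                <para>For illustration, a Stomp 1.1 CONNECT frame requesting 1-second heart-beats
+                in both directions would look like the frame below (the host value is only an
+                example); per the minimum above, values below 500 would be treated as 500
+                milliseconds by the server:</para>
+                <programlisting>
+CONNECT
+accept-version:1.1
+host:localhost
+heart-beat:1000,1000
+
+^@</programlisting>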
+            </section>
+          </section>
+        </section>
+
+        <section>
+          <title>Mapping Stomp destinations to HornetQ addresses and 
queues</title>
+          <para>Stomp clients deal with <emphasis>destinations</emphasis> 
when sending messages and subscribing.
+            Destination names are simply strings which are mapped to some form 
of destination on the 
+            server - how the server translates these is left to the server 
implementation.</para>
+           <para>In HornetQ, these destinations are mapped to 
<emphasis>addresses</emphasis> and <emphasis>queues</emphasis>.
+            When a Stomp client sends a message (using a 
<literal>SEND</literal> frame), the specified destination is mapped
+            to an address.
+            When a Stomp client subscribes (or unsubscribes) for a destination 
(using a <literal>SUBSCRIBE</literal>
+            or <literal>UNSUBSCRIBE</literal> frame), the destination is 
mapped to a HornetQ queue.</para>
+        </section>
+      <section>
+        <title>STOMP and connection-ttl</title>
+        <para>Well-behaved STOMP clients will always send a DISCONNECT frame before closing their connections. In this case the server
+          will clean up any server-side resources such as sessions and consumers synchronously. However if STOMP clients exit without
+        sending a DISCONNECT frame, or if they crash, the server will have no way of knowing immediately whether the client is still alive
+        or not. STOMP connections therefore default to a connection-ttl value of 1 minute (see the chapter on <link linkend="connection-ttl">connection-ttl</link>
+        for more information). This value can be overridden using connection-ttl-override.
+        </para>
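+        <para>For reference, a graceful shutdown from a Stomp client looks like the following frame
+          (the <literal>receipt</literal> header is optional and its value is purely illustrative):</para>
+<programlisting>
+DISCONNECT
+receipt:77
+
+^@</programlisting>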
+        <para>If you need a specific connection-ttl for your stomp connections 
without affecting the connection-ttl-override setting, you
+        can configure your stomp acceptor with the "connection-ttl" property, 
which is used to set the ttl for connections that are 
+        created from that acceptor. For example:
+        </para>
+<programlisting>
+&lt;acceptor name="stomp-acceptor">
+   
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols"  value="STOMP"/>
+   &lt;param key="port"  value="61613"/>
+   &lt;param key="connection-ttl"  value="20000"/>
+&lt;/acceptor></programlisting>
+        <para>The above configuration will make sure that any stomp connection 
that is created from that acceptor will have its 
+        connection-ttl set to 20 seconds.</para>
+
+        <note><para>Please note that the STOMP protocol version 1.0 does not contain any heart-beat frame. It is therefore the user's
+        responsibility to make sure data is sent within the connection-ttl period or the server will assume the client is dead and clean up server-side
+        resources. With <literal>Stomp 1.1</literal> users can use heart-beats to maintain the life cycle of their Stomp
+        connections.</para></note>
+      </section>
+      
+        <section>
+          <title>Stomp and JMS interoperability</title>
+          <section>
+            <title>Using JMS destinations</title>
+            <para>As explained in <xref linkend="jms-core-mapping" />, JMS 
destinations are also mapped to HornetQ addresses and queues.
+              If you want to use Stomp to send messages to JMS destinations, 
the Stomp destinations must follow the same convention:</para>
+            <itemizedlist>
+              <listitem>
+                <para>send or subscribe to a JMS <emphasis>Queue</emphasis> by prefixing the queue name with <literal>jms.queue.</literal>.</para>
+                <para>For example, to send a message to the 
<literal>orders</literal> JMS Queue, the Stomp client must send the 
frame:</para>
+                <programlisting>
+SEND
+destination:jms.queue.orders
+
+hello queue orders
+^@</programlisting>
+              </listitem>
+              <listitem>
+                <para>send or subscribe to a JMS <emphasis>Topic</emphasis> by prefixing the topic name with <literal>jms.topic.</literal>.</para>
+                <para>For example, to subscribe to the <literal>stocks</literal> JMS Topic, the Stomp client must send the frame:</para>
+                <programlisting>
+SUBSCRIBE
+destination:jms.topic.stocks
+
+^@</programlisting>
+              </listitem>
+             </itemizedlist>
+           </section>
+
+           <section>
+             <title>Sending and consuming Stomp messages from JMS or the HornetQ Core API</title>
+             <para>Stomp is mainly a text-oriented protocol. To make it simpler to interoperate with JMS and the HornetQ Core API,
+               our Stomp implementation checks for the presence of the <literal>content-length</literal> header to decide how to map a Stomp message
+               to a JMS Message or a Core message.
+             </para>
+             <para>If the Stomp message does <emphasis>not</emphasis> have a 
<literal>content-length</literal> header, it will be mapped to a JMS 
<emphasis>TextMessage</emphasis>
+               or a Core message with a <emphasis>single nullable SimpleString 
in the body buffer</emphasis>.</para>
+             <para>Alternatively, if the Stomp message 
<emphasis>has</emphasis> a <literal>content-length</literal> header, 
+               it will be mapped to a JMS <emphasis>BytesMessage</emphasis>
+               or a Core message with a <emphasis>byte[] in the body 
buffer</emphasis>.</para>
+             <para>The same logic applies when mapping a JMS message or a Core 
message to Stomp. A Stomp client can check the presence
+                of the <literal>content-length</literal> header to determine 
the type of the message body (String or bytes).</para>
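+             <para>The following is a minimal sketch (assuming a JMS
+               <literal>MessageConsumer</literal> named <literal>consumer</literal> has already been
+               created) of how a JMS client might distinguish the two cases:</para>
+<programlisting>
+import javax.jms.BytesMessage;
+import javax.jms.Message;
+import javax.jms.TextMessage;
+
+...
+
+Message message = consumer.receive(5000); // wait up to 5 seconds
+if (message instanceof TextMessage)
+{
+   // the Stomp frame had no content-length header
+   String text = ((TextMessage) message).getText();
+   System.out.println("Received text: " + text);
+}
+else if (message instanceof BytesMessage)
+{
+   // the Stomp frame carried a content-length header
+   BytesMessage bytesMessage = (BytesMessage) message;
+   byte[] data = new byte[(int) bytesMessage.getBodyLength()];
+   bytesMessage.readBytes(data);
+   System.out.println("Received " + data.length + " bytes");
+}</programlisting>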
+          </section>
+          <section>
+            <title>Message IDs for Stomp messages</title>
+            <para>When receiving Stomp messages via a JMS consumer or a QueueBrowser, the messages have
+            no properties like JMSMessageID by default. However this may be inconvenient for
+            clients that need an ID for their own purposes. HornetQ Stomp provides a parameter to enable
+            a message ID on each incoming Stomp message. If you want each Stomp message to have a unique ID,
+            just set the <literal>stomp-enable-message-id</literal> parameter to true. For example:</para>
+<programlisting>
+&lt;acceptor name="stomp-acceptor">
+   
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols" value="STOMP"/>
+   &lt;param key="port" value="61613"/>
+   &lt;param key="stomp-enable-message-id" value="true"/>
+&lt;/acceptor></programlisting>
+            <para>When the server starts with the above setting, each Stomp message sent through this
+            acceptor will have an extra property added. The property key is <literal>
+            hq-message-id</literal> and the value is a String representation of the internal (long type)
+            message id, prefixed with "<literal>STOMP</literal>", like:
+<programlisting>
+hq-message-id : STOMP12345</programlisting>
+            If <literal>stomp-enable-message-id</literal> is not specified in the configuration, the default
+            is <literal>false</literal>.</para>
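+            <para>A minimal sketch of how a JMS client could then read that property (assuming a
+            received <literal>Message</literal> named <literal>message</literal>):</para>
+<programlisting>
+// the property is added by the Stomp acceptor as described above
+String stompMessageId = message.getStringProperty("hq-message-id");
+if (stompMessageId != null)
+{
+   System.out.println("Stomp message id: " + stompMessageId); // e.g. STOMP12345
+}</programlisting>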
+          </section>
+          <section>
+            <title>Handling of Large Messages with Stomp</title>
+            <para>Stomp clients may send frames with very large bodies which can exceed the size of the HornetQ
+            server's internal buffer, causing unexpected errors. To prevent this situation from happening,
+            HornetQ provides a Stomp configuration attribute <literal>stomp-min-large-message-size</literal>.
+            This attribute can be configured inside a Stomp acceptor, as a parameter. For example:</para>
+<programlisting>
+&lt;acceptor name="stomp-acceptor">
+   
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols" value="STOMP"/>
+   &lt;param key="port" value="61613"/>
+   &lt;param key="stomp-min-large-message-size" value="10240"/>
+&lt;/acceptor></programlisting>
+            <para>The type of this attribute is integer. When this attribute is configured, the HornetQ server
+            will check the size of the body of each Stomp frame that arrives from connections established with
+            this acceptor. If the size of the body is equal to or greater than the value of
+            <literal>stomp-min-large-message-size</literal>, the message will be persisted as a large message.
+            When a large message is delivered to a Stomp consumer, the HornetQ server will automatically
+            handle the conversion from a large message to a normal message, before sending it to the client.</para>
+            <para>If a large message is compressed, the server will decompress it before sending it to
+            Stomp clients. The default value of <literal>stomp-min-large-message-size</literal> is the same
+            as the default value of <link linkend="large-messages.core.config">min-large-message-size</link>.</para>
+          </section>
+        </section>
+        
+        <section id="stomp.websockets">
+         <title>Stomp Over Web Sockets</title>
+         <para>HornetQ also supports Stomp over <ulink url="http://dev.w3.org/html5/websockets/">Web Sockets</ulink>. Modern web browsers which support Web Sockets can send and receive
+            Stomp messages from HornetQ.</para>
+         <para>To enable Stomp over Web Sockets, you must configure a <literal>NettyAcceptor</literal> with a <literal>protocols</literal>
+            parameter set to <literal>stomp_ws</literal>:</para>
+         <programlisting>
+&lt;acceptor name="stomp-ws-acceptor">
+   
&lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols" value="STOMP_WS"/>
+   &lt;param key="port" value="61614"/>
+&lt;/acceptor></programlisting>
+         <para>With this configuration, HornetQ will accept Stomp connections over Web Sockets on
+            port <literal>61614</literal> with the URL path <literal>/stomp</literal>.
+            Web browsers can then connect to <literal>ws://&lt;server>:61614/stomp</literal> using a Web Socket to send and receive Stomp
+            messages.</para>
+         <para>A companion JavaScript library to ease client-side development is available from
+            <ulink url="http://github.com/jmesnil/stomp-websocket">GitHub</ulink> (please see
+            its <ulink url="http://jmesnil.net/stomp-websocket/doc/">documentation</ulink> for a complete description).</para>
+         <para>The <literal>stomp-websockets</literal> example shows how to configure a HornetQ server to have web browsers and Java
+            applications exchange messages on a JMS topic.</para>
+        </section>
+
+        <section id="stompconnect">
+          <title>StompConnect</title>
+          <para><ulink url="http://stomp.codehaus.org/StompConnect">StompConnect</ulink> is a server that
+            can act as a Stomp broker and proxy the Stomp protocol to the standard JMS API.
+            Consequently, using StompConnect it is possible to turn HornetQ into a Stomp Broker and
+            use any of the available Stomp clients. These include clients written in C, C++, C#,
+            .NET, etc.</para>
+          <para>To run StompConnect, first start the HornetQ server and make sure that it is using
+            JNDI.</para>
+          <para>StompConnect requires the file <literal>jndi.properties</literal> to be available on the
+            classpath. This should look something like:</para>
+          <programlisting>
+java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
+java.naming.provider.url=jnp://localhost:1099
+java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces</programlisting>
+          <para>Make sure this file is in the classpath along with the 
StompConnect jar and the
+            HornetQ jars and simply run <literal>java 
org.codehaus.stomp.jms.Main</literal>.</para>
+        </section>
+        
+    </section>
+    <section>
+        <title>REST</title>
+        <para>Please see <xref linkend="rest"/></para>
+    </section>
+    <section>
+        <title>AMQP</title>
+        <para>HornetQ supports the <ulink url="https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=amqp">AMQP 1.0</ulink>
+        specification. To enable AMQP you must configure a Netty Acceptor to receive AMQP clients, like so:</para>
+        <programlisting>
+&lt;acceptor name="amqp-acceptor">
+   &lt;factory-class>org.hornetq.core.remoting.impl.netty.NettyAcceptorFactory&lt;/factory-class>
+   &lt;param key="protocols"  value="AMQP"/>
+   &lt;param key="port"  value="5672"/>
+&lt;/acceptor>
+        </programlisting>
+        <para>HornetQ will then accept AMQP 1.0 clients on port 5672, which is the default AMQP port.</para>
+        <para>There are 2 AMQP examples available, proton-j and proton-ruby, which use the Qpid Java and Ruby clients
+        respectively.</para>
+        <section>
+            <title>AMQP and security</title>
+        <para>The HornetQ Server accepts AMQP SASL Authentication and will use 
this to map onto the underlying session created
+            for the connection so you can use the normal HornetQ security 
configuration.</para>
+        </section>
+        <section>
+            <title>AMQP Links</title>
+        <para>An AMQP Link is a uni-directional transport for messages between a source and a target, i.e. a client and the
+        HornetQ Broker. A link will have an endpoint, of which there are 2 kinds, a Sender and a Receiver. At the Broker a
+            Sender will have its messages converted into a HornetQ Message and forwarded to its destination or target. A
+        Receiver will map onto a HornetQ Server Consumer and convert HornetQ messages back into AMQP messages before they are delivered.</para>
+        </section>
+        <section>
+            <title>AMQP and destinations</title>
+            <para>If an AMQP Link is dynamic then a temporary queue will be created and either the remote source or remote
+                target address will be set to the name of the temporary queue. If the Link is not dynamic then the address
+                of the remote target or source will be used for the queue. If this does not exist then an exception will be sent.</para>
+            <note><para>For the next version we will add a flag to auto-create durable queues, but for now you will have to add them via
+                the configuration.</para></note>
+        </section>
+        <section>
+            <title>AMQP and Coordinations - Handling Transactions</title>
+            <para>An AMQP link's target can also be a Coordinator; the Coordinator is used to handle transactions. If a
+            coordinator is used, the underlying HornetQ Server session will be transacted and will be either rolled back
+                or committed via the coordinator.</para>
+            <note><para>AMQP allows the use of multiple transactions per session, <literal>amqp:multi-txns-per-ssn</literal>,
+                however in this version HornetQ will only support a single transaction per session.</para></note>
+        </section>
+    </section>
+</chapter>
