Github user alopresto commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/491#discussion_r65753157
  
    --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc ---
    @@ -485,98 +485,137 @@ It is preferable to request upstream/downstream 
systems to switch to https://cwi
     Clustering Configuration
     ------------------------
     
    -This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster. In the future, we hope to provide 
supplemental documentation that covers the NiFi Cluster Architecture in depth.
    -
    -The design of NiFi clustering is a simple master/slave model where there 
is a master and one or more slaves.
    -While the model is that of master and slave, if the master dies, the 
slaves are all instructed to continue operating
    -as they were to ensure the dataflow remains live. The absence of the 
master simply means new slaves cannot join the
    -cluster and cluster flow changes cannot occur until the master is 
restored. In NiFi clustering, we call the master
    -the NiFi Cluster Manager (NCM), and the slaves are called Nodes. See a 
full description of each in the Terminology section below.
    +This section provides a quick overview of NiFi Clustering and instructions 
on how to set up a basic cluster.
    +In the future, we hope to provide supplemental documentation that covers 
the NiFi Cluster Architecture in depth.
    +
    +NiFi employs a Zero-Master Clustering paradigm. Each of the nodes in the 
cluster performs the same tasks on
    +the data but each operates on a different set of data. One of the nodes is 
automatically elected (via Apache
    +ZooKeeper) as the Cluster Coordinator. All nodes in the cluster will then 
send heartbeat/status information
    +to this node, and this node is responsible for disconnecting nodes that do 
not report any heartbeat status
    +for some amount of time. Additionally, when a new node elects to join the 
cluster, the new node must first
    +connect to the currently-elected Cluster Coordinator in order to obtain 
the most up-to-date flow. If the Cluster
    +Coordinator determines that the node is allowed to join (based on its 
configured Firewall file), the current
    +flow is provided to that node, and that node is able to join the cluster, 
assuming that the node's copy of the
    +flow matches the copy provided by the Cluster Coordinator. If the node's 
version of the flow configuration differs
    +from that of the Cluster Coordinator's, the node will not join the cluster.
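    +
    +For example, a node's _nifi.properties_ might reference a firewall file as shown below (a minimal sketch, assuming
    +the Firewall file is referenced through the property shown and simply lists one permitted hostname or IP address
    +per line; the file path is illustrative):
    +
    +[source,properties]
    +----
    +# Only hosts listed in the referenced file are allowed to join the cluster (path is an example)
    +nifi.cluster.firewall.file=./conf/cluster-firewall.txt
    +----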
     
     *Why Cluster?* +
     
    -NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not enough to process the amount of data 
they have. So, one solution is to run the same dataflow on multiple NiFi 
servers. However, this creates a management problem, because each time DFMs 
want to change or update the dataflow, they must make those changes on each 
server and then monitor each server individually. By clustering the NiFi 
servers, it's possible to have that increased processing capability along with 
a single interface through which to make dataflow changes and monitor the 
dataflow. Clustering allows the DFM to make each change only once, and that 
change is then replicated to all the nodes of the cluster. Through the single 
interface, the DFM may also monitor the health and status of all the nodes.
    +NiFi Administrators or Dataflow Managers (DFMs) may find that using one 
instance of NiFi on a single server is not
    +enough to process the amount of data they have. So, one solution is to run 
the same dataflow on multiple NiFi servers.
    +However, this creates a management problem, because each time DFMs want to 
change or update the dataflow, they must make
    +those changes on each server and then monitor each server individually. By 
clustering the NiFi servers, it's possible to
    +have that increased processing capability along with a single interface 
through which to make dataflow changes and monitor
    +the dataflow. Clustering allows the DFM to make each change only once, and 
that change is then replicated to all the nodes
    +of the cluster. Through the single interface, the DFM may also monitor the 
health and status of all the nodes.
     
     NiFi Clustering is unique and has its own terminology. It's important to 
understand the following terms before setting up a cluster.
     
     [template="glossary", id="terminology"]
     *Terminology* +
     
    -*NiFi Cluster Manager*: A NiFi Cluster Manager (NCM) is an instance of 
NiFi that provides the sole management point for the cluster. It communicates 
dataflow changes to the nodes and receives health and status information from 
the nodes. It also ensures that a uniform dataflow is maintained across the 
cluster.  When DFMs manage a dataflow in a cluster, they do so through the User 
Interface of the NCM (i.e., via the URL of the NCM's User Interface). 
Fundamentally, the NCM keeps the state of the cluster consistent.
    -
    -*Nodes*: Each cluster is made up of the NCM and one or more nodes. The 
nodes do the actual data processing. (The NCM does not process any data; all 
data runs through the nodes.)  While nodes are connected to a cluster, the DFM 
may not access the User Interface for any of the individual nodes. The User 
Interface of a node may only be accessed if the node is manually removed from 
the cluster.
    -
    -*Primary Node*: Every cluster has one Primary Node. On this node, it is 
possible to run "Isolated Processors" (see below). By default, the NCM will 
elect the first node that connects to the cluster as the Primary Node; however, 
the DFM may select a new node as the Primary Node in the Cluster Management 
page of the User Interface if desired. If the cluster restarts, the NCM will 
"remember" which node was the Primary Node and wait for that node to re-connect 
before allowing the DFM to make any changes to the dataflow. The ADMIN may 
adjust how long the NCM waits for the Primary Node to reconnect by adjusting 
the property _nifi.cluster.manager.safemode.duration_ in the _nifi.properties_ 
file, which is discussed in the <<system_properties>> section of this document.
    -
    -*Isolated Processors*: In a NiFi cluster, the same dataflow runs on all 
the nodes. As a result, every component in the flow runs on every node. 
However, there may be cases when the DFM would not want every processor to run 
on every node. The most common case is when using a processor that communicates 
with an external service using a protocol that does not scale well. For 
example, the GetSFTP processor pulls from a remote directory, and if the 
GetSFTP on every node in the cluster tries simultaneously to pull from the same 
remote directory, there could be race conditions. Therefore, the DFM could 
configure the GetSFTP on the Primary Node to run in isolation, meaning that it 
only runs on that node. It could pull in data and -with the proper dataflow 
configuration- load-balance it across the rest of the nodes in the cluster. 
Note that while this feature exists, it is also very common to simply use a 
standalone NiFi instance to pull data and feed it to the cluster. It just 
depends o
 n the resources available and how the Administrator decides to configure the 
cluster.
    -
    -*Heartbeats*: The nodes communicate their health and status to the NCM via 
"heartbeats", which let the NCM know they are still connected to the cluster 
and working properly. By default, the nodes emit heartbeats to the NCM every 5 
seconds, and if the NCM does not receive a heartbeat from a node within 45 
seconds, it disconnects the node due to "lack of heartbeat". (The 5-second and 
45-second settings are configurable in the _nifi.properties_ file. See the 
<<system_properties>> section of this document for more information.) The 
reason that the NCM disconnects the node is because the NCM needs to ensure 
that every node in the cluster is in sync, and if a node is not heard from 
regularly, the NCM cannot be sure it is still in sync with the rest of the 
cluster. If, after 45 seconds, the node does send a new heartbeat, the NCM will 
automatically reconnect the node to the cluster. Both the disconnection due to 
lack of heartbeat and the reconnection once a heartbeat is received are re
 ported to the DFM in the NCM's User Interface.
    +*NiFi Cluster Coordinator*: A NiFi Cluster Coordinator is the node in a NiFi cluster that is responsible for carrying out
    +tasks for managing which nodes are allowed in the cluster and providing 
the most up-to-date flow to newly joining nodes. When a
    +DataFlow Manager manages a dataflow in a cluster, they are able to do so 
through the User Interface of any node in the cluster. Any
    +change made is then replicated to all nodes in the cluster.
    +
    +*Nodes*: Each cluster is made up of one or more nodes. The nodes do the 
actual data processing.
    +
    +*Primary Node*: Every cluster has one Primary Node. On this node, it is 
possible to run "Isolated Processors" (see below).
    +ZooKeeper is used to automatically elect a Primary Node. If that node 
disconnects from the cluster for any reason, a new
    +Primary Node will automatically be elected. Users can determine which node 
is currently elected as the Primary Node by
    +looking at the Cluster Management page of the User Interface.
    +
    +*Isolated Processors*: In a NiFi cluster, the same dataflow runs on all 
the nodes. As a result, every component in the flow
    +runs on every node. However, there may be cases when the DFM would not 
want every processor to run on every node. The most
    +common case is when using a processor that communicates with an external 
service using a protocol that does not scale well.
    +For example, the GetSFTP processor pulls from a remote directory, and if the GetSFTP Processor on every node in the
    +cluster tries simultaneously to pull from the same remote directory, there could be race conditions. Therefore, the DFM could
    +configure the GetSFTP on the Primary Node to run in isolation, meaning that it only runs on that node. It could pull in data and,
    +with the proper dataflow configuration, load-balance it across the rest of the nodes in the cluster. Note that while this
    +feature exists, it is also very common to simply use a standalone NiFi 
instance to pull data and feed it to the cluster.
    +It just depends on the resources available and how the Administrator 
decides to configure the cluster.
    +
    +*Heartbeats*: The nodes communicate their health and status to the 
currently elected Cluster Coordinator via "heartbeats",
    +which let the Coordinator know they are still connected to the cluster and 
working properly. By default, the nodes emit
    +heartbeats every 5 seconds, and if the Cluster Coordinator does not 
receive a heartbeat from a node within 40 seconds, it
    +disconnects the node due to "lack of heartbeat". (The 5-second setting is 
configurable in the _nifi.properties_ file.
    +See the <<system_properties>> section of this document for more 
information.) The reason that the Cluster Coordinator
    +disconnects the node is because the Coordinator needs to ensure that every 
node in the cluster is in sync, and if a node
    +is not heard from regularly, the Coordinator cannot be sure it is still in 
sync with the rest of the cluster. If, after
    +40 seconds, the node does send a new heartbeat, the Coordinator will 
automatically reconnect the node to the cluster.
    +Both the disconnection due to lack of heartbeat and the reconnection once 
a heartbeat is received are reported to the DFM
    +in the User Interface.
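    +
    +For example, the heartbeat interval can be tuned in _nifi.properties_ (a minimal sketch, assuming the interval is
    +exposed through the cluster property shown; see <<system_properties>> for the authoritative list, and note that the
    +value shown is simply the 5-second default described above):
    +
    +[source,properties]
    +----
    +# How often each node sends a heartbeat to the currently elected Cluster Coordinator
    +nifi.cluster.protocol.heartbeat.interval=5 sec
    +----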
     
     *Communication within the Cluster* +
     
    -As noted, the nodes communicate with the NCM via heartbeats. The 
communication that allows the nodes to find the NCM may be set up as multicast 
or unicast; this is configured in the _nifi.properties_ file (See 
<<system_properties>> ). By default, unicast is used. It is important to note 
that the nodes in a NiFi cluster are not aware of each other. They only 
communicate with the NCM. Therefore, if one of the nodes goes down, the other 
nodes in the cluster will not automatically pick up the load of the missing 
node. It is possible for the DFM to configure the dataflow for failover 
contingencies; however, this is dependent on the dataflow design and does not 
happen automatically.
    +As noted, the nodes communicate with the Cluster Coordinator via 
heartbeats. When a Cluster Coordinator is elected, it updates
    +a well-known ZNode in Apache ZooKeeper with its connection information so 
that nodes understand where to send heartbeats. If one
    +of the nodes goes down, the other nodes in the cluster will not 
automatically pick up the load of the missing node. It is possible
    +for the DFM to configure the dataflow for failover contingencies; however, 
this is dependent on the dataflow design and does not
    +happen automatically.
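    +
    +For example, an administrator who wants to confirm that an election has taken place can inspect ZooKeeper directly
    +with the standard ZooKeeper CLI (a hedged sketch only; the ZNode path shown is hypothetical and depends on the
    +configured ZooKeeper Root Node):
    +
    +[source]
    +----
    +$ ./zkCli.sh -server localhost:2181
    +[zk: localhost:2181(CONNECTED) 0] ls /nifi/leaders
    +----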
    +
    +When the DFM makes changes to the dataflow, the node that receives the 
request to change the flow communicates those changes to all
    +nodes and waits for each node to respond, indicating that it has made the 
change on its local flow.
     
    -When the DFM makes changes to the dataflow, the NCM communicates those 
changes to the nodes and waits for each node to respond, indicating that it has 
made the change on its local flow. If the DFM wants to make another change, the 
NCM will only allow this to happen once all the nodes have acknowledged that 
they've implemented the last change. This is a safeguard to ensure that all the 
nodes in the cluster have the correct and up-to-date flow.
     
     *Dealing with Disconnected Nodes* +
     
    -A DFM may manually disconnect a node from the cluster. But if a node 
becomes disconnected for any other reason (such as due to lack of heartbeat), 
the NCM will show a bulletin on the User Interface, and the DFM will not be 
able to make any changes to the dataflow until the issue of the disconnected 
node is resolved. The DFM or the Administrator will need to troubleshoot the 
issue with the node and resolve it before any new changes may be made to the 
dataflow. However, it is worth noting that just because a node is disconnected 
does not mean that it is not working; it just means that the NCM cannot 
communicate with the node.
    +A DFM may manually disconnect a node from the cluster. But if a node 
becomes disconnected for any other reason (such as due to lack of heartbeat),
    +the Cluster Coordinator will show a bulletin on the User Interface. The 
DFM will not be able to make any changes to the dataflow until the issue
    +of the disconnected node is resolved. The DFM or the Administrator will 
need to troubleshoot the issue with the node and resolve it before any
    +new changes may be made to the dataflow. However, it is worth noting that 
just because a node is disconnected does not mean that it is not working;
    +this may happen for a few reasons, including that the node is unable to 
communicate with the Cluster Coordinator due to network problems.
     
     
     *Basic Cluster Setup* +
     
    -This section describes the setup for a simple two-node, non-secure, 
unicast cluster comprised of three instances of NiFi:
    -
    -* The NCM
    -* Node 1
    -* Node 2
    +This section describes the setup for a simple three-node, non-secure 
cluster comprised of three instances of NiFi.
     
    -Administrators may install each instance on a separate server; however, it 
is also perfectly fine to install the NCM and one of the nodes on the same 
server, as the NCM is very lightweight. Just keep in mind that the ports 
assigned to each instance must not collide if the NCM and one of the nodes 
share the same server.
    +For each instance, certain properties in the _nifi.properties_ file will 
need to be updated. In particular, the Web and Clustering properties
    +should be evaluated for your situation and adjusted accordingly. All the 
properties are described in the <<system_properties>> section of this
    +guide; however, in this section, we will focus on the minimum properties 
that must be set for a simple cluster.
     
    -For each instance, certain properties in the _nifi.properties_ file will 
need to be updated. In particular, the Web and Clustering properties should be 
evaluated for your situation and adjusted accordingly. All the properties are 
described in the <<system_properties>> section of this guide; however, in this 
section, we will focus on the minimum properties that must be set for a simple 
cluster.
    +For all three instances, the Cluster Common Properties can be left with 
the default settings. Note, however, that if you change these settings,
    +they must be set the same on every instance in the cluster.
     
    -For all three instances, the Cluster Common Properties can be left with 
the default settings. Note, however, that if you change these settings, they 
must be set the same on every instance in the cluster (NCM and nodes).
    +For each Node, the minimum properties to configure are as follows (a sample excerpt is shown after this list):
     
    -For the NCM, the minimum properties to configure are as follows:
    -
    -* Under the Web Properties, set either the http or https port that you 
want the NCM to run on. If the NCM and one of the nodes are on the same server, 
make sure this port is different from the web port used by the node.
    -* Under the Cluster Manager Properties, set the following:
    -** nifi.cluster.is.manager - Set this to _true_.
    -** nifi.cluster.manager.protocol.port - Set this to an open port that is 
higher than 1024 (anything lower requires root). Take note of this setting, as 
you will need to reference it when you set up the nodes.
    -
    -For Node 1, the minimum properties to configure are as follows:
    -
    -* Under the Web Properties, set either the http or https port that you 
want Node 1 to run on. If the NCM is running on the same server, choose a 
different web port for Node 1. Also, consider whether you need to set the http 
or https host property.
    -* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
    -* Under Cluster Node Properties, set the following:
    -** nifi.cluster.is.node - Set this to _true_.
    -** nifi.cluster.node.address - Set this to the fully qualified hostname of 
the node. If left blank, it defaults to "localhost".
    -** nifi.cluster.node.protocol.port - Set this to an open port that is 
higher than 1024 (anything lower requires root). If Node 1 and the NCM are on 
the same server, make sure this port is different from the 
nifi.cluster.manager.protocol.port.
    -** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
    -** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly 
the same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
    -
    -For Node 2, the minimum properties to configure are as follows:
    -
    -* Under the Web Properties, set either the http or https port that you 
want Node 2 to run on. Also, consider whether you need to set the http or https 
host property.
    -* Under the State Management section, set the 
`nifi.state.management.provider.cluster` property to the identifier of the 
Cluster State Provider. Ensure that the Cluster State Provider has been 
configured in the _state-management.xml_ file. See <<state_providers>> for more 
information.
    -* Under the Cluster Node Properties, set the following:
    +* Under the _Web Properties_ section, set either the http or https port 
that you want the Node to run on.
    +  Also, consider whether you need to set the http or https host property.
    +* Under the _State Management_ section, set the `nifi.state.management.provider.cluster` property
    +  to the identifier of the Cluster State Provider. Ensure that the Cluster 
State Provider has been
    +  configured in the _state-management.xml_ file. See <<state_providers>> 
for more information.
    +* Under the _Cluster Node Properties_ section, set the following:
     ** nifi.cluster.is.node - Set this to _true_.
     ** nifi.cluster.node.address - Set this to the fully qualified hostname of 
the node. If left blank, it defaults to "localhost".
     ** nifi.cluster.node.protocol.port - Set this to an open port that is 
higher than 1024 (anything lower requires root).
    -** nifi.cluster.node.unicast.manager.address - Set this to the NCM's fully 
qualified hostname.
    -** nifi.cluster.node.unicast.manager.protocol.port - Set this to exactly 
the same port that was set on the NCM for the property 
nifi.cluster.manager.protocol.port.
    -
    -Now, it is possible to start up the cluster. Technically, it does not 
matter which instance starts up first. However, you could start the NCM first, 
then Node 1 and then Node 2. Since the first node that connects is 
automatically elected as the Primary Node, this sequence should create a 
cluster where Node 1 is the Primary Node. Navigate to the URL for the NCM in 
your web browser, and the User Interface should look similar to the following:
    -
    -image:ncm.png["NCM User Interface", width=940]
    +** nifi.cluster.node.protocol.threads - The number of threads that should 
be used to communicate with other nodes in the cluster. This property
    +   defaults to 10, but for large clusters, this value may need to be 
larger.
    +** nifi.zookeeper.connect.string - The Connect String that is needed to connect to Apache ZooKeeper. This is a comma-separated list
    +   of hostname:port pairs. For example, 
localhost:2181,localhost:2182,localhost:2183. This should contain a list of all 
ZooKeeper
    +   instances in the ZooKeeper quorum. 
    +** nifi.zookeeper.root.node - The root ZNode that should be used in 
ZooKeeper. ZooKeeper provides a directory-like structure
    +   for storing data. Each 'directory' in this structure is referred to as 
a ZNode. This denotes the root ZNode, or 'directory',
    +   that should be used for storing data. The default value is _/root_. 
This is important to set correctly, as which cluster
    +   the NiFi instance attempts to join is determined by which ZooKeeper 
instance it connects to and the ZooKeeper Root Node
    +   that is specified. 
    +** nifi.cluster.request.replication.claim.timeout - Specifies how long a 
component can be 'locked' during a request replication
    +   before the lock expires and is automatically unlocked. See 
<<claim_management>> for more information.
    +
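    +Putting these together, a minimal _nifi.properties_ excerpt for a single node might look like the following (a sketch
    +only; hostnames and ports are placeholders, and any property not shown is assumed to keep its default value):
    +
    +[source,properties]
    +----
    +# Web properties (example host and port)
    +nifi.web.http.host=node1.example.com
    +nifi.web.http.port=8080
    +
    +# Cluster node properties
    +nifi.cluster.is.node=true
    +nifi.cluster.node.address=node1.example.com
    +nifi.cluster.node.protocol.port=9991
    +nifi.cluster.node.protocol.threads=10
    +
    +# ZooKeeper properties (list every instance in the ZooKeeper quorum)
    +nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
    +----
    +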
    +Now, it is possible to start up the cluster. It does not matter in which order the instances start up. Navigate to the URL for
    +one of the nodes, and the User Interface should look similar to the 
following:
    +
    +image:ncm.png["Clustered User Interface", width=940]
     
     *Troubleshooting*
     
    -If you encounter issues and your cluster does not work as described, 
investigate the nifi.app log and nifi.user log on both the NCM and the nodes. 
If needed, you can change the logging level to DEBUG by editing the 
conf/logback.xml file. Specifically, set the level="DEBUG" in the following 
line (instead of "INFO"):
    +If you encounter issues and your cluster does not work as described, 
investigate the nifi.app log and nifi.user log
    --- End diff --
    
    The log filenames are `nifi-app.log` and `nifi-user.log`. 

