Thanks Michal, I only need to keep a small amount of config data, so
ZooKeeper should be sufficient for me.
2017-05-26 15:12 GMT+08:00 Michal Klempa :
Hi Ben,
in my experience, I would recommend not using
DistributedMapCache at all.
First of all, it's not load balanced / HA; besides that, it is just a
HashMap implementation in Java, talking to clients over simple TCP
(i.e. it consumes RAM from NiFi's Java heap space).
I guess it's in NiFi as a
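To make Michal's point concrete, here is a minimal conceptual sketch (plain Python, NOT NiFi's actual implementation): a single process holds an ordinary in-memory dict and serves it over plain TCP. The data lives only in that one process's memory, so there is no replication or failover, which is essentially the limitation being described.

```python
# Conceptual sketch only: a single-process, in-memory map served over
# plain TCP. All data sits in this one process's memory (analogous to
# the NiFi server node's JVM heap), and is lost if the process dies.
import json
import socket
import socketserver
import threading

class MapCacheHandler(socketserver.StreamRequestHandler):
    """Handle one line-delimited JSON request: {"op": "put"|"get", ...}."""
    def handle(self):
        req = json.loads(self.rfile.readline())
        store = self.server.store  # plain dict: one copy, no HA
        if req["op"] == "put":
            store[req["key"]] = req["value"]
            resp = {"ok": True}
        else:  # "get"
            resp = {"ok": True, "value": store.get(req["key"])}
        self.wfile.write((json.dumps(resp) + "\n").encode())

def start_server(host="127.0.0.1", port=0):
    """Start the cache server on an ephemeral port; return the server."""
    server = socketserver.ThreadingTCPServer((host, port), MapCacheHandler)
    server.store = {}  # the entire "cache" is this dict
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def cache_call(addr, request):
    """One request/response round trip, as a cache client would do."""
    with socket.create_connection(addr) as sock:
        sock.sendall((json.dumps(request) + "\n").encode())
        return json.loads(sock.makefile().readline())

server = start_server()
addr = server.server_address
cache_call(addr, {"op": "put", "key": "config", "value": "v1"})
print(cache_call(addr, {"op": "get", "key": "config"})["value"])  # -> v1
server.shutdown()
```

If the one server process goes away, every client loses the data; that is the single point of failure the thread is discussing.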
Thanks Joe. What I actually want to achieve is this: one of my processors
needs to write some config data into the cluster so that other processors
can easily read that config data back.
I've considered using the default NiFi state map for this, but as far as I
know it can only
So the client would be on each of nodea, nodeb, and nodec. The server
would also be on nodea, nodeb, and nodec. Each client would be configured
to talk to any one of the three servers (nodea, nodeb, or nodec). It does
not offer HA. For more complete behavior it is a good idea to have a
client/service
Thanks Joe, so I need to set up the DistributedMapCacheServer on all nodes.
Do you mean all DistributedMapCacheClientService instances should reference
only one of the servers (the same one on the same node)?
What if the DistributedMapCacheServer goes down on that node (or the node
itself goes down), does it
Hello
You add the DistributedMapCacheServer controller service as well,
and then point at it from the client services. So with your three nodes,
all three will have the server service, but the client services will all
point to the server service on just one of the nodes.
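Spelled out as the controller-service properties (a sketch, assuming the default cache-server port 4557 and placeholder hostnames nodea/nodeb/nodec; the property names are the ones shown in the NiFi UI):

```
DistributedMapCacheServer (added on nodea, nodeb, and nodec):
    Port: 4557               # default cache-server port

DistributedMapCacheClientService (on every node):
    Server Hostname: nodea   # all clients point at the SAME node
    Server Port: 4557
```

Note that this makes nodea a single point of failure for the cache, which is what the rest of the thread is about.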
Thanks
Joe
On Thu, May 25,
Hi guys, I'm currently using NiFi as a cluster with 3 nodes. When
using DistributedMapCacheClientService there's a configuration property
called 'Server Hostname'. I've tried it with localhost on my local
standalone NiFi node and it did work. My question is: what should I set it
to inside the NiFi cluster?