Hi:


A few questions about NiFi clustering:
1. If we have multiple worker nodes in the cluster, do they partition the work when the source supports partitioning (e.g. HDFS), or do all the nodes work on the same data?
2. If the nodes do partition the work, how do they coordinate work distribution, recovery, etc.? From the documentation it appears that the workers are not aware of each other.
3. If I need to process multiple files, how do we design the workflow so that each node works on one file at a time?
4. If I have multiple arguments and need to pass one parameter to each worker, how can I do that?
5. Is there any way to control how many workers are involved in processing the flow?
6. Does specifying the number of threads in a processor distribute work across multiple workers? Does it split the task across the threads, or is that the responsibility of the application?
I tried to find answers in the documentation and on the users list but could not get a clear picture.
Thanks
Mans



   
