Heterogeneous node capabilities are generally workable in NiFi
clustering. But you do want to leverage back-pressure and load-balanced
connections so that faster nodes have an opportunity to take on work
from slower nodes.
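
For reference, here is a rough sketch of the knobs involved. The property
names come from the 1.x admin guide, but the values shown are only
illustrative, so double-check them against your version's nifi.properties
and the per-connection settings dialog:

  # nifi.properties (per node) - node-to-node load balancing transport
  nifi.cluster.load.balance.port=6342
  nifi.cluster.load.balance.connections.per.node=4
  nifi.cluster.load.balance.max.thread.count=8

  # Per-connection settings (configured on the connection in the UI):
  #   Load Balance Strategy: Round robin
  #   Back Pressure Object Threshold: 10000
  #   Back Pressure Data Size Threshold: 1 GB

With round-robin load balancing on the connection and back-pressure
thresholds sized with the slower nodes in mind, a node that falls behind
starts pushing back while the faster nodes continue to accept work.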

Thanks

On Wed, Mar 6, 2019 at 3:48 PM James Srinivasan <james.sriniva...@gmail.com>
wrote:

> Yes, we hit this with the new load balanced queues (which, to be fair, we
> also had with remote process groups previously). Two "old" nodes got
> saturated and their queues filled while three "new" nodes were fine.
>
> My "solution" was to move everything to new hardware which we had inbound
> anyway.
>
> On Wed, 6 Mar 2019, 20:40 Jon Logan, <jmlo...@buffalo.edu> wrote:
>
>> You may run into issues with different processing power, as the slower
>> machines may be overwhelmed before the faster machines are saturated.
>>
>> On Wed, Mar 6, 2019 at 3:34 PM Mark Payne <marka...@hotmail.com> wrote:
>>
>>> Chad,
>>>
>>> This should not be a problem, provided that all nodes have enough storage
>>> available to handle the influx of data.
>>>
>>> Thanks
>>> -Mark
>>>
>>>
>>> > On Mar 6, 2019, at 1:44 PM, Chad Woodhead <chadwoodh...@gmail.com>
>>> wrote:
>>> >
>>> > Are there any negative effects if the filesystem mounts (dedicated
>>> mounts for each repo) used by the different NiFi repositories differ in
>>> size across NiFi nodes within the same cluster? For instance, if some nodes
>>> have a content_repo mount of 130 GB and other nodes have a content_repo
>>> mount of 125 GB, could that cause any problems, or cause one node to be
>>> used more since it has more space? What if the difference were larger, say
>>> 100 GB?
>>> >
>>> > I'm trying to repurpose old nodes and add them as NiFi nodes, but their
>>> mount sizes are different from my current cluster's nodes, and I've noticed
>>> I can't set a maximum size limit for a repo to use on a particular mount.
>>> >
>>> > -Chad
>>>
>>>
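
On Chad's repo-sizing question specifically: as far as I know there is no
hard max-size property for the content or flowfile repositories themselves,
but nifi.properties is per node, so each node can point its repositories at
its own mounts, and the content archive limit is expressed as a percentage
of the mount rather than an absolute size. A rough sketch (the directory
path is hypothetical and the values are only examples; verify the property
names against your version):

  # nifi.properties (per node)
  nifi.content.repository.directory.default=/data/content_repo
  nifi.content.repository.archive.enabled=true
  nifi.content.repository.archive.max.retention.period=12 hours
  nifi.content.repository.archive.max.usage.percentage=50%

  # The provenance repository, by contrast, does take explicit caps:
  nifi.provenance.repository.max.storage.time=24 hours
  nifi.provenance.repository.max.storage.size=1 GB

So the archived content on a 125 GB mount and a 130 GB mount is bounded
relative to each mount's own size; the main concern is having enough
headroom on the smallest mount for the data you expect to queue.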
