Yup, but because of the unfortunate way the source (outside NiFi)
works, it doesn't buffer for long when the consumer stops pulling or
the connection drops. It really behaves far more like a 5 Mbps UDP
stream :-(

On Wed, 6 Mar 2019 at 21:44, Bryan Bende <bbe...@gmail.com> wrote:
>
> James, just curious, what was your source processor in this case? ListenTCP?
>
> On Wed, Mar 6, 2019 at 4:26 PM Jon Logan <jmlo...@buffalo.edu> wrote:
> >
> > What would really resolve some of these issues is backpressure on 
> > CPU -- i.e. let NiFi throttle itself down so it doesn't choke the 
> > machine to death when CPU-constrained. Easier said than done, 
> > unfortunately.
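> >
> > Purely as a sketch of the idea (made-up names, nothing NiFi offers
> > today): an ingest loop could poll the load average and simply stop
> > pulling new work while the box is saturated, e.g.
> >
> > import os
> > import time
> >
> > MAX_LOAD_PER_CORE = 0.9  # arbitrary saturation threshold
> >
> > def cpu_saturated():
> >     # 1-minute load average, normalised by core count
> >     return os.getloadavg()[0] / os.cpu_count() > MAX_LOAD_PER_CORE
> >
> > def ingest_forever(read_batch, process_batch):
> >     # read_batch/process_batch are hypothetical callables standing
> >     # in for a source and the downstream flow
> >     while True:
> >         if cpu_saturated():
> >             time.sleep(1.0)  # shed load: take no new work
> >             continue
> >         process_batch(read_batch())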
> >
> > On Wed, Mar 6, 2019 at 4:23 PM James Srinivasan 
> > <james.sriniva...@gmail.com> wrote:
> >>
> >> In our case, backpressure applied all the way back to the TCP network
> >> source, which meant we lost data. AIUI, the current load balancing is
> >> round robin (the other two options, partition by attribute and single
> >> node, probably aren't relevant here). Would actual load balancing
> >> (e.g. send to the node with the lowest OS load, or the fewest active
> >> threads) be a reasonable request?
> >>
> >> On Wed, 6 Mar 2019 at 20:51, Joe Witt <joe.w...@gmail.com> wrote:
> >> >
> >> > This (heterogeneous node capabilities) is generally workable in NiFi 
> >> > clustering.  But you do want to leverage back-pressure and load balanced 
> >> > connections so that faster nodes have an opportunity to take on the 
> >> > workload of slower nodes.
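> >> >
> >> > For reference, the cluster-side settings for load balanced
> >> > connections live in nifi.properties; the values below are the stock
> >> > defaults as best I recall, so check them against the admin guide:
> >> >
> >> > nifi.cluster.load.balance.host=
> >> > nifi.cluster.load.balance.port=6342
> >> > nifi.cluster.load.balance.connections.per.node=4
> >> > nifi.cluster.load.balance.max.thread.count=8
> >> > nifi.cluster.load.balance.comms.timeout=30 sec
> >> >
> >> > The per-connection knobs (Load Balance Strategy, plus the back
> >> > pressure thresholds, which default to 10,000 flowfiles and 1 GB)
> >> > are set on the connection itself in the UI.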
> >> >
> >> > Thanks
> >> >
> >> > On Wed, Mar 6, 2019 at 3:48 PM James Srinivasan 
> >> > <james.sriniva...@gmail.com> wrote:
> >> >>
> >> >> Yes, we hit this with the new load balanced queues (to be fair, we 
> >> >> also had the same problem with remote process groups previously). Two 
> >> >> "old" nodes got saturated and their queues filled, while three "new" 
> >> >> nodes were fine.
> >> >>
> >> >> My "solution" was to move everything to new hardware which we had 
> >> >> inbound anyway.
> >> >>
> >> >> On Wed, 6 Mar 2019, 20:40 Jon Logan, <jmlo...@buffalo.edu> wrote:
> >> >>>
> >> >>> You may run into issues with differing processing power: the slower 
> >> >>> machines may be overwhelmed before the faster ones are saturated.
> >> >>>
> >> >>> On Wed, Mar 6, 2019 at 3:34 PM Mark Payne <marka...@hotmail.com> wrote:
> >> >>>>
> >> >>>> Chad,
> >> >>>>
> >> >>>> This should not be a problem, provided that all nodes have enough 
> >> >>>> storage available to handle the influx of data.
> >> >>>>
> >> >>>> Thanks
> >> >>>> -Mark
> >> >>>>
> >> >>>>
> >> >>>> > On Mar 6, 2019, at 1:44 PM, Chad Woodhead <chadwoodh...@gmail.com> 
> >> >>>> > wrote:
> >> >>>> >
> >> >>>> > Are there any negative effects if the filesystem mounts used by 
> >> >>>> > the NiFi repositories (dedicated mounts for each repo) differ in 
> >> >>>> > size across nodes within the same cluster? For instance, if some 
> >> >>>> > nodes have a 130 GB content_repo mount and other nodes have a 
> >> >>>> > 125 GB one, could that cause any problems, or cause one node to 
> >> >>>> > be used more since it has more space? What if the difference were 
> >> >>>> > larger, say 100 GB?
> >> >>>> >
> >> >>>> > I'm trying to repurpose old nodes and add them as NiFi nodes, but 
> >> >>>> > their mount sizes differ from my current cluster’s nodes, and I’ve 
> >> >>>> > noticed I can’t set a maximum size limit for how much of a 
> >> >>>> > particular mount a repo may use.
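> >> >>>> >
> >> >>>> > The closest I can find in nifi.properties is the per-repo
> >> >>>> > directory plus the archive settings (stock values below), and
> >> >>>> > the percentage only bounds the archive, not the live content,
> >> >>>> > so it isn't a hard per-mount cap:
> >> >>>> >
> >> >>>> > nifi.content.repository.directory.default=./content_repository
> >> >>>> > nifi.content.repository.archive.max.retention.period=12 hours
> >> >>>> > nifi.content.repository.archive.max.usage.percentage=50%
> >> >>>> > nifi.content.repository.archive.enabled=true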
> >> >>>> >
> >> >>>> > -Chad
> >> >>>>
