Joe,

I think that's currently a limitation of NiFi's backpressure handling.
Single loopback connections are handled properly (they are excluded from
backpressure), but more complex loops (such as this retry scenario) are not.

That said, my gut feeling is that your configuration leaves room for
improvement. If both of your connections can fill up, then:
- you may be allowing too many retries,
- the penalty applied on failures may be too long, or
- the connections simply may not have enough capacity.

What I would like to highlight here is that, based on your average data
throughput and the maximum time you want to keep retrying the same
flowfile, you can estimate how many flowfiles can accumulate in the
loopback connections in such a scenario. Add some buffer on top of that
estimate, size the backpressure thresholds accordingly, and you should be
good. You can also set a FlowFile Expiration on these connections so that
flowfiles are dropped after a while.
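
As a rough illustration of what I mean (the numbers below are made up,
plug in your own throughput and retry window), the back-of-the-envelope
sizing could look like this:

    # Rough sizing sketch for the loop connections (illustrative numbers only)
    avg_throughput_per_sec = 50      # flowfiles/sec arriving at InvokeHTTP (assumed)
    max_retry_window_sec = 15 * 60   # keep retrying each flowfile for at most 15 min (assumed)
    safety_factor = 1.5              # headroom so backpressure doesn't kick in too early

    # worst case: everything produced during the retry window is stuck in the loop
    worst_case_queued = avg_throughput_per_sec * max_retry_window_sec
    threshold = int(worst_case_queued * safety_factor)

    print(f"Back Pressure Object Threshold on the loop connections >= {threshold}")
    # -> 67500 for these numbers

Anything above that worst case should keep the loop from wedging while
InvokeHTTP is down for your chosen retry window.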

The long-term solution would be to introduce more sophisticated logic for
handling loops with respect to backpressure (such as the one in MiNiFi
C++), but I guess that will take a while. I'm pretty sure that with a good
flow design you can avoid this issue even without it.

Regards,
Arpad

On Thu, Jun 17, 2021 at 6:59 PM Joe Obernberger <
joseph.obernber...@gmail.com> wrote:

> Hi All - I'm wondering if there is an approach to using the
> RetryFlowFile that doesn't get 'stuck' if there are lots of failures.
> I'm using InvokeHTTP and if it fails, the failure goes to a
> RetryFlowFile process.  The retry from the RetryFlowFile goes back to
> InvokeHTTP.  If the InvokeHTTP process fails for a long time, the
> failure queue fills, the retry queue fills, and when InvokeHTTP is
> brought back up, it won't start since both queues are full.
>
> Any ideas?
> Thank you!
>
> -Joe Obernberger
>
>
