Read that again and hopefully it was obvious I was joking.  But I am
looking forward to hearing what you learn.

Thanks

On Tue, Sep 13, 2022 at 10:10 AM Joe Witt <joe.w...@gmail.com> wrote:
>
> Lars
>
> I need you to drive back to work because now I am very invested in the outcome
> :)
>
> But yeah this was an annoying problem we saw hit some folks.  Changing
> that value after fixing the behavior was the answer.  I owe the
> community a blog on this....
>
> Thanks
>
> On Tue, Sep 13, 2022 at 9:57 AM Lars Winderling
> <lars.winderl...@posteo.de> wrote:
> >
> > Sorry, I misread the Jira. We're still on the old default value. Thank you
> > for being persistent about it. I will try it tomorrow with the lower value
> > and get back to you. Not at work atm, so I can't paste the config values in
> > detail.
> >
> > On 13 September 2022 16:45:30 CEST, Joe Witt <joe.w...@gmail.com> wrote:
> >>
> >> Lars
> >>
> >> You should not have to update to 1.17.  While I'm always fond of
> >> people being on the latest, the issue I mentioned is fixed in 1.16.3.
> >>
> >> HOWEVER, please do confirm your values.  The one I'd really focus you on is
> >> nifi.content.claim.max.appendable.size=50 KB
> >>
> >> Our default before was around 1 MB, and what we'd see is that we'd hang
> >> on to large content way longer than we intended because some queue had
> >> one tiny object in it.  So that value became really important.
> >>
> >> If you're on 1 MB, change it to 50 KB and see what happens.
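> >>
> >> As a rough sketch of that change in conf/nifi.properties (assuming the
> >> older ~1 MB default is still in place; nifi.properties is only read at
> >> startup, so a restart is needed for it to take effect):
> >>
> >> # older default: claims stayed appendable longer, so one small FlowFile
> >> # could keep a large shared claim from being cleaned up
> >> # nifi.content.claim.max.appendable.size=1 MB
> >> # current default: claims close sooner and can be cleaned up/archived
> >> nifi.content.claim.max.appendable.size=50 KB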
> >>
> >> Thanks
> >>
> >> On Tue, Sep 13, 2022 at 9:40 AM Lars Winderling
> >> <lars.winderl...@posteo.de> wrote:
> >>>
> >>>
> >>>  I guess the issue you linked is related. I have seen similar messages
> >>> in the log occasionally, but didn't directly connect it. Our config is
> >>> pretty similar to the defaults, so none of it should directly cause the
> >>> issue. Will give 1.17.0 a try and come back if the issue persists. Your
> >>> help is really appreciated, thanks!
> >>>
> >>>  On 13 September 2022 16:33:53 CEST, Joe Witt <joe.w...@gmail.com> wrote:
> >>>>
> >>>>
> >>>>  Lars
> >>>>
> >>>>  The issue that came to mind is
> >>>>  https://issues.apache.org/jira/browse/NIFI-10023 but that is fixed in
> >>>>  1.16.2 and 1.17.0 so that is why I asked.
> >>>>
> >>>>  What is in your nifi.properties for
> >>>>  # Content Repository
> >>>>  nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
> >>>>  nifi.content.claim.max.appendable.size=50 KB
> >>>>  nifi.content.repository.directory.default=./content_repository
> >>>>  nifi.content.repository.archive.max.retention.period=7 days
> >>>>  nifi.content.repository.archive.max.usage.percentage=50%
> >>>>  nifi.content.repository.archive.enabled=true
> >>>>  nifi.content.repository.always.sync=false
> >>>>
> >>>>  Thanks
> >>>>
> >>>>  On Tue, Sep 13, 2022 at 7:04 AM Lars Winderling
> >>>>  <lars.winderl...@posteo.de> wrote:
> >>>>>
> >>>>>
> >>>>>
> >>>>>   I'm using 1.16.3 from upstream (no custom build) on Java 11 (Temurin),
> >>>>> Debian 10, virtualized, no Docker setup.
> >>>>>
> >>>>>   On 13 September 2022 13:37:15 CEST, Joe Witt <joe.w...@gmail.com> 
> >>>>> wrote:
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>>   Lars
> >>>>>>
> >>>>>>   What version are you using?
> >>>>>>
> >>>>>>   Thanks
> >>>>>>
> >>>>>>   On Tue, Sep 13, 2022 at 3:11 AM Lars Winderling 
> >>>>>> <lars.winderl...@posteo.de> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>   Dear community,
> >>>>>>>
> >>>>>>>   sometimes our content repository grows out of bounds. Since it has
> >>>>>>> been separated on disk from the rest of NiFi, we can still use the
> >>>>>>> NiFi UI and empty the respective queues. However, the disk remains
> >>>>>>> full. Sometimes it gets cleaned up after a few minutes, but most of
> >>>>>>> the time we need to restart NiFi manually for the cleanup to happen.
> >>>>>>>   So, is there any way of triggering content eviction manually
> >>>>>>> without restarting NiFi?
> >>>>>>>   Btw, the respective files on disk are not archived in the content
> >>>>>>> repository (thus not below */archive/*).
> >>>>>>>
> >>>>>>>   Thanks in advance for your support!
> >>>>>>>   Best,
> >>>>>>>   Lars
