Thanks Michael, I could try turning off the content repository archive
feature. One thing I'm curious about, though: while my NiFi flow was still
actively running, I watched the content repository folder and it kept
writing new files to the archive folder. When I checked them, most were
empty and only a very small portion contained old flowfiles' content (I'm
using the default archive options of 12 hours and 50%). The rate at which
these files are created changes slightly when I tune the run interval of my
data-extracting processor. Most importantly, my custom processor has no
more data to extract at the moment, so I don't know what is going on inside
NiFi that keeps increasing the inode usage.
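
In case it's useful, here is a minimal sketch of the commands that can be
used to watch this (the paths assume NiFi's default repository layout,
where each content_repository section directory has its own archive
sub-folder; adjust them to the actual install):

    # inode usage of the filesystem that holds the content repository
    df -i ./content_repository

    # count the files currently sitting in the archive sub-directories
    find ./content_repository/*/archive -type f | wc -l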

/Ben

2017-12-15 0:27 GMT+08:00 Michael Moser <moser...@gmail.com>:

> The maximum number of inodes is defined when you build a file system on a
> device.  You would have to back up your data, rebuild your file system and
> tell it to allocate more inodes than the default (an mkfs option, I
> think?), then restore your data.
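
For ext4, for example, the inode count is fixed at mkfs time; a minimal
sketch of the relevant mke2fs options (device name and numbers are
placeholders, and running mkfs wipes the device, so restore from backup
afterwards):

    # request an explicit number of inodes ...
    mkfs.ext4 -N 20000000 /dev/sdb1
    # ... or one inode per N bytes of capacity (a smaller N yields more inodes)
    mkfs.ext4 -i 4096 /dev/sdb1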
>
> You can turn off the NiFi content_repository archive, if you don't need
> that feature, by setting nifi.content.repository.archive.enabled=false in
> your nifi.properties.
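
For reference, the archive-related entries in nifi.properties look like the
following; the retention and usage values shown are the 12 hours / 50%
defaults mentioned elsewhere in this thread:

    nifi.content.repository.archive.max.retention.period=12 hours
    nifi.content.repository.archive.max.usage.percentage=50%
    nifi.content.repository.archive.enabled=false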
>
> -- Mike
>
>
> On Thu, Dec 14, 2017 at 2:22 AM, 尹文才 <batman...@gmail.com> wrote:
>
> > One strange thing one of our testers found: we're using the default
> > 12-hour archive and 50% disk space configuration, and he noticed that when
> > NiFi removed the archived files inside the content_repository, the inode
> > count didn't go down. When he then removed some of the archived files
> > himself using the rm command, the inode count did go down.
> >
> > /Ben
> >
> > 2017-12-14 9:49 GMT+08:00 尹文才 <batman...@gmail.com>:
> >
> > > Hi Michael, the "no space left on device" error occurred again. I checked
> > > the inodes at the time and found they were indeed exhausted. Why would the
> > > inodes fill up, and is there any way to work around this problem? Thanks.
> > >
> > > /Ben
> > >
> > > 2017-12-13 13:36 GMT+08:00 尹文才 <batman...@gmail.com>:
> > >
> > >> Hi Michael, I checked the available inodes in the system by running the
> > >> df -i command, and there are plenty of free inodes. I then removed all the
> > >> files in all the repository folders and restarted the system, and the
> > >> error did not appear again. I will continue to track the problem to see
> > >> what's causing it, but it doesn't seem related to the inode exhaustion you
> > >> mentioned. Thanks.
> > >>
> > >> /Ben
> > >>
> > >> 2017-12-12 23:45 GMT+08:00 Michael Moser <moser...@gmail.com>:
> > >>
> > >>> Greetings Ben,
> > >>>
> > >>> The "No space left on device" error can also be caused by running out
> > of
> > >>> inodes on your device.  You can check this with "df -i".
> > >>>
> > >>> -- Mike
> > >>>
> > >>>
> > >>> On Tue, Dec 12, 2017 at 1:36 AM, 尹文才 <batman...@gmail.com> wrote:
> > >>>
> > >>> > Sorry, I forgot to mention the environment where this problem occurred:
> > >>> > I'm using the latest NiFi 1.4.0 release, installed on CentOS 7.
> > >>> >
> > >>> > 2017-12-12 14:35 GMT+08:00 尹文才 <batman...@gmail.com>:
> > >>> >
> > >>> > > Hi guys, I'm running into a very weird problem. I wrote a processor
> > >>> > > specifically to extract some data, and starting from yesterday it kept
> > >>> > > showing errors like the one below in the log:
> > >>> > >
> > >>> > > 2017-12-12 14:01:04,661 INFO [pool-10-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
> > >>> > > 2017-12-12 14:01:04,676 ERROR [pool-10-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Unable to checkpoint FlowFile Repository due to java.io.FileNotFoundException: ../flowfile_repository/partition-5/96.journal (No space left on device)
> > >>> > > java.io.FileNotFoundException: ../flowfile_repository/partition-5/96.journal (No space left on device)
> > >>> > >         at java.io.FileOutputStream.open0(Native Method)
> > >>> > >         at java.io.FileOutputStream.open(FileOutputStream.java:270)
> > >>> > >         at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
> > >>> > >         at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
> > >>> > >         at org.wali.MinimalLockingWriteAheadLog$Partition.rollover(MinimalLockingWriteAheadLog.java:779)
> > >>> > >         at org.wali.MinimalLockingWriteAheadLog.checkpoint(MinimalLockingWriteAheadLog.java:528)
> > >>> > >         at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.checkpoint(WriteAheadFlowFileRepository.java:451)
> > >>> > >         at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository$1.run(WriteAheadFlowFileRepository.java:423)
> > >>> > >         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> > >>> > >         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> > >>> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> > >>> > >         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> > >>> > >         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > >>> > >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > >>> > >         at java.lang.Thread.run(Thread.java:745)
> > >>> > >
> > >>> > >
> > >>> > > I noticed the log mentioned "no space left on device", so I went to
> > >>> > > check the available disk space and found 33G left. Does anyone know
> > >>> > > what could possibly cause this and how to resolve it? Thanks.
> > >>> > >
> > >>> > > /Ben
> > >>> > >
> > >>> >
> > >>>
> > >>
> > >>
> > >
> >
>
