James

Yes, the problem is likely to reappear.  I suspect you're
being impacted by https://issues.apache.org/jira/browse/NIFI-2395.

It shows as though it was not addressed in any 0.x release; however,
looking at the JIRA notes, it does appear that the relevant aspects of
this were merged into 0.x.  Joe Skora/Mike Moser: can you please review
this JIRA and get it tagged with the proper fix version, since it is
now on master and in 0.x?
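
If it helps to confirm which build you're on: with a standard binary
install the version is embedded in the jar names under lib/, so
something like

ls lib/ | grep nifi-runtime

from the NiFi home directory will show it (once it's running again, the
About dialog in the UI shows the same).  Exact jar names can vary by
distribution, so treat that as a sketch.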

Thanks
Joe

On Mon, Mar 13, 2017 at 2:39 PM, James McMahon <jsmcmah...@gmail.com> wrote:
> Sent you that info Joe.
>
> Just to be certain I understood clearly: I should go ahead and rm -rf
> ./provenance_repository? Won't the problem just reappear as soon as it
> recreates provenance_repository at startup?
>
> On Mon, Mar 13, 2017 at 5:04 PM, Joe Witt <joe.w...@gmail.com> wrote:
>>
>> Jim
>>
>> Blow away the prov repo and restart to get back up and running.  To
>> troubleshoot why it isn't cleaning up, though: it seems as though NiFi
>> is failing to roll over for some reason.
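>>
>> A minimal sketch of that recovery, using the provenance path from your
>> nifi.properties ($NIFI_HOME below is just a stand-in for wherever NiFi
>> is installed):
>>
>> # stop NiFi so nothing is writing to the repo
>> $NIFI_HOME/bin/nifi.sh stop
>>
>> # dev data only - this discards all provenance history
>> rm -rf /mnt/provenance_repo/provenance_repository
>>
>> # NiFi recreates the provenance repo structure on startup
>> $NIFI_HOME/bin/nifi.sh start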
>>
>> What version precisely are you running?
>>
>> Thanks
>> Joe
>>
>> On Mar 13, 2017 2:01 PM, "James McMahon" <jsmcmah...@gmail.com> wrote:
>>>
>>> Here is all the info, Joe. I had to manually recreate this here and
>>> tried to avoid typos; anything that looks bizarre or oddly misspelled
>>> is on me.
>>>
>>> Certainly Joe. I need to do whatever is humanly possible to get this up
>>> and
>>> rolling again. Here is what I can provide:
>>>
>>> (Background)
>>> /mnt/provenance_repo 50G size, 47G used, 17M available, 100% capacity
>>> Under it are:
>>> database_repository
>>> provenance_repository
>>> du -h output:
>>> 8.5M ./database_repository
>>> 16K ./lost+found
>>> 4.0K ./provenance_repository/index-1489...124
>>> 92K ./provenance_repository/toc
>>> 47G ./provenance_repository/journals
>>> 47G ./provenance_repository
>>> 47G .
>>>
>>> In my nifi.properties:
>>> #H2 Settings
>>> nifi.database.directory=/mnt/provenance_repo/database_repository
>>> nifi.h2.url.append=;LOCK_TIMEOUT=25000;WRITE_DELAY=0;AUTO_SERVER=FALSE
>>>
>>> nifi.provenance.repository.directory.default=/mnt/provenance_repo/provenance_repository
>>> nifi.provenance.repository.max.storage.time=24 hours
>>> nifi.provenance.repository.max.storage.size=1 GB
>>> nifi.provenance.repository.rollover.time=30 secs
>>> nifi.provenance.repository.rollover.size=100 MB
>>> nifi.provenance.repository.query.threads=2
>>> nifi.provenance.repository.index.threads=1
>>> nifi.provenance.repository.compress.on.rollover=true
>>> nifi.provenance.repository.always.sync=false
>>> nifi.provenance.repository.journal.count=16
>>>
>>> nifi.provenance.repository.indexed.fields=EventType,FlowFileUUID,Filename,ProcessorID,Relationship
>>> nifi.provenance.repository.indexed.attributes=
>>> nifi.provenance.repository.index.shard.size=500 MB
>>> nifi.provenance.repository.max.attributes.length=65536
>>> nifi.provenance.repository.buffer.size=100000
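>>>
>>> (Side note: with max.storage.size=1 GB these settings should keep the
>>> whole repo around 1 GB, nowhere near the 47G sitting in journals.
>>> After restart, a simple way to keep an eye on rollover would be
>>> something like
>>>
>>> watch -n 30 "du -sh /mnt/provenance_repo/provenance_repository/journals"
>>>
>>> using the journals directory under the path configured above.)
>>>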
>>> (Errors)
>>> In nifi-bootstrap.log: java.io.IOException: No space left on device,
>>> referencing /mnt/provenance_repo/database_repository/nifi-users.h2.db.
>>> A similar error is reported in nifi-app.log, where startup fails with
>>> an org.springframework.beans.factory.BeanCreationException.
>>>
>>> I think both are just symptoms of the real problem: too much data
>>> piling up in journals under provenance_repository.
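>>>
>>> (To double-check the disk itself, assuming only the mount point
>>> already listed above,
>>>
>>> df -h /mnt/provenance_repo
>>>
>>> should show the filesystem at 100%; database_repository sits on the
>>> same mount, which would explain why the H2 file can't be written
>>> either.)
>>>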
>>> Jim
>>>
>>> On Mon, Mar 13, 2017 at 4:10 PM, Joe Witt <joe.w...@gmail.com> wrote:
>>>>
>>>> Jim,
>>>>
>>>> Processors create provenance events, which get written to the
>>>> provenance repo, but that is the end of their relationship.  The
>>>> provenance repository is self-managing from there.  Can you share
>>>> your nifi.properties provenance configuration?
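>>>>
>>>> (Assuming a standard layout with the config under conf/, something
>>>> like
>>>>
>>>> grep '^nifi.provenance' conf/nifi.properties
>>>>
>>>> run from the NiFi home directory will pull out just those lines.)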
>>>>
>>>> Thanks
>>>> Joe
>>>>
>>>> On Mon, Mar 13, 2017 at 1:05 PM, James McMahon <jsmcmah...@gmail.com>
>>>> wrote:
>>>> > Good evening. I have a number of process groups in my NiFi instance
>>>> > that run
>>>> > concurrently. Evidently I have filled up my provenance repo. How do I
>>>> > recover from this? NiFi will not start back up after shutdown.
>>>> >
>>>> > I am in a dev environment right now so I can afford to lose what had
>>>> > been in
>>>> > my flows. Does that give me other options to recover?
>>>> >
>>>> > However, I must determine which process group and processor are not
>>>> > freeing resources from the provenance repo. I can't allow this
>>>> > workflow to be promoted to production until I determine how to
>>>> > prevent this from recurring. How can I do that?
>>>> >
>>>> > My provenance, content, and flow repos are all on separate disk
>>>> > devices,
>>>> > each of which is 50GB in capacity.
>>>> >
>>>> > Thank you for your help. -Jim
>>>
>>>
>
