Hi,

I am pushing data to S3 using PutS3Object. I have set up a NiFi 1.0 zero-master cluster.

The EC2 instance has only 8 GB of disk. The content repository writes about 4.6 GB of data and then throws a JVM out-of-memory error.

I changed nifi.properties to set nifi.content.repository.archive.enabled to false, but it is still writing to disk.

This is the exact error.
On Tue, Sep 20, 2016 at 4:30 PM, Selvam Raman wrote:
> HI,
>
> I am pushing data to s3 using puts3object. I have setup nifi 1.0 zero
> master cluster.
>
> Ec2 instance having only 8GB of hard disk. Content repository writing till
> 4.6 gb of data then it throws jvm out of memory error.
Hello
Please only post to one list. I have moved 'dev@nifi' to bcc.
In the docs for this processor [1] you'll find reference to "Multipart
Part Size". Set that to a smaller value appropriate for your JVM
memory settings. For instance, if you have a default JVM heap size of
512MB you'll want something well below that, e.g. around 100 MB per part.
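The sizing logic above can be sketched as a quick back-of-the-envelope calculation. This is not from the thread; `choose_part_size` and the 20%-of-heap fraction are illustrative assumptions. The S3 limits it checks are real: parts must be at least 5 MB, and an upload may have at most 10,000 parts.

```python
# Rough multipart sizing sketch (illustrative, not NiFi code): pick a part
# size that fits comfortably in the JVM heap and respects S3's limits.

MB = 1024 * 1024
GB = 1024 * MB

S3_MIN_PART_SIZE = 5 * MB   # S3 minimum part size (except the last part)
S3_MAX_PARTS = 10_000       # S3 maximum number of parts per upload

def choose_part_size(heap_bytes, object_bytes, heap_fraction=0.2):
    """Pick a part size around heap_fraction of the heap, clamped to S3 limits."""
    part = int(heap_bytes * heap_fraction)
    # Never go below S3's 5 MB minimum part size.
    part = max(part, S3_MIN_PART_SIZE)
    # Ensure the whole object fits within 10,000 parts (ceil division).
    min_part_for_object = -(-object_bytes // S3_MAX_PARTS)
    return max(part, min_part_for_object)

part = choose_part_size(heap_bytes=512 * MB, object_bytes=5 * GB)
print(part // MB, "MB per part")  # roughly 100 MB parts for a 512 MB heap
```

With a 512 MB heap this lands near 100 MB per part, which leaves headroom for the rest of the flow while keeping a 5 GB object well under the 10,000-part cap.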
In my case it is running out of disk space.

I set nifi.content.repository.archive.enabled=false (and restarted the NiFi cluster after changing it).

But I can still see the processor writing to disk.
On Tue, Sep 20, 2016 at 4:34 PM, Joe Witt wrote:
> Hello
>
> Please only post to one list. I have moved 'dev@nifi' to bcc.
Hi Selvam,
As mentioned, please keep messages to the one list. Moving dev to bcc
again.
Archiving is only applicable to content which has exited the flow and is no longer referenced by any FlowFiles currently in your processing graph, similar to garbage collection in Java. For this particular instance, disabling archiving will not stop the repository from growing while FlowFiles referencing that content are still active in the flow.
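For reference, the archive-related entries in nifi.properties look like the fragment below (values here are illustrative defaults, not taken from the thread). Note that the retention and usage settings are only consulted when archiving is enabled; with archiving off, content is deleted as soon as no FlowFile references it, but content still referenced by queued FlowFiles remains on disk regardless.

```properties
# Content repository archive settings in nifi.properties (illustrative values)
nifi.content.repository.archive.enabled=false
# The two settings below only apply when archiving is enabled:
nifi.content.repository.archive.max.retention.period=12 hours
nifi.content.repository.archive.max.usage.percentage=50%
```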