Hi,
I'd suggest a couple of things. Have you configured backpressure controls on
your connections? NiFi 1.0.0 adds default backpressure thresholds (10,000
events / 1 GB per connection, IIRC). This can help avoid overwhelming
components in a flow.
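If it helps, backpressure thresholds can also be adjusted per connection through the REST API (PUT /nifi-api/connections/{id}). Below is a minimal sketch of the request body, assuming the NiFi 1.x ConnectionDTO field names; the connection id, revision version, and threshold values are placeholders for illustration, not values from this thread.

```python
# Sketch: building the JSON body to update only the backpressure
# thresholds on a NiFi connection (PUT /nifi-api/connections/{id}).
# The id, revision version, and thresholds below are placeholders.
import json


def backpressure_payload(connection_id, version, object_threshold, size_threshold):
    """Return a connection-update body that changes only the
    backpressure object-count and data-size thresholds."""
    return {
        "revision": {"version": version},
        "component": {
            "id": connection_id,
            "backPressureObjectThreshold": object_threshold,
            "backPressureDataSizeThreshold": size_threshold,
        },
    }


payload = backpressure_payload("abcd-1234", 3, 10000, "1 GB")
print(json.dumps(payload, indent=2))
```

You would send this body with an HTTP PUT (e.g. via curl or requests), using the connection's current revision version from a prior GET.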
Next, a 2-core CPU is really inadequate for a high-throughput system; see
if you can get something better.
Thanks James. I am looking into the permission issue and will update the
thread. I will also make the changes per your recommendation.
On Fri, Oct 28, 2016 at 10:23 AM, James Wing wrote:
>From the screenshot and the error message, I interpret the sequence of
events to be something like this:
1.) ListS3 succeeds and generates flowfiles with attributes referencing S3
objects, but no content (0 bytes)
2.) FetchS3Object fails to pull the S3 object content with an Access Denied
error.
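A common cause of exactly this pattern is that the credentials allow s3:ListBucket (so ListS3 works) but not s3:GetObject on the objects themselves (so FetchS3Object gets Access Denied). As a hedged illustration, an IAM policy granting both would look roughly like this; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-source-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::my-source-bucket/*"
    }
  ]
}
```

Note that ListBucket applies to the bucket ARN while GetObject applies to the object ARNs (the `/*` suffix); missing the second statement produces listings that succeed and fetches that fail.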
Thanks Bryan, Joe, Adam and Pierre. I got past this issue by switching to
0.7.1. Now it is able to list the files from the buckets and create those
files in the other bucket. But the write is not happening and I am getting a
permission issue (attached below for reference). Could this be the
Quick remark: the fix has also been merged into master and will be in
release 1.1.0.
Pierre
2016-10-28 15:22 GMT+02:00 Gop Krr :
Thanks Adam. I will try 0.7.1 and update the community on the outcome. If
it works then I can create a patch for 1.x
Thanks
Rai
On Thu, Oct 27, 2016 at 7:41 PM, Adam Lamar wrote:
> Hey All,
>
> I believe OP is running into a bug fixed here:
> https://issues.apache.org/jira/browse/NIFI-2631
>
> B
Hello Witt,
Before anything else, thanks for your help.
Fortunately I only took down the NiFi cluster; otherwise I would already
have been on vacation :)
After I posted this problem I kept torturing the staging NiFi and
discovered that when the CPU load gets very high, nodes lose their
connection and everything starts going
Alessio
You potentially have two clusters here: the NiFi cluster and the
Hadoop cluster. Which one went down?
If NiFi went down, I'd suspect memory exhaustion issues, because other
resource exhaustion issues (full file system, exhausted file handles,
pegged CPU, etc.) tend not to cause it to
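If memory exhaustion is the suspect, one quick thing to check is the JVM heap configured in conf/bootstrap.conf; the defaults are quite small for heavy flows. The property names below are standard bootstrap.conf entries, but the 2g values are just example sizes, not a recommendation for this specific cluster:

```
# conf/bootstrap.conf -- JVM heap settings (example sizes)
java.arg.2=-Xms2g
java.arg.3=-Xmx2g
```

OutOfMemoryError stack traces in nifi-app.log would also confirm this before tuning anything.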
Hello all,
Yesterday, by mistake, I basically executed "ls -R /" using the
ListHDFS processor and the whole cluster went down (not just one node).
Something like this also happened when I was playing with some DO WHILE
/ WHILE DO patterns. I only have the NiFi logs, and they show the
heartbeat