Aaron,
Excellent! Glad that you're seeing better results. Sorry about that. Let us
know if you run into any other strangeness!
Thanks
-Mark
> On Aug 3, 2016, at 6:18 PM, Aaron Longfield wrote:
I backported the patch from the master branch and it applies without
changing much at all. Workflow processing works fine by my eye, but I do
see quite a few provenance warnings logged. I haven't checked how that
repository is working yet, but I just pushed a few million flowfiles
Aaron,
Ok so from a production point of view I'd recommend a small patched
version of the 0.7 release you were working with. It might be the
case that grafting the master line patch for that JIRA into an 0.x
patch is pretty straightforward. You could take a look at that as a
short term option.
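The grafting Joe describes usually boils down to a `git cherry-pick` of the master-branch commit onto the 0.x line. A scratch-repo sketch of the mechanics (the file name, branch point, and commit message here are stand-ins, not real NiFi history):

```shell
#!/bin/sh
# Demonstrate grafting a fix commit from master onto a 0.x line,
# using a throwaway repository so the steps can be run anywhere.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.com
git config user.name dev

echo base > Processor.java
git add Processor.java
git commit -qm "common history shared by master and 0.x"
git branch 0.x                      # 0.x forks off here

echo fix >> Processor.java          # the fix lands on master first
git commit -aqm "NIFI-2395: avoid provenance repository hang (illustrative)"
fix_sha=$(git rev-parse HEAD)

git checkout -q 0.x
git cherry-pick "$fix_sha"          # graft the master commit onto 0.x
grep -q fix Processor.java && echo "fix present on 0.x"
```

If the cherry-pick conflicts, that is the signal the backport is not straightforward and needs a hand-written 0.x patch instead.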
Joe,
Sure, I can give that a go. Are there any serious bugs I might run across in
that branch that should make me worried about running it on a production
flow?
-Aaron
On Mon, Aug 1, 2016 at 4:01 PM, Joe Witt wrote:
Aaron,
It doesn't look like the 0.x version of that patch has been created
yet. Any chance you could build master (slated for the upcoming 1.x
release) and try that?
Thanks
Joe
On Mon, Aug 1, 2016 at 3:30 PM, Aaron Longfield wrote:
Great, glad there's already a fixed bug for it! Is there anything I can try to
work around it for now, or at least just get longer processing times
between restarts?
-Aaron
On Mon, Aug 1, 2016 at 11:54 AM, Mark Payne wrote:
Aaron,
Thanks for getting that to us quickly! It is extremely useful.
Joe,
I do indeed believe this is the same thing. I was in the middle of typing a
response, but you beat me to it!
Thanks
-Mark
> On Aug 1, 2016, at 11:49 AM, Joe Witt wrote:
Aaron, Mark,
Looking at the thread dump provided, it looks to me like this is the
same as what was reported and addressed in
https://issues.apache.org/jira/browse/NIFI-2395
The fix for this has not yet been released, but it is slated to land in
both the 0.x and 1.0 release lines.
Mark, do you agree it
Aaron,
Any time that you find NiFi has stopped performing its work, the best thing to do
is to perform a thread-dump and send it to the mailing list. This allows us to
determine exactly what is happening, so we know what action is being
performed that prevents any other progress.
To do this, you can go to
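One way to capture such a thread dump is with the JDK's jstack against the NiFi JVM (a sketch; the process pattern and output file name here are assumptions for illustration, and NiFi's bundled bin/nifi.sh also offers a dump command):

```shell
#!/bin/sh
# Capture a thread dump from a running NiFi JVM using the JDK's jstack.
# The bracketed regex avoids pgrep matching this script's own command line.
pid=$(pgrep -f 'org[.]apache[.]nifi' | head -n 1)
if [ -n "$pid" ]; then
  jstack "$pid" > nifi-thread-dump.txt
  echo "wrote nifi-thread-dump.txt for pid $pid"
else
  echo "no NiFi process found"
fi
```

The resulting text file is what you would attach to a mailing-list post so others can see which threads are blocked.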
I've been trying different things to fix my NiFi freeze problems, and the
most frequent reason that my cluster gets stuck and stops processing seems
to involve network-related processors. My data enters the
environment from Kafka and leaves via a site-to-site output port. After
Hi Mark,
I've been using the G1 garbage collector. I brought the nodes down to an 8 GB
heap and let it run overnight, but processing still got stuck, requiring
NiFi to be restarted on all nodes. It took longer to happen, but they went
down after a few hours. Are there any other things I can
Aaron,
My guess would be that you are hitting a Full Garbage Collection. With such a
huge Java heap, that will cause a "stop the world" pause for quite a long time.
Which garbage collector are you using? Have you tried reducing the heap from 48
GB to say 4 or 8 GB?
Thanks
-Mark
> On Jul 14,
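Mark's suggestion amounts to lowering the heap size that NiFi's bootstrap passes to the JVM. A sketch of the relevant conf/bootstrap.conf lines (the java.arg indices vary between versions, so match them against your own file):

```properties
# conf/bootstrap.conf (fragment) -- drop the heap from 48 GB to 8 GB
java.arg.2=-Xms8g
java.arg.3=-Xmx8g

# The garbage collector is selected the same way, e.g. for G1:
java.arg.13=-XX:+UseG1GC
```

A smaller heap makes full collections shorter, which is why the "stop the world" pauses Mark describes become less disruptive even if they still occur.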
Hi,
I'm having an issue with a small (two-node) NiFi cluster where the nodes
will stop processing any queued flowfiles. I haven't seen any error
messages logged related to it, and when attempting to restart the service,
NiFi doesn't respond and the script forcibly kills it. This causes
multiple