Martijn
On Tue, 6 Nov 2018, at 16:53, Karthik Kothareddy (karthikk) [CONT - Type 2]
wrote:
Hello Martijn,
No... we’re still seeing these warnings very frequently, and sometimes nodes
disconnect as well. We tried everything we could and were not able to get to
the bottom of this. When you
issue, and
turned out to be related to hardware not keeping up with the increased volume
of work (which is why a node was added to begin with)
On Fri, 26 Oct 2018, at 19:18, Karthik Kothareddy (karthikk) [CONT - Type 2]
wrote:
Any light on this? We’re still seeing warnings every 10 seconds, which is
really annoying, and we have no idea what’s triggering them.
-Karthik
From: Karthik Kothareddy (karthikk) [CONT - Type 2]
Sent: Tuesday, October 16, 2018 8:27 AM
To: users@nifi.apache.org
Subject: RE: [EXT] Re: Cluster Warnings
On Tue, Oct 16, 2018, 9:59 AM Karthik Kothareddy (karthikk) [CONT - Type 2]
<karth...@micron.com> wrote:
Joe,
The slow node is Node04 in this case, but we get one such slow response from a
random node (Node01, Node02, Node03) every time we see this warning.
-Karthik
From: Joe Witt
Karthik Kothareddy (karthikk) [CONT - Type 2]
<karth...@micron.com> wrote:
Hello,
We're running a 4-node cluster on NiFi 1.7.1. The fourth node was added
recently, and as soon as we added the 4th node we started seeing the below warnings:
Response time from NODE2 was slow for each of the last 3 requests made. To see
more information about timing, enable DEBUG logging for
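The warning is truncated before it names the logger. A sketch of what enabling that DEBUG output might look like in conf/logback.xml, assuming the message comes from NiFi's request replicator (the class name here is an assumption, not quoted from the warning):

```xml
<!-- Assumed logger: the slow-response warning appears to originate from the
     cluster request replication code; enabling DEBUG on it should print
     per-request timing details. Add inside the <configuration> element. -->
<logger name="org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator"
        level="DEBUG"/>
```

Logback picks up the change on its refresh interval, so no restart should be needed.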
different way to implement this that would be useful,
particularly in these larger deployments, I'm happy to take a run at it.
Thanks,
Dan
On Tue, Aug 14, 2018 at 4:38 PM Karthik Kothareddy (karthikk) [CONT - Type 2]
<karth...@micron.com> wrote:
Bryan,
Thanks, I was thinking the s
making one huge
request.
On Tue, Aug 14, 2018 at 11:07 AM, Karthik Kothareddy (karthikk) [CONT
- Type 2] wrote:
> Hi Pierre,
>
> I tried this both on a standalone instance (1.7.1) and a clustered instance
> (1.6.0) where nifi.cluster.node.read.timeout is set to 60 s
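For reference, the cluster timeouts live in nifi.properties on each node. The values below are illustrative assumptions for raising the read timeout past a slow call, not recommended settings:

```properties
# Hypothetical values: raise both timeouts well above the slowest observed
# response before retrying the expensive REST call (set on every node)
nifi.cluster.node.connection.timeout=30 sec
nifi.cluster.node.read.timeout=120 sec
```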
2018-08-14 0:12 GMT+02:00 Karthik Kothareddy (karthikk) [CONT - Type 2]
<karth...@micron.com>:
Joe,
I tried this call on both 1.7.1 and 1.6.0 and am still getting the timeout
exception. I know that this is a very expensive call and requires a lot of
caching on the server side. I was looking f
On Mon, Aug 13, 2018 at 3:08 PM Karthik Kothareddy (karthikk) [CONT - Type 2]
wrote:
All,
I was trying to get all processors from the "root" process group with the
following REST call:
/nifi-api/process-groups/root/processors?includeDescendantGroups=true
and this call keeps timing out with the below exception:
javax.ws.rs.ProcessingException: java.net.SocketTimeoutException: Rea
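One way to sidestep the single huge request is to query each process group separately and give the client an explicit timeout. A minimal sketch, assuming a local unsecured instance (the URL is a placeholder, and this is not the only fix; raising the server-side read timeout is the other lever):

```shell
# Placeholder address for the NiFi instance under test
NIFI_URL="http://localhost:8080/nifi-api"

processors_url() {
    # URL for the processors of a single process group -- one small,
    # cheap request instead of includeDescendantGroups=true on root
    echo "$NIFI_URL/process-groups/$1/processors"
}

fetch_processors() {
    # --max-time bounds the whole transfer in seconds; pick something
    # larger than the slowest response you expect from the cluster
    curl --silent --max-time 120 "$(processors_url "$1")"
}
```

Child group ids can be listed via /process-groups/{id}/process-groups and walked recursively, so each request stays small.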
Hello All,
I'm using NiFi 1.4.0 and was playing with the ExecuteStreamCommand processor to
submit Spark jobs. I wrapped the spark-submit command in a shell script and
tested it first with ExecuteProcess, and everything worked as expected; however,
for my use case I need to use many attrib
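A wrapper script along these lines is one way to do it. This is a hedged sketch, not the poster's actual script: ExecuteStreamCommand passes FlowFile attributes through the Command Arguments property (e.g. via expression language), so the script reads its parameters from "$@"; the DRY_RUN switch and spark options are assumptions for illustration:

```shell
#!/bin/sh
# Hypothetical spark-submit wrapper for ExecuteStreamCommand. NiFi supplies
# the arguments (jar path, main class, job args) on the command line;
# FlowFile content, if any, arrives on stdin.
submit_job() {
    app_jar="$1"; main_class="$2"; shift 2
    # Assemble the command; --master yarn is an assumed deployment choice
    cmd="spark-submit --class $main_class --master yarn $app_jar $*"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"      # print instead of executing, for testing the wiring
    else
        $cmd
    fi
}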
Hello,
I have a use case that goes as follows:
InputPort (flat files) --> ProcessFlowfiles (a combination of ExecuteScripts to
filter out a few fields) --> Load to Teradata (using a custom processor)
To do so, I have to be sure that I process the flowfiles in a certain order
across the nodes so that
Repositories are not designed to have multiple
NiFi instances writing to them.
Each node in a NiFi cluster has its own set of flow files, content, and
provenance, thus the separate repos.
-Bryan
On Wed, Dec 20, 2017 at 6:59 PM, Karthik Kothareddy (karthikk) [CONT - Type 2]
wrote:
> Joe,
>
works fine. In
such a case you need strong IOPS provisioning, and you need to realize there
will be higher round-trip times for read/write ops. But in general this can be
managed.
Thanks
joe
On Dec 20, 2017 6:45 PM, "Karthik Kothareddy (karthikk) [CONT - Type 2]"
<karth...@
same repositories.
The repositories for a given node could be on local disk, or on a mounted
network drive, but local disk will likely have the best performance.
Thanks,
Bryan
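The per-node separation Bryan describes might look like this in each node's nifi.properties; the paths are hypothetical examples, not values from the thread:

```properties
# Each node points at its own local-disk directories (example paths)
nifi.flowfile.repository.directory=/data/nifi/flowfile_repository
nifi.content.repository.directory.default=/data/nifi/content_repository
nifi.provenance.repository.directory.default=/data/nifi/provenance_repository
```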
On Wed, Dec 20, 2017 at 1:25 PM, Karthik Kothareddy (karthikk) [CONT - Type 2]
wrote:
All,
We are trying to set up a new NiFi (1.4.0) cluster with 3 nodes and trying to
decide how we should proceed with the repository structure. Should we point all
nodes at the same repositories (shared filers accessible by all 3 nodes) or
give separate local repositories to all three
Hello All,
We are currently running NiFi 1.3.0 on a Linux (RHEL) box (standalone
instance), and we are facing some strange issues with it. Whenever the total
processor count exceeds 1500, the whole instance slows down (I think there is
no limit on the number of processors an instance ca
Hello All,
I am using NiFi 1.2.0 with some customizations. I was trying to query a
Teradata table using the ExecuteSQL processor, and it is giving me the below
error. I am querying a view that casts the VARCHAR columns into DOUBLE. This
error doesn't seem to occur when queried directly without