Yes, that's correct. Is there a way to identify the thread pool name of the stuck ZK client thread from the dump command's output? Then it would be possible to use the Java attach API to find and stop the thread with an agent.
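Something along these lines might work. This is just a minimal sketch, assuming the stuck thread's name (copied from the dump output) can be matched by prefix; the class names, the agent jar path, and the prefix matching are my own assumptions, not anything NiFi provides:

// InterruptAgent.java -- hypothetical agent class; package it in a jar
// whose manifest contains "Agent-Class: InterruptAgent".
import java.lang.instrument.Instrumentation;

public class InterruptAgent {

    // Invoked when the agent is loaded into the already-running JVM.
    // agentArgs carries the thread-name prefix copied from the dump output.
    public static void agentmain(String agentArgs, Instrumentation inst) {
        for (Thread t : Thread.getAllStackTraces().keySet()) {
            if (t.getName().startsWith(agentArgs)) {
                System.out.println("Interrupting thread: " + t.getName());
                t.interrupt();
            }
        }
    }
}

// AttachAndInterrupt.java -- hypothetical launcher, run from a second JVM.
// On JDK 8 the attach API lives in tools.jar, which must be on the classpath.
import com.sun.tools.attach.VirtualMachine;

public class AttachAndInterrupt {
    public static void main(String[] args) throws Exception {
        String pid = args[0];          // pid of the NiFi JVM
        String namePrefix = args[1];   // thread/pool name prefix from the dump
        VirtualMachine vm = VirtualMachine.attach(pid);
        try {
            // Loads InterruptAgent into the target JVM and passes the prefix.
            vm.loadAgent("/path/to/interrupt-agent.jar", namePrefix);
        } finally {
            vm.detach();
        }
    }
}

Whether interrupt() actually unsticks anything depends on the ZK client code honoring interruption; a thread blocked in native I/O may ignore it, so this would be best-effort at most.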
https://docs.oracle.com/javase/8/docs/jdk/api/attach/spec/com/sun/tools/attach/VirtualMachine.html

Or a REST API call to interrupt internal threads safely?

Sent from cellphone.

On Mon, Apr 2, 2018, 5:52 PM Jeremy Dyer <[email protected]> wrote:

> Hey Joseph,
>
> I don't have a sure-shot fix, but I'm willing to bet this is the same
> issue we all experience with any ZooKeeper-based system, Phoenix for
> example: the real problem is the JVM hanging while trying to communicate
> with ZooKeeper, more than the actual underlying system.
>
> Is your Kafka cluster using ZooKeeper or not?
>
> Sent from my iPhone
>
> > On Apr 2, 2018, at 5:31 PM, Joseph Niemiec <[email protected]> wrote:
> >
> > Hi all,
> >
> > We have some Kafka processors that are getting stuck with 2 threads
> > always on; even after they are shut down, we can wait hours and they
> > never stop. I have seen this behavior before with HDFS processors, but
> > only on secure Kerberos clusters. This cluster is not secure at all.
> >
> > Kafka 0.10
> > NiFi 1.5.0 (Apache)
> >
> > I know you can do nifi.sh dump to get thread info, but that's not
> > really helping us manage the problem. If there were a hard-reset button
> > that didn't involve restarting the entire JVM, that would be great... It
> > takes a while for our NiFi instances to restart at times, and we would
> > rather not stop everything for a single bad processor...
> >
> > Any tips/recommendations on how we can identify what's really making
> > this ConsumeKafka processor stuck?
> >
> > Thanks!
> >
> > --
> > Joseph
