[jira] [Commented] (NIFI-12194) Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT securityProtocol

2023-10-30 Thread Peter Schmitzer (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17781213#comment-17781213
 ] 

Peter Schmitzer commented on NIFI-12194:


Hi [~pgrey], thank you very much for following up on this! If it works out as 
you describe, it is definitely a sufficient fix that mitigates the risk for us.

> Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT 
> securityProtocol
> -
>
> Key: NIFI-12194
> URL: https://issues.apache.org/jira/browse/NIFI-12194
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.21.0, 1.23.0
>Reporter: Peter Schmitzer
>Assignee: Paul Grey
>Priority: Major
> Attachments: image-2023-09-27-15-56-02-438.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When starting the ConsumeKafka_2_6 processor with the SASL mechanism GSSAPI 
> and the security protocol PLAINTEXT (although SSL would be correct), the UI 
> crashed and NiFi was no longer accessible. Not only was the frontend 
> unreachable; the other processors in our flow also stopped performing well, 
> according to our dashboards.
> We were able to reproduce this using the config described above.
> Our NiFi in preprod (where this was detected) runs in a Kubernetes cluster:
>  * version 1.21.0
>  * 3 nodes
>  * jvmMemory: 1536m
>  * 3G memory (limit)
>  * 400m cpu (request)
>  * zookeeper
> The logs do not show any unusual entries when the issue is triggered. 
> Inspecting the pod metrics, we found a spike in memory usage.
> The issue is a bit scary for us, because a rather innocent config parameter 
> in one single processor is able to bring our whole cluster down.
>  
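The failing combination boils down to a silently conflicting Kafka client setting: a SASL mechanism such as GSSAPI only takes effect when the security protocol is one of the SASL_* values. A minimal pre-flight check along these lines (a sketch, not NiFi or Kafka client code; the function name is hypothetical) would flag the configuration from this report before the processor starts:

```python
# Hypothetical pre-flight check (a sketch, not NiFi code): a SASL mechanism
# only takes effect when the security protocol is one of the SASL_* values,
# so the PLAINTEXT + GSSAPI combination is inconsistent and worth flagging.
SASL_PROTOCOLS = {"SASL_PLAINTEXT", "SASL_SSL"}

def validate_kafka_security(config: dict) -> list:
    """Return warnings for inconsistent Kafka security settings."""
    warnings = []
    protocol = config.get("security.protocol", "PLAINTEXT")
    mechanism = config.get("sasl.mechanism")
    if mechanism and protocol not in SASL_PROTOCOLS:
        warnings.append(
            "sasl.mechanism=%s has no effect with security.protocol=%s; "
            "use SASL_PLAINTEXT or SASL_SSL" % (mechanism, protocol)
        )
    return warnings

# The configuration from this report would be flagged:
validate_kafka_security({"security.protocol": "PLAINTEXT",
                         "sasl.mechanism": "GSSAPI"})
```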



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-12194) Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT securityProtocol

2023-10-13 Thread Peter Schmitzer (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-12194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schmitzer updated NIFI-12194:
---
Affects Version/s: 1.23.0






[jira] [Created] (NIFI-12194) Nifi fails when ConsumeKafka_2_6 processor is started with PLAINTEXT securityProtocol

2023-10-09 Thread Peter Schmitzer (Jira)
Peter Schmitzer created NIFI-12194:
--

 Summary: Nifi fails when ConsumeKafka_2_6 processor is started 
with PLAINTEXT securityProtocol
 Key: NIFI-12194
 URL: https://issues.apache.org/jira/browse/NIFI-12194
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.21.0
Reporter: Peter Schmitzer
 Attachments: image-2023-09-27-15-56-02-438.png






[jira] [Resolved] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2023-04-13 Thread Peter Schmitzer (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Schmitzer resolved NIFI-10353.

Resolution: Workaround

Removed the Event Hub processor and used a standard Kafka consumer instead.

>  ConsumeAzureEventHub does not stop even though output queue is backpressured
> -
>
> Key: NIFI-10353
> URL: https://issues.apache.org/jira/browse/NIFI-10353
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.16.3
>Reporter: Peter Schmitzer
>Priority: Major
>
> ConsumeAzureEventHub does not seem to honor backpressure and continues to 
> send FlowFiles to the output queue even though it is backpressured. This 
> endlessly growing queue will ultimately lead to NiFi going into overload and 
> becoming unhealthy.
> The expectation was that the processor would stop putting further data into 
> the outgoing queue as soon as it is backpressured.





[jira] [Commented] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2023-04-13 Thread Peter Schmitzer (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17711708#comment-17711708
 ] 

Peter Schmitzer commented on NIFI-10353:


Update from our side:
Azure Event Hubs supports connections over the standard Kafka protocol, so 
standard Kafka consumers can (and, I believe, should) be used for that 
purpose. We have removed this processor from all our flows and no longer need 
this to be improved.
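For reference, connecting a plain Kafka consumer to Event Hubs uses the namespace's Kafka-compatible endpoint on port 9093 with SASL_SSL and the PLAIN mechanism. A sketch of the settings, using librdkafka-style property names (the namespace and connection string are placeholders, not values from this issue):

```python
# Sketch of Kafka client settings for the Event Hubs Kafka-compatible
# endpoint, using librdkafka-style property names; namespace and
# connection string are placeholders.
def eventhubs_kafka_config(namespace: str, connection_string: str) -> dict:
    return {
        # Event Hubs exposes its Kafka endpoint on port 9093.
        "bootstrap.servers": "%s.servicebus.windows.net:9093" % namespace,
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "PLAIN",
        # With SASL PLAIN, the user name is the literal "$ConnectionString"
        # and the password is the namespace connection string.
        "sasl.username": "$ConnectionString",
        "sasl.password": connection_string,
    }
```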






[jira] [Commented] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2022-11-21 Thread Peter Schmitzer (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17636520#comment-17636520
 ] 

Peter Schmitzer commented on NIFI-10353:


Hi [~exceptionfactory], is there any process we need to follow to get this 
ticket looked at by someone? Thanks






[jira] [Commented] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2022-08-15 Thread Peter Schmitzer (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17579725#comment-17579725
 ] 

Peter Schmitzer commented on NIFI-10353:


Hi guys, we are using an Azure Event Hub as the central input source for our 
data ingestion and thus have the ConsumeAzureEventHub processor in place. Our 
idea was that, in case of outages of downstream systems, we could use the 
backpressure mechanism of the NiFi queues to have every queue backpressured up 
to the Azure Event Hub itself and then basically have NiFi "stop processing". 
All new messages would wait in the Event Hub for us.
Unfortunately, that does not happen; it behaves as described above.
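The intended behavior can be sketched as a consume loop that simply stops pulling from the source while the outgoing queue is full (a toy model, not NiFi internals):

```python
import queue

def consume_with_backpressure(source: list, out: queue.Queue) -> int:
    """Drain `source` into `out`, stopping once `out` is backpressured."""
    consumed = 0
    while source:
        if out.full():
            # Backpressure reached: stop consuming and leave the rest
            # at the source instead of growing the queue endlessly.
            break
        out.put(source.pop(0))
        consumed += 1
    return consumed

# With a queue limited to 3 items, only 3 of 10 messages are consumed;
# the remaining 7 stay at the source.
outgoing = queue.Queue(maxsize=3)
pending = list(range(10))
consume_with_backpressure(pending, outgoing)  # returns 3
```

Anything not consumed stays at the source, which for an Event Hub means the messages simply wait at the broker until downstream pressure clears.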






[jira] [Created] (NIFI-10353) ConsumeAzureEventHub does not stop even though output queue is backpressured

2022-08-15 Thread Peter Schmitzer (Jira)
Peter Schmitzer created NIFI-10353:
--

 Summary:  ConsumeAzureEventHub does not stop even though output 
queue is backpressured
 Key: NIFI-10353
 URL: https://issues.apache.org/jira/browse/NIFI-10353
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.16.3
Reporter: Peter Schmitzer




