[jira] [Commented] (NIFI-12844) JASN1Reader does not recognize record

2024-04-09 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-12844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835320#comment-17835320
 ] 

Josef Zahner commented on NIFI-12844:
-

[~tpalfy] could you support us here?

> JASN1Reader does not recognize record
> -
>
> Key: NIFI-12844
> URL: https://issues.apache.org/jira/browse/NIFI-12844
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.23.2
>Reporter: Beat Fuellemann
>Priority: Major
> Attachments: Bildschirmfoto 2024-02-26 um 17.52.01.png, 
> Bildschirmfoto 2024-02-26 um 17.52.17.png, 
> ReadLocalFile_with_JASN1Reader.png, SingleASN1Packet_Printscreen.png, 
> TS33128IdentityAssociation.asn, keepalive.asn
>
>
> We would like to use the NiFi 1.23.2 ListenTCPRecord processor to read an 
> ASN.1 stream. Unfortunately, NiFi does not recognize any records.
> Is there a bug, or am I doing something wrong?
> Attached you can find a test file with 3 packets I receive via ASN.1, along 
> with the ASN.1 schema and some screenshots of my configuration.
> [~tpalfy]: Maybe you could help here?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-11251) Nifi Registry Client with Nested PG - sync issue

2023-03-07 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697290#comment-17697290
 ] 

Josef Zahner edited comment on NIFI-11251 at 3/7/23 8:22 AM:
-

[~simonbence] as my colleague already mentioned, we still have NiFi Registry 
sync issues with NiFi v1.20.0 and nested flows. It seems that your fix 
https://issues.apache.org/jira/browse/NIFI-10973 doesn't cover this use case? 
Can you take care of it?


was (Author: jzahner):
[~simonbence] as my colleague already mentioned, we still have NiFi Registry 
sync issues with NiFi v 1.20.0 and nested flows. Seems that  your fix 
https://issues.apache.org/jira/browse/NIFI-10973 doesn't cover any use case? 
Can you take care of it?

> Nifi Registry Client with Nested PG - sync issue
> 
>
> Key: NIFI-11251
> URL: https://issues.apache.org/jira/browse/NIFI-11251
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.20.0
>Reporter: Beat Fuellemann
>Priority: Major
> Attachments: ChangeVersionError.png, FailureRepro.xml
>
>
> We upgraded to NiFi 1.20.0 and still have problems with the NiFi Registry 
> Client with nested PGs. Parent PG and child PG are both
> committed for versioning in NiFi Registry.
> We use NiFi Registry to move NiFi flows from the staging environment to 
> production via Registry versioning.
> The problem relates to the ExecuteScript processor.
> How we reproduced the issue:
> On NIFI Instance 1
>  - Create Parent PG "ReproduceSyncProblem"
>  - Create Child PG "InnerProcessGroup"
>  - Add some Processors and add Groovy Script
>  - Commit Child PG "InnerProcessGroup"
>  - commit Parent PG "ReproduceSyncProblem"
> On NIFI Instance 2
>  - import PG from Registry into the canvas
>  - do some changes in the Child PG "InnerProcessGroup" -> Change something in 
> the groovy Script
>  - Commit Child PG "InnerProcessGroup"
>  - commit Parent PG "ReproduceSyncProblem"
> on NIFI Instance 1
>  - Update Version on Parent PG "ReproduceSyncProblem"
> -> Expectation: all changes should be applied, including the Child PG changes
> -> ISSUE: ERROR during "Change Version"





[jira] [Commented] (NIFI-11251) Nifi Registry Client with Nested PG - sync issue

2023-03-07 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17697290#comment-17697290
 ] 

Josef Zahner commented on NIFI-11251:
-

[~simonbence] as my colleague already mentioned, we still have NiFi Registry 
sync issues with NiFi v1.20.0 and nested flows. It seems that your fix 
https://issues.apache.org/jira/browse/NIFI-10973 doesn't cover this use case? 
Can you take care of it?






[jira] [Commented] (NIFI-10973) NiFi Registry Client with Nested PGs/Flows - sync issue (again)

2022-12-22 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17651125#comment-17651125
 ] 

Josef Zahner commented on NIFI-10973:
-

Yesterday it got even worse: during a config sync (NiFi Registry "Change 
Version") the GUI showed a progress bar stuck at 40% for multiple seconds, 
until we saw that one of the two cluster nodes had lost its connectivity. The 
first impression was "oh, that's not good, but it will connect again" - 
however, that did not happen. The node tried to reconnect to the remaining 
node but never really connected; the log just showed alternating 
connect/disconnect messages. In the end we decided to stop everything and 
start the NiFis again one by one.

{color:#de350b}To be honest, we are losing more and more trust in the config 
sync via the Registry, as it currently works badly in our case. Every second 
sync or so doesn't work as expected (errors that something can't be stopped -> 
the second try works, local changes although versions are correct, cluster 
disconnects, ...). I don't know whether all of this is related to the nested 
flows, but I can't believe that we are the only ones with these issues.{color}

> NiFi Registry Client with Nested PGs/Flows - sync issue (again)
> ---
>
> Key: NIFI-10973
> URL: https://issues.apache.org/jira/browse/NIFI-10973
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: NiFi Registry
>Affects Versions: 1.19.1
>Reporter: Josef Zahner
>Assignee: Simon Bence
>Priority: Critical
> Attachments: Nested_PG_Test.xml, Sync_issue_State.png, 
> Sync_issue_local_changes.png
>
>
> We just upgraded to NiFi 1.19.1 and we have an issue with the NiFi Registry 
> Client in combination with nested PGs (parent PG and child PGs are both 
> committed as NiFi Registry flows).
> Steps to reproduce the issue (based on the attached template xml):
> Hint: {color:#57d9a3}Green Color {color}-> NiFi Registry flow Master; 
> {color:#4c9aff}Blue Color{color} -> Secondary NiFi/Canvas where you import 
> the flow from Registry
>  * {color:#57d9a3}Import the XML template{color}
>  * {color:#57d9a3}Start Version Control for "Child PG"{color}
>  * {color:#57d9a3}Start Version Control for "Parent PG"{color}
>  * {color:#4c9aff}On an independent canvas/nifi click "Add Process Group" and 
> "Import from Registry", select the "Parent PG" flow{color}
>  * {color:#57d9a3}On the original "Child PG" rename the only existing 
> processor, eg. to "UpdateCounter New"{color}
>  * {color:#57d9a3}Commit the change for the "Child PG" (Processor 
> Rename){color}
>  * {color:#57d9a3}Commit the change for the "Parent PG" (Version change of 
> "Child PG"){color}
>  * {color:#4c9aff}Now click "Change Version" on the other NiFi/Canvas and 
> switch to the newest version of the "Parent PG", which seems to be 
> successful. *However*, *now we hit the issue: NiFi reports a successful 
> change, but it also shows "Local Changes" for the "Child PG" (not for 
> the "Parent PG"). To get to a good state you have to click "Revert Local 
> Changes" on the "Child PG", which makes no sense; it should directly sync the 
> "Child PG" to the committed version. The version number of the "Child PG" 
> has been updated, but not the actual configuration, as you can see below in 
> the screenshots. They show a component name change, which is true for the 
> version, but to get to the new version we have to revert the local 
> changes.*{color}
>  
> Screenshots with the Failure State below, "Parent PG" is in sync, but not the 
> "Child PG". The only thing I've done is to change the Version on the "Parent 
> PG":
> {color:#4c9aff}*!Sync_issue_State.png!*{color}
>  
> {color:#4c9aff}*!Sync_issue_local_changes.png!*{color}





[jira] [Commented] (NIFI-10985) Upgrade 1.18.0 - > 1.19.1 / many local changes

2022-12-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17649751#comment-17649751
 ] 

Josef Zahner commented on NIFI-10985:
-

We had a pretty similar issue when we upgraded from 1.18.0 to 1.19.1:

[https://lists.apache.org/thread/m203opy9fv205ztlgm3c40tqsphh1lym]

However, in our case the changes were not immediately visible; they only 
showed up after the first change. Additionally, we have nested flows (within 
the registry flow we have other registry flows). I don't know whether this is 
the case in your config.

> Upgrade 1.18.0 - > 1.19.1 / many local changes
> --
>
> Key: NIFI-10985
> URL: https://issues.apache.org/jira/browse/NIFI-10985
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.19.1
> Environment: Debian Linux
>Reporter: hipotures
>Priority: Major
>
> We have flow synchronization (DEV->PROD) using NiFi Registry. DEV was 
> upgraded 5 days ago. Today we were trying to synchronize new flow versions 
> DEV->PROD, but without success. Error: _Failed to perform update flow 
> request java.lang.IllegalStateException: The given connection is not 
> currently registered for this Funnel_ (next ticket). We decided to upgrade 
> PROD, but after the upgrade the versioning system showed hundreds of local 
> changes. We reverted to version 1.18 and there were no local changes. We are 
> stuck here.





[jira] [Created] (NIFI-10996) CSV output with header - but always, even for 0 record flowfiles

2022-12-20 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-10996:
---

 Summary: CSV output with header - but always, even for 0 record 
flowfiles
 Key: NIFI-10996
 URL: https://issues.apache.org/jira/browse/NIFI-10996
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.19.1
Reporter: Josef Zahner
 Attachments: NiFi_CSV_header_true.png

We use a “ConvertRecord” processor where we convert an AVRO to a CSV. For that 
*CSV* output we would like to have the {*}header enabled{*}, so we tried to set 
“{{{}Include Header Line – true{}}}” for the Controller Service of the 
CSVRecordSetWriter. The issue is, *if we have zero records, the header doesn’t 
show up* (but it was there of course in the AVRO file). We need to have it as 
the columns are important for us, even if we have 0 records.

At the moment we work around it with an extra ExecuteScript processor just 
before the ConvertRecord, where we always add an extra record containing the 
header line as a string. But it feels a bit hacky, as the record.count 
attribute is then 1 record too high (due to the fake header record).
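
The requested behaviour can be illustrated with a small sketch (plain Python 
rather than a NiFi record writer; the field names are made up): the header is 
emitted unconditionally, so a zero-record input still yields the column line.

```python
import csv
import io

def records_to_csv(records, fieldnames):
    """Serialize records to CSV, always emitting the header line,
    even when the record list is empty."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()   # written unconditionally, even for 0 records
    writer.writerows(records)
    return buf.getvalue()

# Zero records: the output still contains the header line
print(records_to_csv([], ["host", "status"]))
```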

!NiFi_CSV_header_true.png!

Comment from [~joewitt] from users mailinglist: _"Makes sense what you're 
looking for.  Just not sure where this 'concern' would live whether it is in 
the processors themselves or the controller services for the writers."_

It seems that I'm not alone with that requirement, at least one other person 
(Jens M. Kofoed) uses a similar workaround.





[jira] [Updated] (NIFI-10973) NiFi Registry Client with Nested PGs/Flows - sync issue (again)

2022-12-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-10973:

Summary: NiFi Registry Client with Nested PGs/Flows - sync issue (again)  
(was: NiFi Registry Client with Nested PGs - sync issue (again))






[jira] [Created] (NIFI-10973) NiFi Registry Client with Nested PGs - sync issue (again)

2022-12-13 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-10973:
---

 Summary: NiFi Registry Client with Nested PGs - sync issue (again)
 Key: NIFI-10973
 URL: https://issues.apache.org/jira/browse/NIFI-10973
 Project: Apache NiFi
  Issue Type: Bug
  Components: NiFi Registry
Affects Versions: 1.19.1
Reporter: Josef Zahner
 Attachments: Nested_PG_Test.xml, Sync_issue_State.png, 
Sync_issue_local_changes.png

We just upgraded to NiFi 1.19.1 and we have an issue with the NiFi Registry 
Client in combination with nested PGs (parent PG and child PGs are both 
committed as NiFi Registry flows).

Steps to reproduce the issue (based on the attached template xml):

Hint: {color:#57d9a3}Green Color {color}-> NiFi Registry flow Master; 
{color:#4c9aff}Blue Color{color} -> Secondary NiFi/Canvas where you import the 
flow from Registry
 * {color:#57d9a3}Import the XML template{color}
 * {color:#57d9a3}Start Version Control for "Child PG"{color}
 * {color:#57d9a3}Start Version Control for "Parent PG"{color}
 * {color:#4c9aff}On an independent canvas/nifi click "Add Process Group" and 
"Import from Registry", select the "Parent PG" flow{color}
 * {color:#57d9a3}On the original "Child PG" rename the only existing 
processor, eg. to "UpdateCounter New"{color}
 * {color:#57d9a3}Commit the change for the "Child PG" (Processor Rename){color}
 * {color:#57d9a3}Commit the change for the "Parent PG" (Version change of 
"Child PG"){color}
 * {color:#4c9aff}Now click "Change Version" on the other NiFi/Canvas and 
switch to the newest version of the "Parent PG", which seems to be successful. 
*However*, *now we hit the issue: NiFi reports a successful change, but it 
also shows "Local Changes" for the "Child PG" (not for the "Parent PG"). To 
get to a good state you have to click "Revert Local Changes" on the "Child 
PG", which makes no sense; it should directly sync the "Child PG" to the 
committed version. The version number of the "Child PG" has been updated, but 
not the actual configuration, as you can see below in the screenshots. They 
show a component name change, which is true for the version, but to get to 
the new version we have to revert the local changes.*{color}

 

Screenshots with the Failure State below, "Parent PG" is in sync, but not the 
"Child PG". The only thing I've done is to change the Version on the "Parent 
PG":

{color:#4c9aff}*!Sync_issue_State.png!*{color}

 

{color:#4c9aff}*!Sync_issue_local_changes.png!*{color}





[jira] [Commented] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2022-09-19 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17606443#comment-17606443
 ] 

Josef Zahner commented on NIFI-6860:


We are now on NiFi 1.15.3 and Java OpenJDK 11.0.16. The issue seems to be 
gone, at least for us; NiFi now starts without a problem.

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Troy Melhase
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS, security
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml, 
> login-identity-providers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now, after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}

[jira] [Commented] (NIFI-8358) Improve Search and Search Results

2022-09-12 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17603101#comment-17603101
 ] 

Josef Zahner commented on NIFI-8358:


The search function is essential; it would definitely be a huge win to have a 
dedicated search entry in the hamburger menu.

We have a lot of processors and scripts with a lot of code, so searching today 
is very annoying, especially because the window always closes after selecting 
a single search result.

> Improve Search and Search Results
> -
>
> Key: NIFI-8358
> URL: https://issues.apache.org/jira/browse/NIFI-8358
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core UI
>Reporter: John Wise
>Priority: Minor
>  Labels: EaseOfUse, search,
>
> Search needs the following improvements:
>  * Sticky panel for search results, so that multiple processors can be 
> visited without having to perform the same search again
>  * Case-sensitive search, for finding instances of the same string in 
> different cases – e.g. datastatus & dataStatus
>  * Add "Search Here" and "Search Here & Below" radio buttons (in addition to 
> "scope:here") to limit the search scope to the current process group, or 
> current process group and below
>  * Alternate result row colors for better separation of search results
>  * Bolded processor names; would require different class(es) for the Parent 
> and Versioned lines, because all of that bolded isn't pretty





[jira] [Commented] (NIFI-10226) Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503

2022-07-14 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566818#comment-17566818
 ] 

Josef Zahner commented on NIFI-10226:
-

Btw. today the cluster dropped the same 1'500'000 flowfiles within a few 
seconds, so no SocketTimeout anymore. And we can't work around the number of 
flowfiles, as we are doing a ListSFTP on a very large folder with 1'500'000 
files; whenever it runs, we always get all the files. Filtering by regex is 
possible, but it would be very complicated to cover all files.
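
For reference, ListSFTP's file-filter regex behaves like an ordinary anchored 
regular expression over file names; a quick sketch of the idea (the file names 
and pattern here are hypothetical, not from our flow):

```python
import re

# Hypothetical directory listing as ListSFTP would see it
names = ["data_2022-07-01.csv", "data_2022-07-02.csv", "report.txt"]

# A "File Filter Regex"-style pattern: keep only the daily data files
pattern = re.compile(r"^data_\d{4}-\d{2}-\d{2}\.csv$")
matched = [n for n in names if pattern.match(n)]
print(matched)  # ['data_2022-07-01.csv', 'data_2022-07-02.csv']
```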

> Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503
> -
>
> Key: NIFI-10226
> URL: https://issues.apache.org/jira/browse/NIFI-10226
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.15.3
> Environment: 8-Node cluster
>Reporter: Josef Zahner
>Priority: Major
> Attachments: Screenshot 2022-07-13 at 13.46.23.png, 
> image-2022-07-14-12-23-15-980.png, image-2022-07-14-12-28-58-329.png
>
>
> We have a ListSFTP processor which produces 1'500'000 flowfiles as output. 
> When we try to "Empty Queue" it takes multiple minutes and the GUI shows a 
> HTTP 503 error during that period. We can't open the GUI again until the 
> deletion is complete. Those flowfiles are all on the primary node and not 
> load balanced within the cluster.  Below is the final GUI message. The 
> intermediate message looks similar but without the removed string.
> !Screenshot 2022-07-13 at 13.46.23.png!





[jira] [Commented] (NIFI-10226) Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503

2022-07-14 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17566755#comment-17566755
 ] 

Josef Zahner commented on NIFI-10226:
-

Hi [~joewitt] ,

I did some research and it looks like the emptying of the queue was abnormally 
slow during that period; I don't know why. I can see that based on the 
following log messages:

 
{code:java}
2022-07-13 12:08:39,987 INFO [Drop FlowFiles for Connection 
592c3c09-4b56-1155--aaae5907] o.a.n.c.r.WriteAheadFlowFileRepository 
Repository updated to reflect that 1 FlowFiles were swapped in to 
FlowFileQueue[id=592c3c09-4b56-1155--aaae5907, Load Balance 
Strategy=DO_NOT_LOAD_BALANCE, size=QueueSize[FlowFiles=1536640, ContentSize=0 
Bytes]] {code}
The time between successive 10'000-FlowFile drop messages was sometimes around 
3s. Please check my Splunk screenshot below. For the first flowfiles it was 
fast, but then the gap between messages grew to 3s. So it took multiple 
minutes to drop all 1'500'000 flowfiles.
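
The gap between those log messages can be measured directly from the leading 
timestamps; a small sketch (the two log lines are a hypothetical excerpt in 
the same format as the message above):

```python
from datetime import datetime

# Hypothetical excerpt: timestamps of two consecutive drop messages
lines = [
    "2022-07-13 12:08:39,987 INFO [Drop FlowFiles ...] 10000 FlowFiles dropped",
    "2022-07-13 12:08:43,105 INFO [Drop FlowFiles ...] 10000 FlowFiles dropped",
]

def gaps(log_lines):
    """Seconds between consecutive log lines, parsed from the leading timestamp."""
    ts = [datetime.strptime(line[:23], "%Y-%m-%d %H:%M:%S,%f")
          for line in log_lines]
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

print(gaps(lines))  # [3.118]
```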

 

!image-2022-07-14-12-23-15-980.png!

After a bit more than 1minute, we saw the first socket timeouts:

!image-2022-07-14-12-28-58-329.png!

To sum it up: please ensure that slow dropping doesn't cause 
SocketTimeoutExceptions, and please extend the GUI so it doesn't just show 0% 
and 100% as it does today. The log reports progress in steps of 10'000 
flowfiles, why not the GUI? We had no idea when the dropping would be 
finished...

Hope this helps






[jira] [Updated] (NIFI-10226) Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503

2022-07-14 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-10226:

Attachment: image-2022-07-14-12-28-58-329.png






[jira] [Updated] (NIFI-10226) Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503

2022-07-14 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-10226:

Attachment: image-2022-07-14-12-23-15-980.png






[jira] [Created] (NIFI-10227) Queue Order

2022-07-13 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-10227:
---

 Summary: Queue Order
 Key: NIFI-10227
 URL: https://issues.apache.org/jira/browse/NIFI-10227
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.15.3
 Environment: NiFi Cluster (2 & 8 Node)
Reporter: Josef Zahner
 Attachments: Screenshot 2022-07-11 at 11.25.29.png

When we do "List Queue" and the queue gets filled over time, we often see 
that the order looks random (and yes, it's all on the same node). The Position 
has no correlation to the "Queued Duration" or any other column (at least for 
me). Please have a look at my screenshot; the queued durations are mixed up. 
This makes it complicated to find the newest/oldest flowfile.

!Screenshot 2022-07-11 at 11.25.29.png!

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-10226) Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503

2022-07-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-10226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-10226:

Summary: Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503  (was: 
Empty Queue of 1'500'000 flowfiles - NiFi GUI shows HTTP Error 503)

> Empty Queue 1'500'000 flowfiles - NiFi GUI HTTP Error 503
> -
>
> Key: NIFI-10226
> URL: https://issues.apache.org/jira/browse/NIFI-10226
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.15.3
> Environment: 8-Node cluster
>Reporter: Josef Zahner
>Priority: Major
> Attachments: Screenshot 2022-07-13 at 13.46.23.png
>
>
> We have a ListSFTP processor which produces 1'500'000 flowfiles as output. 
> When we try to "Empty Queue" it takes multiple minutes and the GUI shows a 
> HTTP 503 error during that period. We can't open the GUI again until the 
> deletion is complete. Those flowfiles are all on the primary node and not 
> load balanced within the cluster.  Below is the final GUI message. The 
> intermediate message looks similar but without the removed string.
> !Screenshot 2022-07-13 at 13.46.23.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-10226) Empty Queue of 1'500'000 flowfiles - NiFi GUI shows HTTP Error 503

2022-07-13 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-10226:
---

 Summary: Empty Queue of 1'500'000 flowfiles - NiFi GUI shows HTTP 
Error 503
 Key: NIFI-10226
 URL: https://issues.apache.org/jira/browse/NIFI-10226
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.15.3
 Environment: 8-Node cluster
Reporter: Josef Zahner
 Attachments: Screenshot 2022-07-13 at 13.46.23.png

We have a ListSFTP processor which produces 1'500'000 flowfiles as output. When 
we try to "Empty Queue" it takes multiple minutes and the GUI shows a HTTP 503 
error during that period. We can't open the GUI again until the deletion is 
complete. Those flowfiles are all on the primary node and not load balanced 
within the cluster.  Below is the final GUI message. The intermediate message 
looks similar but without the removed string.

!Screenshot 2022-07-13 at 13.46.23.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-10107) Allow Sensitive & Non-Sensitive Parameters in one Property

2022-06-09 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-10107:
---

 Summary: Allow Sensitive & Non-Sensitive Parameters in one Property
 Key: NIFI-10107
 URL: https://issues.apache.org/jira/browse/NIFI-10107
 Project: Apache NiFi
  Issue Type: New Feature
Reporter: Josef Zahner
 Attachments: Processor_With_Sensitive_and_NonSensitive_Params.png, 
image-2022-06-09-08-18-25-664.png

There have also been some improvements to support dynamic sensitive properties 
(https://issues.apache.org/jira/browse/NIFI-9957). However, we have the use 
case that we would like to combine Sensitive and Non-Sensitive parameters in 
one property. Please have a look at the UpdateAttribute processor below.
!image-2022-06-09-08-18-25-664.png!

 

We have to verify a username (Non-Sensitive) and a password (Sensitive). Today 
we have to use a non-sensitive parameter for the password to get it to work, 
which is not ideal.

Please allow using Sensitive and Non-Sensitive Parameters in one 
Property.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (NIFI-9825) Execution "Primary node" with incoming connections limitation bug

2022-06-01 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17544723#comment-17544723
 ] 

Josef Zahner commented on NIFI-9825:


[~markap14] the fact that in this case the "failure" output/loop is useless 
wasn't clear to me. But I fully get your point now. This is probably not clear 
to a lot of people. In my case the first impression was that for a successful 
execution of the processor the flowfile comes out of the "success" output, and 
for a failed execution it would use the "failure" output (with just attributes 
but no content), as is usual with real incoming connections.

A UX improvement would definitely be appreciated for this use case. The 
introduced warning from the picture reinforced this misunderstanding...

> Execution "Primary node" with incoming connections limitation bug
> -
>
> Key: NIFI-9825
> URL: https://issues.apache.org/jira/browse/NIFI-9825
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Priority: Major
>  Labels: ExecuteSQL, primary
> Attachments: Screenshot 2022-03-23 at 13.53.52.png
>
>
> At some point in the past, a check was added so that if a processor can have 
> incoming connections, NiFi prevents the user from setting execution to 
> "Primary node" only. In theory this is fine, but the problem is that the 
> "ExecuteSQL" processor can run with or without incoming connections.
> I'm using the processor on a cluster without incoming connections, but I 
> never want to execute the same query on all cluster nodes, and it's 
> inconvenient to put a "GenerateFlowFile" processor with "Primary only" 
> execution mode in front of the "ExecuteSQL". At the moment I can't set the 
> "ExecuteSQL" to "Primary node" only without a *connected* incoming connection 
> as NiFi generates the error message "{_}'Execution Node' is invalid because 
> Processors with incoming connections cannot be scheduled for Primary Node 
> only{_}". Please check my screenshot.
> NiFi should check not for the possibility of incoming connections on the 
> processor, but for actually connected connections. Thanks
> !Screenshot 2022-03-23 at 13.53.52.png!



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (NIFI-9991) ASN1 RecordReader does process only the first record

2022-05-05 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-9991:
--

 Summary: ASN1 RecordReader does process only the first record
 Key: NIFI-9991
 URL: https://issues.apache.org/jira/browse/NIFI-9991
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.15.3
Reporter: Josef Zahner
 Attachments: ConvertRecord.png, JASN1Reader.png, 
asn1_browser_view-1.png, m2m.asn

We have created a ConvertRecord processor and added the JASN1Reader with the 
configuration below. As the RecordWriter we use JSON or CSV, but NiFi always 
successfully writes only the very first record. All other records from the 
same ASN1 file are ignored without any error or warning.

I've attached the ASN1 schema "m2m.asn"

 

!ConvertRecord.png|width=658,height=248!

 

!JASN1Reader.png|width=697,height=233!

ASN1VE (our ASN1 browser GUI) shows 4 records. But NiFi shows only the first 
element:

!asn1_browser_view-1.png!

 

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Commented] (NIFI-8483) Restart NiFi - duplicate SFTP flowfiles

2022-03-29 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17513974#comment-17513974
 ] 

Josef Zahner commented on NIFI-8483:


We have upgraded to NiFi 1.15.3 and it seems that the issue is gone.

> Restart NiFi - duplicate SFTP flowfiles
> ---
>
> Key: NIFI-8483
> URL: https://issues.apache.org/jira/browse/NIFI-8483
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.13.2
> Environment: Java 1.8.0_282, CentOS 7, 8-Node Cluster
>Reporter: Josef Zahner
>Priority: Major
> Attachments: SFTP_failure.png
>
>
> Since the upgrade from NiFi 1.11.4 to 1.13.2 we faced an issue with the 
> FetchSFTP & PutSFTP processors. We have a 8-Node NiFi cluster. Pattern is 
> always ListSFTP (tracking timestamp) - FetchSFTP (and delete) and PutSFTP.
> If we do a restart of NiFi and NiFi comes back, we sometimes see flowfiles 
> for FetchSFTP (not found) and PutSFTP (already present on disk) which have 
> been processed successfully and have been stored already. So in fact we see 
> flowfiles in a failure queue which have been saved to disk with PutSFTP, which 
> should never happen.  The files are always small (a few MBs) and the network 
> connectivity is insanely fast. The cluster shutdown is always before the 
> grace period runs out. The attached screenshot shows an example where the 
> FetchSFTP and the PutSFTP failure queue has files. Especially for the 
> FetchSFTP this shouldn't be possible and if I do a restart with the command 
> below, I would expect that within the grace period the processor has been 
> stopped and it can't be processed twice. 
> {code:java}
> /opt/nifi/bin/nifi.sh restart
> {code}
> At the moment we have no clue where the issue comes from and why it happens, 
> so I can't provide an exact scenario to reproduce it. I only know that it 
> sometimes happens after a restart of our 8-node cluster. 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (NIFI-9825) Execution "Primary node" with incoming connections limitation bug

2022-03-23 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-9825:
--

 Summary: Execution "Primary node" with incoming connections 
limitation bug
 Key: NIFI-9825
 URL: https://issues.apache.org/jira/browse/NIFI-9825
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core UI
Affects Versions: 1.15.3
Reporter: Josef Zahner
 Attachments: Screenshot 2022-03-23 at 13.53.52.png

At some point in the past, a check was added so that if a processor can have 
incoming connections, NiFi prevents the user from setting execution to 
"Primary node" only. In theory this is fine, but the problem is that the 
"ExecuteSQL" processor can run with or without incoming connections.

I'm using the processor on a cluster without incoming connections, but I never 
want to execute the same query on all cluster nodes, and it's inconvenient to 
put a "GenerateFlowFile" processor with "Primary only" execution mode in front 
of the "ExecuteSQL". At the moment I can't set the "ExecuteSQL" to "Primary 
node" only without a *connected* incoming connection, as NiFi generates the 
error message "{_}'Execution Node' is invalid because Processors with incoming 
connections cannot be scheduled for Primary Node only{_}". Please check my 
screenshot.

NiFi should check not for the possibility of incoming connections on the 
processor, but for actually connected connections. Thanks

!Screenshot 2022-03-23 at 13.53.52.png!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-22 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17510264#comment-17510264
 ] 

Josef Zahner commented on NIFI-9820:


Ah right, you are referring to a case where the NiFi version is lower than 
1.14.0.

Anyway, it's always a tradeoff. In our case we had a massive memory issue 
because the default was so high, and it would be too high with the new value 
(default = number of CPUs) as well. In my point of view it would be better to 
have worse performance by default than a crashing system like in our case. If 
the performance is worse, you notice that the processor performs badly and the 
queues fill up, so you understand relatively quickly where the issue could be 
(somewhere within the PutKudu processor) and it's an easy fix. With a high 
default, on the other hand, you get a memory warning and you don't see where 
it comes from, so without deep analysis it wouldn't be possible to find the 
culprit.

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have a 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509972#comment-17509972
 ] 

Josef Zahner commented on NIFI-9820:


Everything is better than the value today :).

Correct, an upgrade to a newer NiFi should not change the property or have an 
impact on existing users. However, this shouldn't happen anyway, right? As 
soon as the processor is on the canvas, I would expect the flow.xml.gz file to 
store the value in any case, so an existing processor shouldn't be impacted if 
NiFi changes the default.

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Assignee: David Handermann
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509910#comment-17509910
 ] 

Josef Zahner commented on NIFI-9820:


Btw. we haven't tested what happens if we use the default value (128) in our 
case; the problem is that we would have to test the behaviour in our 
production system.

Wouldn't it be great to have a safe value that prevents high memory usage by 
default? Other defaults are likewise very small, like the number of tasks of a 
processor (defaults to 1). That's why I suggested a very small number in my 
initial post.

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Assignee: David Handermann
>Priority: Minor
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Comment Edited] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509905#comment-17509905
 ] 

Josef Zahner edited comment on NIFI-9820 at 3/21/22, 2:05 PM:
--

It's really a big advantage that we can change the value with a property since 
1.14.0. However, I could imagine that the internal default value of the Kudu 
client library doesn't anticipate multiple clients (i.e. processors in NiFi) 
on one node, so using the number of CPUs makes sense there. In NiFi, however, 
it's a different case: it's very likely that you have more than one client.

The question is why we should use a "dynamically" calculated value when every 
other property (e.g. FlowFiles per Batch) is just a fixed value anyway. The 
user has to test and find a good value for their setup in any case.


was (Author: jzahner):
It's really a big advantage that we can change the value with a property since 
1.14.0, however I could imagine that the internal default value from the Kudu 
Client library doesn't expect to have multiple clients (aka processors in nifi) 
on one node. So it makes sense to use the number of CPUs there. In NiFi however 
it's a different case, it's very likely that you don't have just one client.

The question is, why should we use a "dynamic" calculated value as any other 
property (eg. FlowFiles per Batch) is as well just on a fixed value. The use 
has to test/find anyway a good value in his setup.

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Assignee: David Handermann
>Priority: Minor
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509905#comment-17509905
 ] 

Josef Zahner commented on NIFI-9820:


It's really a big advantage that we can change the value with a property since 
1.14.0, however I could imagine that the internal default value from the Kudu 
Client library doesn't expect to have multiple clients (aka processors in nifi) 
on one node. So it makes sense to use the number of CPUs there. In NiFi however 
it's a different case, it's very likely that you don't have just one client.

The question is, why should we use a "dynamic" calculated value as any other 
property (eg. FlowFiles per Batch) is as well just on a fixed value. The use 
has to test/find anyway a good value in his setup.

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Assignee: David Handermann
>Priority: Minor
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-9820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-9820:
---
Priority: Minor  (was: Major)

> Change PutKudu Property "Kudu Client Worker Count" Default Value
> 
>
> Key: NIFI-9820
> URL: https://issues.apache.org/jira/browse/NIFI-9820
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Core Framework
>Affects Versions: 1.15.3
>Reporter: Josef Zahner
>Priority: Minor
>
> The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
> value. Please don't use the current "number of CPUs multiplied by 2" 
> behaviour as it leads to a massive amount of workers in our case with 
> physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
> have about 30 PutKudu processors configured -> a lot of worker threads per 
> default just for kudu.
> We have changed the number of worker threads in our case to the number of 
> concurrent tasks. I don't know, maybe it would be great to set it a bit 
> higher than that, but to be honest, I don't exactly understand the impact. It 
> looks still fast with the current config.
> *To sum it up, please set a low default value (eg. 4 or 8) for the property 
> "Kudu Client Worker Count" and not a pseudo dynamic one for the PutKudu 
> processor.*
> Btw. are there any suggestions how big the number should be?



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (NIFI-9820) Change PutKudu Property "Kudu Client Worker Count" Default Value

2022-03-21 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-9820:
--

 Summary: Change PutKudu Property "Kudu Client Worker Count" 
Default Value
 Key: NIFI-9820
 URL: https://issues.apache.org/jira/browse/NIFI-9820
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Core Framework
Affects Versions: 1.15.3
Reporter: Josef Zahner


The PutKudu processor property "Kudu Client Worker Count" has a suboptimal 
default value. Please don't use the current "number of CPUs multiplied by 2" 
behaviour, as it leads to a massive number of workers in our case with 
physical servers. We have an 8-node cluster where each server has 64 CPUs. We 
have about 30 PutKudu processors configured -> a lot of worker threads by 
default just for Kudu.

We have changed the number of worker threads in our case to the number of 
concurrent tasks. I don't know, maybe it would be great to set it a bit higher 
than that, but to be honest, I don't exactly understand the impact. It still 
looks fast with the current config.

*To sum it up, please set a low default value (e.g. 4 or 8) for the property 
"Kudu Client Worker Count" and not a pseudo-dynamic one for the PutKudu 
processor.*

Btw. are there any suggestions for how big the number should be?
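To put a rough number on the default described above — a back-of-the-envelope 
sketch using only the figures from this report (64 CPUs per node, about 30 
PutKudu processors):

```python
# Worker threads implied by the "number of CPUs multiplied by 2" default,
# using the cluster figures from this report.
cpus_per_node = 64                        # each server has 64 CPUs
putkudu_processors = 30                   # about 30 PutKudu processors configured
workers_per_client = cpus_per_node * 2    # current default per Kudu client

total_workers = workers_per_client * putkudu_processors
print(total_workers)  # 3840 Kudu worker threads per node with the current default
```

With a fixed default of 8, the same node would sit at 240 threads instead.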



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2022-03-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509834#comment-17509834
 ] 

Josef Zahner commented on NIFI-8435:


Just for reference: we have upgraded from NiFi 1.13.2 to NiFi 1.15.3 and set 
the property "Kudu Client Worker Count" to the number of "Concurrent Tasks" 
accordingly. So far, no leaks or high-memory situations anymore. However, it's 
not ideal to have the property "Kudu Client Worker Count" as high as the 
number of CPUs, as we have blade servers with 64 CPUs and about 30 PutKudu 
processors in place -> that leads to a lot of Kudu client workers.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Fix For: 1.14.0
>
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can no longer free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-5149) GetSplunk Processor should have input flow, which is currently missing

2021-12-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-5149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17462584#comment-17462584
 ] 

Josef Zahner commented on NIFI-5149:


Just for other people: instead of creating/using a dedicated QuerySplunk 
processor, we are now using Splunk's HTTP REST API directly with a NiFi 
InvokeHTTP processor. It works well enough for our case, with both synchronous 
and asynchronous queries.
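As a sketch of what that call looks like for the synchronous case — the 
endpoint and parameters (`/services/search/jobs` with `exec_mode=oneshot`) are 
from Splunk's search REST API; the host, port, and query below are 
placeholders:

```python
from urllib.parse import urlencode

def build_oneshot_search(base_url: str, query: str) -> tuple[str, str]:
    """Build the URL and form body for a synchronous Splunk search.

    With exec_mode=oneshot, /services/search/jobs runs the search and
    returns the results in the response instead of a job ID to poll.
    """
    url = f"{base_url}/services/search/jobs"
    body = urlencode({
        "search": f"search {query}",  # Splunk requires the leading 'search' keyword
        "exec_mode": "oneshot",       # synchronous: no separate job polling
        "output_mode": "json",
    })
    return url, body

url, body = build_oneshot_search("https://splunk.example.com:8089", "index=main error")
# POST `body` to `url` with basic auth, e.g. from an InvokeHTTP processor.
```

For the asynchronous variant, the same endpoint without `exec_mode=oneshot` 
returns a search job ID that is then polled for results.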

 

> GetSplunk Processor should have input flow, which is currently missing
> --
>
> Key: NIFI-5149
> URL: https://issues.apache.org/jira/browse/NIFI-5149
> Project: Apache NiFi
>  Issue Type: New Feature
>  Components: Core Framework
>Affects Versions: 1.6.0
>Reporter: Brajendra
>Priority: Critical
>  Labels: getSplunk
>
> Hi Team,
> We have found that there is only the 'GetSplunk' processor to connect to and 
> query Splunk in Apache NiFi.
> However, this processor does not take any input.
>  
>  Is there another Apache NiFi processor which can take parameters as input 
> (details of Splunk indexes, query, instance, etc.) from another processor?
> If not, please suggest when such a processor can be expected in an upcoming 
> release.
>  
> Environment: NiFi 1.5.0 and 1.6.0
>  Brajendra Mishra



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-05-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348169#comment-17348169
 ] 

Josef Zahner commented on NIFI-8435:


All right, we'll try to test the 1.14.0 snapshot build, but it will take a few 
days as my colleague is on vacation. I'll post the result when we have finished 
the test.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Fix For: 1.14.0
>
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can no longer free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-05-19 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348073#comment-17348073
 ] 

Josef Zahner commented on NIFI-8435:


Thanks [~pvillard], did you do any tests to verify your change? How likely is 
it that your change fixes this issue?

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Fix For: 1.14.0
>
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can 
> no longer free it. We allow Java to use 31 GB of memory, and as you can 
> see, with NiFi 1.11.4 it is used as it should be, with GC reclaiming it. 
> With NiFi 1.13.2, however, our actual load fills up the memory relatively 
> fast. Manual GC via the VisualVM tool didn't help to free up memory at all.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char entries that aren't cleaned up, in 
> fact 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the bytes array is nearly 
> empty, around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have 
> only production data on the system. But don't hesitate to ask questions 
> about the heap dump if you need more information.
> I haven't made any screenshot of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-29 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17335553#comment-17335553
 ] 

Josef Zahner commented on NIFI-8435:


I don't understand the details, but wouldn't it be a quick fix to force PutKudu 
to use the Netty version from NiFi v1.11.4 in the main pom? Or does the new 
Netty version provide features that are required?

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can 
> no longer free it. We allow Java to use 31 GB of memory, and as you can 
> see, with NiFi 1.11.4 it is used as it should be, with GC reclaiming it. 
> With NiFi 1.13.2, however, our actual load fills up the memory relatively 
> fast. Manual GC via the VisualVM tool didn't help to free up memory at all.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char entries that aren't cleaned up, in 
> fact 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the bytes array is nearly 
> empty, around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have 
> only production data on the system. But don't hesitate to ask questions 
> about the heap dump if you need more information.
> I haven't made any screenshot of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-8483) Restart NiFi - duplicate SFTP flowfiles

2021-04-27 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-8483:
--

 Summary: Restart NiFi - duplicate SFTP flowfiles
 Key: NIFI-8483
 URL: https://issues.apache.org/jira/browse/NIFI-8483
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.13.2
 Environment: Java 1.8.0_282, CentOS 7, 8-Node Cluster
Reporter: Josef Zahner
 Attachments: SFTP_failure.png

Since the upgrade from NiFi 1.11.4 to 1.13.2 we have faced an issue with the 
FetchSFTP & PutSFTP processors. We have an 8-node NiFi cluster. The pattern is 
always ListSFTP (tracking timestamps) - FetchSFTP (with delete) - PutSFTP.

If we restart NiFi and it comes back, we sometimes see flowfiles failing in 
FetchSFTP (not found) and PutSFTP (already present on disk) that had already 
been processed successfully and stored. So in fact we see flowfiles in a 
failure queue that PutSFTP has already saved to disk, which should never 
happen. The files are always small (a few MBs) and the network connectivity is 
extremely fast. The cluster shutdown always completes before the grace period 
runs out. The attached screenshot shows an example where the FetchSFTP and 
PutSFTP failure queues contain files. Especially for FetchSFTP this shouldn't 
be possible, and if I restart with the command below, I would expect the 
processors to be stopped within the grace period so that nothing can be 
processed twice. 
{code:java}
/opt/nifi/bin/nifi.sh restart
{code}
At the moment we have no clue where the issue comes from and why it happens, so 
I can't provide an exact scenario to reproduce it. I only know that it 
sometimes happens after a restart of our 8-node cluster. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-27 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner closed NIFI-8423.
--

Found a custom processor which sets the Java default timezone due to bad 
programming. Removing the line in question solved the issue.

Code:
{code:java}
TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
{code}

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282.
> On our 8-node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), 
> we have the issue that the UI displays the correct timezone (CEST, so UTC 
> +2h), but the time itself is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is the same everywhere (it doesn't matter 
> whether NiFi runs as a single node or a cluster). My tests below were all 
> done at around 15:xx:xx local time (CEST).
> As you can see below, the timezone seems to be correct, but the time 
> itself within NiFi is 2h behind (so in fact UTC) compared to Windows. In 
> earlier NiFi/Java versions it was enough to restart the cluster multiple 
> times, but on the newest versions this doesn't help anymore. Most of the 
> time it shows CEST with the wrong time, or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances and the 2-node clusters are always fine. The 
> issue exists only on our 8-node cluster. 
>  NiFi single-node screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. Also not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt suggested below has been verified: all servers (single nodes as 
> well as clusters) report the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains: where does the UI time on a NiFi cluster come 
> from, and what could cause it to be wrong? Sometimes I get UTC, sometimes 
> I get CEST but the time is UTC instead of CEST anyway... I really need to 
> have the correct time in the UI, as I don't know what the impact could be 
> on our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17327105#comment-17327105
 ] 

Josef Zahner commented on NIFI-8423:


All our data sources have timestamps which we have to modify (e.g. local time 
to UTC). So I'll try to isolate the custom / scripted processors. This will 
take a while as we have more than just a few of them. I'll give feedback later.

Yes I can confirm that all servers use the same OS and the same JDK - we use 
ansible to deploy everything from the OS as well as NiFi.
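For what it's worth, converting source timestamps from local time to UTC does 
not require touching the JVM default timezone at all; a small `java.time` 
sketch with explicit zones (the sample timestamp and zone are made up):

```java
import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class LocalToUtc {
    // Interprets a wall-clock timestamp in the given source zone and
    // returns the same instant expressed in UTC, without any global state.
    static OffsetDateTime toUtc(LocalDateTime local, ZoneId sourceZone) {
        return local.atZone(sourceZone)
                .toOffsetDateTime()
                .withOffsetSameInstant(ZoneOffset.UTC);
    }

    public static void main(String[] args) {
        LocalDateTime local = LocalDateTime.of(2021, 4, 21, 15, 0);
        // 15:00 CEST (UTC+2) in April maps to 13:00 UTC.
        System.out.println(toUtc(local, ZoneId.of("Europe/Zurich")));
    }
}
```

A processor written this way never needs `TimeZone.setDefault`, so it cannot 
skew the rest of the JVM.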

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282.
> On our 8-node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), 
> we have the issue that the UI displays the correct timezone (CEST, so UTC 
> +2h), but the time itself is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is the same everywhere (it doesn't matter 
> whether NiFi runs as a single node or a cluster). My tests below were all 
> done at around 15:xx:xx local time (CEST).
> As you can see below, the timezone seems to be correct, but the time 
> itself within NiFi is 2h behind (so in fact UTC) compared to Windows. In 
> earlier NiFi/Java versions it was enough to restart the cluster multiple 
> times, but on the newest versions this doesn't help anymore. Most of the 
> time it shows CEST with the wrong time, or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances and the 2-node clusters are always fine. The 
> issue exists only on our 8-node cluster. 
>  NiFi single-node screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. Also not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt suggested below has been verified: all servers (single nodes as 
> well as clusters) report the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains: where does the UI time on a NiFi cluster come 
> from, and what could cause it to be wrong? Sometimes I get UTC, sometimes 
> I get CEST but the time is UTC instead of CEST anyway... I really need to 
> have the correct time in the UI, as I don't know what the impact could be 
> on our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478
 ] 

Josef Zahner edited comment on NIFI-8423 at 4/21/21, 12:34 PM:
---

Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed 
the wrong time) I've just restarted p-li-nifi-05. Then every single node showed 
the same timestamp. Very unpredictable.

Today I've restarted all NiFi nodes (not the OS, just the application) multiple 
times and the result is still shocking, even though I've configured the 
timezone manually in bootstrap.conf. What I get in the UI on top right:
 * UTC keyword, but time is in CEST in the UI
 * CEST keyword, but time is in UTC in the UI
 * CEST keyword and time is really CEST

Additionally, the log messages from the nodes, directly on the PGs/Processors, 
can show a different time than the UI on the top right. 

What I found is that if I'm stopping all my ListSFTP (and any other 
processor/input which could load/generate flowfiles) and the cluster doesn't 
use any thread, most of the time the cluster UI on the top right is showing the 
correct timezone and time. So if there is no load on the cluster, it's very 
likely that the UI time & timezone are correct. If everything is up and running 
it's nearly impossible after a restart of all NiFi at the same time, to get a 
correct timezone.

To sum up, there is clearly a huge bug behind this behavior, and in our 
case it seems to be load dependent. I have screenshots for all the cases above. 
And it's not possible in my eyes that just one node or the OS is causing the 
issue as it looks random to me where the issue is.

Question, I'm restarting always everything at the same time. What is best 
practices to restart a cluster? Node by node or with a big bang?

EDIT: we are using multiple custom processors. Could one of them lead to this 
bug?


was (Author: jzahner):
Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed 
the wrong time) I've just restarted p-li-nifi-05. Then every single node showed 
the same timestamp. Very unpredictable.

Today I've restarted all NiFi nodes (not the OS, just the application) multiple 
times and the result is still shocking, even though I've configured the 
timezone manually in bootstrap.conf. What I get in the UI on top right:
 * UTC keyword, but time is in CEST in the UI
 * CEST keyword, but time is in UTC in the UI
 * CEST keyword and time is really CEST

Additionally, the log messages from the nodes, directly on the PGs/Processors, 
can show a different time than the UI on the top right. 

What I found is that if I'm stopping all my ListSFTP (and any other 
processor/input which could load/generate flowfiles) and the cluster doesn't 
use any thread, most of the time the cluster UI on the top right is showing the 
correct timezone and time. So if there is no load on the cluster, it's very 
likely that the UI time & timezone are correct. If everything is up and running 
it's nearly impossible after a restart of all NiFi at the same time, to get a 
correct timezone.

To sum up, there is clearly a huge bug behind this behavior, and in our 
case it seems to be load dependent. I have screenshots for all the cases above. 
And it's not possible in my eyes that just one node or the OS is causing the 
issue as it looks random to me where the issue is.

Question, I'm restarting always everything at the same time. What is best 
practices to restart a cluster? Node by node or with a big bang?

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282.
> On our 8-node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), 
> we have the issue that the UI displays the correct timezone (CEST, so UTC 
> +2h), but the time itself is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is the same everywhere (it doesn't matter 
> whether NiFi runs as a single node or a cluster). My tests below were all 
> done at around 15:xx:xx local time (CEST).
> As you can see below, the 

[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478
 ] 

Josef Zahner commented on NIFI-8423:


Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed 
the wrong time) I've just restarted p-li-nifi-05. Then every single node showed 
the same timestamp. Very unpredictable.

Today I've restarted all NiFi nodes (not the OS, just the application) multiple 
times and the result is still shocking, even though I've configured the 
timezone manually in bootstrap.conf. What I get in the UI on top right:
 * UTC keyword, but time is in CEST in the UI
 * CEST keyword, but time is in UTC in the UI
 * CEST keyword and time is really CEST

Additionally, the log messages from the nodes, directly on the PGs/Processors, 
can show a different time than the UI on the top right. 

What I found is that if I'm stopping all my ListSFTP (and any other 
processor/input which could load/generate flowfiles) and the cluster doesn't 
use any thread, most of the time the cluster UI on the top right is showing the 
correct timezone and time. So if there is no load on the cluster, it's very 
likely that the UI time & timezone are correct. If everything is up and running 
it's nearly impossible after a restart of all NiFi at the same time, to get a 
correct timezone.

To sum up, there is clearly a huge bug behind this behavior, and in our 
case it seems to be load dependent. I have screenshots for all the cases above. 
And it's not possible in my eyes that just one node or the OS is causing the 
issue as it looks random to me where the issue is.

Question, I'm restarting always everything at the same time. What is best 
practices to restart a cluster? Node by node or with a big bang?

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282.
> On our 8-node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), 
> we have the issue that the UI displays the correct timezone (CEST, so UTC 
> +2h), but the time itself is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is the same everywhere (it doesn't matter 
> whether NiFi runs as a single node or a cluster). My tests below were all 
> done at around 15:xx:xx local time (CEST).
> As you can see below, the timezone seems to be correct, but the time 
> itself within NiFi is 2h behind (so in fact UTC) compared to Windows. In 
> earlier NiFi/Java versions it was enough to restart the cluster multiple 
> times, but on the newest versions this doesn't help anymore. Most of the 
> time it shows CEST with the wrong time, or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances and the 2-node clusters are always fine. The 
> issue exists only on our 8-node cluster. 
>  NiFi single-node screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. Also not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt suggested below has been verified: all servers (single nodes as 
> well as clusters) report the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains: where does the UI time on a NiFi cluster come 
> from, and what could cause it to be wrong? Sometimes I get UTC, sometimes 
> I get CEST but the time is UTC instead of CEST anyway... I really need to 
> have the correct time in the UI, as I don't know what the impact could be 
> on our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326478#comment-17326478
 ] 

Josef Zahner edited comment on NIFI-8423 at 4/21/21, 12:19 PM:
---

Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed 
the wrong time) I've just restarted p-li-nifi-05. Then every single node showed 
the same timestamp. Very unpredictable.

Today I've restarted all NiFi nodes (not the OS, just the application) multiple 
times and the result is still shocking, even though I've configured the 
timezone manually in bootstrap.conf. What I get in the UI on top right:
 * UTC keyword, but time is in CEST in the UI
 * CEST keyword, but time is in UTC in the UI
 * CEST keyword and time is really CEST

Additionally, the log messages from the nodes, directly on the PGs/Processors, 
can show a different time than the UI on the top right. 

What I found is that if I'm stopping all my ListSFTP (and any other 
processor/input which could load/generate flowfiles) and the cluster doesn't 
use any thread, most of the time the cluster UI on the top right is showing the 
correct timezone and time. So if there is no load on the cluster, it's very 
likely that the UI time & timezone are correct. If everything is up and running 
it's nearly impossible after a restart of all NiFi at the same time, to get a 
correct timezone.

To sum up, there is clearly a huge bug behind this behavior, and in our 
case it seems to be load dependent. I have screenshots for all the cases above. 
And it's not possible in my eyes that just one node or the OS is causing the 
issue as it looks random to me where the issue is.

Question, I'm restarting always everything at the same time. What is best 
practices to restart a cluster? Node by node or with a big bang?


was (Author: jzahner):
Hi Joe, after the screenshot above (where p-li-nifi-05 & p-li-nifi-10 showed 
the wrong time I've just restarted p-li-nifi-05. Then every single node showed 
the same timestamp. Very unpredictable.

Today I've restarted all NiFi nodes (not the OS, just the application) multiple 
times and the result is still shocking, even though I've configured the 
timezone manually in bootstrap.conf. What I get in the UI on top right:
 * UTC keyword, but time is in CEST in the UI
 * CEST keyword, but time is in UTC in the UI
 * CEST keyword and time is really CEST

Additionally, the log messages from the nodes, directly on the PGs/Processors, 
can show a different time than the UI on the top right. 

What I found is that if I'm stopping all my ListSFTP (and any other 
processor/input which could load/generate flowfiles) and the cluster doesn't 
use any thread, most of the time the cluster UI on the top right is showing the 
correct timezone and time. So if there is no load on the cluster, it's very 
likely that the UI time & timezone are correct. If everything is up and running 
it's nearly impossible after a restart of all NiFi at the same time, to get a 
correct timezone.

To sum up, there is clearly a huge bug behind this behavior, and in our 
case it seems to be load dependent. I have screenshots for all the cases above. 
And it's not possible in my eyes that just one node or the OS is causing the 
issue as it looks random to me where the issue is.

Question, I'm restarting always everything at the same time. What is best 
practices to restart a cluster? Node by node or with a big bang?

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282.
> On our 8-node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), 
> we have the issue that the UI displays the correct timezone (CEST, so UTC 
> +2h), but the time itself is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is the same everywhere (it doesn't matter 
> whether NiFi runs as a single node or a cluster). My tests below were all 
> done at around 15:xx:xx local time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to 

[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-21 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326351#comment-17326351
 ] 

Josef Zahner commented on NIFI-8435:


[~exceptionfactory] yes I have a list. The heap dump I'm referring to has 
about *1000* "kudu-nio-xxx" threads running. The whole heap dump has a size of 
about 15 GB.

I also have a heap dump which I created shortly after a fresh start of NiFi; 
it shows only about 120 "kudu-nio-xxx" threads, and its size is only 2.5 GB. 
So you can see that the number of threads grows over time.
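Thread growth like this can also be watched on a live JVM with just the 
standard library, without taking a full heap dump; a small sketch (the 
"kudu-nio" prefix mirrors the thread names seen in the dump, and the demo 
thread below only stands in for a real Kudu I/O thread):

```java
public class ThreadCounter {
    // Counts live threads whose names start with the given prefix,
    // e.g. "kudu-nio" to watch for the growth described above.
    static long countByPrefix(String prefix) {
        return Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith(prefix))
                .count();
    }

    public static void main(String[] args) throws InterruptedException {
        // Spawn a stand-in thread with a matching name, then count it.
        Thread demo = new Thread(() -> {
            try { Thread.sleep(2000); } catch (InterruptedException ignored) {}
        }, "kudu-nio-demo-1");
        demo.start();
        System.out.println(countByPrefix("kudu-nio"));
        demo.join();
    }
}
```

Sampling this count periodically would show the climb from ~120 threads after 
a fresh start toward the ~1000 seen under sustained load.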

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on 1.13.2 eats up all the heap memory, and garbage collection can 
> no longer free it. We allow Java to use 31 GB of memory, and as you can 
> see, with NiFi 1.11.4 it is used as it should be, with GC reclaiming it. 
> With NiFi 1.13.2, however, our actual load fills up the memory relatively 
> fast. Manual GC via the VisualVM tool didn't help to free up memory at all.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char entries that aren't cleaned up, in 
> fact 14.9 GB of memory here (the heap dump was taken after a while at full 
> load). If we check the same on NiFi 1.11.4, the bytes array is nearly 
> empty, around a few hundred MBs.
> As you can imagine, we can't upload the heap dump, as we currently have 
> only production data on the system. But don't hesitate to ask questions 
> about the heap dump if you need more information.
> I haven't made any screenshot of the processor config, but I can do that 
> if you wish (we are back on NiFi 1.11.4 at the moment).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325779#comment-17325779
 ] 

Josef Zahner edited comment on NIFI-8423 at 4/20/21, 1:04 PM:
--

Oh man, I overlooked that in my screenshot above 2 nodes had the wrong 
timestamp, but I restarted only p-li-nifi-05 and NOT p-li-nifi-10. But after 
a restart of p-li-nifi-05 the problem was solved. Makes absolutely no sense 
to me.

!Screenshot 2021-04-20 at 15.00.22.png|width=694,height=252!


was (Author: jzahner):
Oh man, I've seen that in my screenshot above 2 nodes had the wrong timestamp, 
but I've started only p-li-nifi-05 and NOT p-li-nifi-10. But after a restart of 
p-li-nifi-05 the problem was solved. Makes absolutely no sense for me.

!Screenshot 2021-04-20 at 15.00.22.png|width=694,height=252!

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325779#comment-17325779
 ] 

Josef Zahner commented on NIFI-8423:


Oh man, I've seen that in my screenshot above 2 nodes had the wrong timestamp, 
but I've started only p-li-nifi-05 and NOT p-li-nifi-10. But after a restart of 
p-li-nifi-05 the problem was solved. Makes absolutely no sense for me.

!Screenshot 2021-04-20 at 15.00.22.png|width=694,height=252!

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Attachment: Screenshot 2021-04-20 at 15.00.22.png

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, Screenshot 2021-04-20 at 15.00.22.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325761#comment-17325761
 ] 

Josef Zahner commented on NIFI-8435:


Voilà, you got it ;) That's exactly what we see. You can press "GC" in 
VisualVM to force a garbage collection; if heap usage doesn't go back to the 
lowest value on the left-hand side, you have a leak for sure.

If you create a heap dump you will probably see that PutKudu is leaking.
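The same check can be sketched outside VisualVM as well. The following is a
minimal, standalone illustration (the class name `GcCheck` and the 64 MB
allocation are made up for this example, it is not NiFi code): after dropping
all references and requesting a GC, used heap should fall back; if it stays
high, as with PutKudu here, the objects are still strongly referenced.

```java
// Minimal leak check: allocate, drop references, request a GC, compare used heap.
public class GcCheck {
    // Currently used heap in megabytes, as seen by this JVM.
    static long usedMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        byte[][] blocks = new byte[64][];
        for (int i = 0; i < 64; i++) {
            blocks[i] = new byte[1024 * 1024]; // ~64 MB of reachable data
        }
        long before = usedMb();
        blocks = null;   // drop the only reference; the arrays become collectable
        System.gc();     // only a request, but HotSpot normally honors it
        long after = usedMb();
        System.out.println("used before GC: " + before + " MB, after: " + after + " MB");
    }
}
```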

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: Screenshot 2021-04-20 at 14.27.11.png, 
> grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 





[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325757#comment-17325757
 ] 

Josef Zahner commented on NIFI-8423:


OK, I found a way to generate a log message. It looks like only p-li-nifi-05 
shows the time issue (it is the only one at the top):

!Screenshot 2021-04-20 at 14.24.17.png|width=471,height=458!

 

I've restarted p-li-nifi-05 and voilà, it works as it should. 

!Screenshot 2021-04-20 at 14.34.36.png|width=574,height=118!

 

I changed nothing on NiFi or on the system, but now it's fine.

So the question remains, where does this strange behavior come from...
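To narrow down whether the wrong time comes from the JVM or from the OS, a
small check like the following could be run on each node (a sketch; the class
name `TzCheck` is made up for illustration). If its output differs between
cluster nodes, or from `date` on the same host, the offset is introduced
inside the JVM rather than by NTP or the OS.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.TimeZone;

// Prints what this JVM believes the default time zone and current local time are.
public class TzCheck {
    public static void main(String[] args) {
        System.out.println("JVM default zone: " + TimeZone.getDefault().getID());
        System.out.println("user.timezone property: " + System.getProperty("user.timezone"));
        System.out.println("local time now: " + ZonedDateTime.now(ZoneId.systemDefault()));
    }
}
```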

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png, nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Attachment: Screenshot 2021-04-20 at 14.34.36.png

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, Screenshot 
> 2021-04-20 at 14.34.36.png, image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png, nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Attachment: Screenshot 2021-04-20 at 14.24.17.png

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: Screenshot 2021-04-20 at 14.24.17.png, 
> image-2021-04-13-15-14-02-162.png, image-2021-04-13-15-14-06-930.png, 
> image-2021-04-13-15-14-56-690.png, manual_configured_timezone_gui_output.png, 
> nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325735#comment-17325735
 ] 

Josef Zahner edited comment on NIFI-8423 at 4/20/21, 12:06 PM:
---

I don't know whether p-li-nifi-05 is the only node which shows the discrepancy; 
under normal circumstances no ERRORs are generated, so I don't know when and 
if it occurs on other nodes. I've already restarted NiFi on all nodes multiple 
times. It's really difficult to tell whether it's always the same node (or 
multiple nodes).

What I can tell is that NiFi runs only once:

 
{code:java}
[user@p-li-nifi-05 ~]$ ps aux | grep nifi 
user  19597  0.0  0.0 112808   968 pts/0S+   13:13   0:00 grep 
--color=auto nifi
nifi 51798  0.0  0.0 113412   760 ?S10:46   0:00 /bin/sh 
/opt/nifi/bin/nifi.sh start
nifi 51800  0.0  0.0 7093000 197364 ?  Sl   10:46   0:06 
/usr/lib/jvm/java/bin/java -cp 
/opt/nifi-1.13.2/conf:/opt/nifi-1.13.2/lib/bootstrap/* -Xms12m -Xmx24m 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi 
-Dorg.apache.nifi.bootstrap.config.pid.dir=/var/run/nifi 
-Dorg.apache.nifi.bootstrap.config.file=/opt/nifi-1.13.2/conf/bootstrap.conf 
org.apache.nifi.bootstrap.RunNiFi start
nifi 51872  332 14.0 74496404 37058284 ?   Sl   10:47 487:17 
/usr/lib/jvm/java/bin/java -classpath 
/opt/nifi-1.13.2/./conf:/opt/nifi-1.13.2/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi-1.13.2/./lib/jetty-schemas-3.1.jar:/opt/nifi-1.13.2/./lib/logback-classic-1.2.3.jar:/opt/nifi-1.13.2/./lib/logback-core-1.2.3.jar:/opt/nifi-1.13.2/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/slf4j-api-1.7.30.jar:/opt/nifi-1.13.2/./lib/nifi-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-framework-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-server-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-runtime-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-nar-utils-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-properties-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-bootstrap-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-api-1.13.2.jar
 -XX:+PrintGCDetails -Dcom.sun.management.jmxremote.rmi.port=30008 
-XX:+UseGCLogFileRotation -Xloggc:/var/log/nifi/gc.log 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=30008 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.local.only=true -Duser.timezone=Europe/Zurich 
-Dorg.apache.jasper.compiler.disablejsr199=true -Xmx31g -Xms8g 
-Dcurator-log-only-first-connection-issue-as-error-level=true 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom -Dzookeeper.admin.enableServer=false 
-Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -XX:ReservedCodeCacheSize=256m -XX:+UseG1GC 
-Djava.protocol.handler.pkgs=sun.net.www.protocol -XX:NumberOfGCLogFiles=10 
-XX:+UseCodeCacheFlushing -XX:CodeCacheMinimumFreeSpace=10m 
-XX:GCLogFileSize=10m 
-Dnifi.properties.file.path=/opt/nifi-1.13.2/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=39853 -Dapp=NiFi 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi org.apache.nifi.NiFi
nifi 53618  0.3  0.0  31964 23484 ?Ssl  09:11   0:44 
/opt/pushgateway/pushgateway --web.listen-address 0.0.0.0:9091
{code}
We have a systemd service for NiFi, so in theory it should never start more 
than one instance:
{code:java}
[user@p-li-nifi-05 ~]$ cat /etc/systemd/system/nifi.service
[Unit]
Description=Apache NiFi
After=network.target

[Service]
Type=forking
User=nifi
Group=nifi
RuntimeDirectory=nifi
RuntimeDirectoryMode=0775
ExecStart=/opt/nifi/bin/nifi.sh start
ExecStop=/opt/nifi/bin/nifi.sh stop
ExecReload=/opt/nifi/bin/nifi.sh restart
LimitNOFILE=100

[Install]
WantedBy=multi-user.target
{code}
Is there an easy way to generate an ERROR on NiFi?


was (Author: jzahner):
I don't know whether p-li-nifi-05 is the only node which shows the discrepancy, 
under normal circumstances no ERRORs will be generated so I don't know when and 
if it occurs on other nodes. I've already restarted NiFi on all nodes multiple 
times. It's really difficult to tell if it's always the same node (or multiple 
nodes)

What I can tell is that NiFi runs only once:

 
{code:java}
[user@p-li-nifi-05 ~]$ ps aux | grep nifi 
user  19597  0.0  0.0 112808   968 pts/0S+   13:13   0:00 grep 
--color=auto nifi
nifi 51798  0.0  0.0 113412   760 ?S10:46   0:00 /bin/sh 
/opt/nifi/bin/nifi.sh start
nifi 51800  0.0  0.0 7093000 197364 ?  Sl   10:46   0:06 
/usr/lib/jvm/java/bin/java -cp 
/opt/nifi-1.13.2/conf:/opt/nifi-1.13.2/lib/bootstrap/* -Xms12m -Xmx24m 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi 
-Dorg.apache.nifi.bootstrap.config.pid.dir=/var/run/nifi 

[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325735#comment-17325735
 ] 

Josef Zahner commented on NIFI-8423:


I don't know whether p-li-nifi-05 is the only node which shows the discrepancy; 
under normal circumstances no ERRORs are generated, so I don't know when and 
if it occurs on other nodes. I've already restarted NiFi on all nodes multiple 
times. It's really difficult to tell whether it's always the same node (or 
multiple nodes).

What I can tell is that NiFi runs only once:

 
{code:java}
[user@p-li-nifi-05 ~]$ ps aux | grep nifi 
user  19597  0.0  0.0 112808   968 pts/0S+   13:13   0:00 grep 
--color=auto nifi
nifi 51798  0.0  0.0 113412   760 ?S10:46   0:00 /bin/sh 
/opt/nifi/bin/nifi.sh start
nifi 51800  0.0  0.0 7093000 197364 ?  Sl   10:46   0:06 
/usr/lib/jvm/java/bin/java -cp 
/opt/nifi-1.13.2/conf:/opt/nifi-1.13.2/lib/bootstrap/* -Xms12m -Xmx24m 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi 
-Dorg.apache.nifi.bootstrap.config.pid.dir=/var/run/nifi 
-Dorg.apache.nifi.bootstrap.config.file=/opt/nifi-1.13.2/conf/bootstrap.conf 
org.apache.nifi.bootstrap.RunNiFi start
nifi 51872  332 14.0 74496404 37058284 ?   Sl   10:47 487:17 
/usr/lib/jvm/java/bin/java -classpath 
/opt/nifi-1.13.2/./conf:/opt/nifi-1.13.2/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi-1.13.2/./lib/jetty-schemas-3.1.jar:/opt/nifi-1.13.2/./lib/logback-classic-1.2.3.jar:/opt/nifi-1.13.2/./lib/logback-core-1.2.3.jar:/opt/nifi-1.13.2/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/slf4j-api-1.7.30.jar:/opt/nifi-1.13.2/./lib/nifi-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-framework-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-server-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-runtime-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-nar-utils-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-properties-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-bootstrap-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-api-1.13.2.jar
 -XX:+PrintGCDetails -Dcom.sun.management.jmxremote.rmi.port=30008 
-XX:+UseGCLogFileRotation -Xloggc:/var/log/nifi/gc.log 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=30008 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.local.only=true -Duser.timezone=Europe/Zurich 
-Dorg.apache.jasper.compiler.disablejsr199=true -Xmx31g -Xms8g 
-Dcurator-log-only-first-connection-issue-as-error-level=true 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom -Dzookeeper.admin.enableServer=false 
-Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -XX:ReservedCodeCacheSize=256m -XX:+UseG1GC 
-Djava.protocol.handler.pkgs=sun.net.www.protocol -XX:NumberOfGCLogFiles=10 
-XX:+UseCodeCacheFlushing -XX:CodeCacheMinimumFreeSpace=10m 
-XX:GCLogFileSize=10m 
-Dnifi.properties.file.path=/opt/nifi-1.13.2/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=39853 -Dapp=NiFi 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi org.apache.nifi.NiFi
nifi 53618  0.3  0.0  31964 23484 ?Ssl  09:11   0:44 
/opt/pushgateway/pushgateway --web.listen-address 0.0.0.0:9091
{code}
We have a systemd service for NiFi, so in theory it should never start more 
than one instance:
{code:java}
[ldr@p-li-nifi-05 ~]$ cat /etc/systemd/system/nifi.service
[Unit]
Description=Apache NiFi
After=network.target

[Service]
Type=forking
User=nifi
Group=nifi
RuntimeDirectory=nifi
RuntimeDirectoryMode=0775
ExecStart=/opt/nifi/bin/nifi.sh start
ExecStop=/opt/nifi/bin/nifi.sh stop
ExecReload=/opt/nifi/bin/nifi.sh restart
LimitNOFILE=100

[Install]
WantedBy=multi-user.target
{code}

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png, nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS 

[jira] [Comment Edited] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325735#comment-17325735
 ] 

Josef Zahner edited comment on NIFI-8423 at 4/20/21, 12:03 PM:
---

I don't know whether p-li-nifi-05 is the only node which shows the discrepancy; 
under normal circumstances no ERRORs are generated, so I don't know when and 
if it occurs on other nodes. I've already restarted NiFi on all nodes multiple 
times. It's really difficult to tell whether it's always the same node (or 
multiple nodes).

What I can tell is that NiFi runs only once:

 
{code:java}
[user@p-li-nifi-05 ~]$ ps aux | grep nifi 
user  19597  0.0  0.0 112808   968 pts/0S+   13:13   0:00 grep 
--color=auto nifi
nifi 51798  0.0  0.0 113412   760 ?S10:46   0:00 /bin/sh 
/opt/nifi/bin/nifi.sh start
nifi 51800  0.0  0.0 7093000 197364 ?  Sl   10:46   0:06 
/usr/lib/jvm/java/bin/java -cp 
/opt/nifi-1.13.2/conf:/opt/nifi-1.13.2/lib/bootstrap/* -Xms12m -Xmx24m 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi 
-Dorg.apache.nifi.bootstrap.config.pid.dir=/var/run/nifi 
-Dorg.apache.nifi.bootstrap.config.file=/opt/nifi-1.13.2/conf/bootstrap.conf 
org.apache.nifi.bootstrap.RunNiFi start
nifi 51872  332 14.0 74496404 37058284 ?   Sl   10:47 487:17 
/usr/lib/jvm/java/bin/java -classpath 
/opt/nifi-1.13.2/./conf:/opt/nifi-1.13.2/./lib/javax.servlet-api-3.1.0.jar:/opt/nifi-1.13.2/./lib/jetty-schemas-3.1.jar:/opt/nifi-1.13.2/./lib/logback-classic-1.2.3.jar:/opt/nifi-1.13.2/./lib/logback-core-1.2.3.jar:/opt/nifi-1.13.2/./lib/jcl-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/jul-to-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/log4j-over-slf4j-1.7.30.jar:/opt/nifi-1.13.2/./lib/slf4j-api-1.7.30.jar:/opt/nifi-1.13.2/./lib/nifi-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-framework-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-server-api-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-runtime-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-nar-utils-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-properties-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-bootstrap-1.13.2.jar:/opt/nifi-1.13.2/./lib/nifi-stateless-api-1.13.2.jar
 -XX:+PrintGCDetails -Dcom.sun.management.jmxremote.rmi.port=30008 
-XX:+UseGCLogFileRotation -Xloggc:/var/log/nifi/gc.log 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=30008 
-Dcom.sun.management.jmxremote.ssl=false 
-Dcom.sun.management.jmxremote.local.only=true -Duser.timezone=Europe/Zurich 
-Dorg.apache.jasper.compiler.disablejsr199=true -Xmx31g -Xms8g 
-Dcurator-log-only-first-connection-issue-as-error-level=true 
-Djavax.security.auth.useSubjectCredsOnly=true 
-Djava.security.egd=file:/dev/urandom -Dzookeeper.admin.enableServer=false 
-Dsun.net.http.allowRestrictedHeaders=true -Djava.net.preferIPv4Stack=true 
-Djava.awt.headless=true -XX:ReservedCodeCacheSize=256m -XX:+UseG1GC 
-Djava.protocol.handler.pkgs=sun.net.www.protocol -XX:NumberOfGCLogFiles=10 
-XX:+UseCodeCacheFlushing -XX:CodeCacheMinimumFreeSpace=10m 
-XX:GCLogFileSize=10m 
-Dnifi.properties.file.path=/opt/nifi-1.13.2/./conf/nifi.properties 
-Dnifi.bootstrap.listen.port=39853 -Dapp=NiFi 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi org.apache.nifi.NiFi
nifi 53618  0.3  0.0  31964 23484 ?Ssl  09:11   0:44 
/opt/pushgateway/pushgateway --web.listen-address 0.0.0.0:9091
{code}
We have a systemd services for NiFi, so in theory it should never start more 
than one NiFi:
{code:java}
[user@p-li-nifi-05 ~]$ cat /etc/systemd/system/nifi.service
[Unit]
Description=Apache NiFi
After=network.target

[Service]
Type=forking
User=nifi
Group=nifi
RuntimeDirectory=nifi
RuntimeDirectoryMode=0775
ExecStart=/opt/nifi/bin/nifi.sh start
ExecStop=/opt/nifi/bin/nifi.sh stop
ExecReload=/opt/nifi/bin/nifi.sh restart
LimitNOFILE=100

[Install]
WantedBy=multi-user.target
{code}
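With a Type=forking unit like the one above, systemd should never launch a second instance; the `ps aux | grep nifi` check can also be done from the JVM side. A minimal sketch using the standard `ProcessHandle` API (Java 9+; the class name and marker string are illustrative, not part of NiFi):
{code:java}
import java.util.stream.Stream;

public class SingleInstanceCheck {
    // Counts live processes whose command line contains the given marker,
    // e.g. "org.apache.nifi.NiFi" for the main NiFi JVM.
    public static long countMatching(String marker) {
        return ProcessHandle.allProcesses()
                .filter(ph -> ph.info().commandLine()
                        .map(cl -> cl.contains(marker))
                        .orElse(false))
                .count();
    }

    public static void main(String[] args) {
        System.out.println(countMatching("org.apache.nifi.NiFi"));
    }
}
{code}
Note that on Linux, `commandLine()` may be empty for processes owned by other users, so this check is best run as the same user that runs NiFi.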
 

 

 

 


was (Author: jzahner):
I don't know whether p-li-nifi-05 is the only node which shows the discrepancy, 
under normal circumstances no ERRORs will be generated so I don't know when and 
if it occurs on other nodes. I've already restarted NiFi on all nodes multiple 
times. It's really difficult to tell if it's always the same node (or multiple 
nodes)

What I can tell is that NiFi runs only once:

 
{code:java}
[user@p-li-nifi-05 ~]$ ps aux | grep nifi 
user  19597  0.0  0.0 112808   968 pts/0S+   13:13   0:00 grep 
--color=auto nifi
nifi 51798  0.0  0.0 113412   760 ?S10:46   0:00 /bin/sh 
/opt/nifi/bin/nifi.sh start
nifi 51800  0.0  0.0 7093000 197364 ?  Sl   10:46   0:06 
/usr/lib/jvm/java/bin/java -cp 
/opt/nifi-1.13.2/conf:/opt/nifi-1.13.2/lib/bootstrap/* -Xms12m -Xmx24m 
-Dorg.apache.nifi.bootstrap.config.log.dir=/var/log/nifi 
-Dorg.apache.nifi.bootstrap.config.pid.dir=/var/run/nifi 
-Dorg.apache.nifi.bootstrap.config.file=/opt/nifi-1.13.2/conf/bootstrap.conf 

[jira] [Comment Edited] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325717#comment-17325717
 ] 

Josef Zahner edited comment on NIFI-8435 at 4/20/21, 11:10 AM:
---

Ok perfect, I can see the sawtooth as well until the heap is exhausted (the sawtooth gets smaller the more the memory fills up). Please have a look at the heap "byte" objects, at least if you generate enough flows/records; I don't think 3 small records every 0.1 seconds are enough. In my initial grafana screenshot, where you can see the heap size, we were inserting about 600K (200k, but replicated 3 times) records per second, just so you get a rough estimate of our workload and how long it took to see the issue.

!kudu_inserts_per_sec.png!  


was (Author: jzahner):
Ok perfect, I can see the sawtooth as well until the heap is exhausted (and the 
sawtooth gets smaller as more the memory has been filled up). Please have a 
look at the heap "byte" objects, at least if you generate enough flows/records. 
I don't think that 3 small records every 0.1 seconds are enough. In my initial 
grafana screenshot, where you see the heap size, we were inserting about 600K 
records per second, just that you get a rough estimation of our workload and 
how long it took to see the issue.

!kudu_inserts_per_sec.png!  

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325717#comment-17325717
 ] 

Josef Zahner commented on NIFI-8435:


Ok perfect, I can see the sawtooth as well until the heap is exhausted (the sawtooth gets smaller the more the memory fills up). Please have a look at the heap "byte" objects, at least if you generate enough flows/records; I don't think 3 small records every 0.1 seconds are enough. In my initial grafana screenshot, where you can see the heap size, we were inserting about 600K records per second, just so you get a rough estimate of our workload and how long it took to see the issue.

!kudu_inserts_per_sec.png!  

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8435:
---
Attachment: kudu_inserts_per_sec.png

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, kudu_inserts_per_sec.png, 
> putkudu_processor_config.png, visualvm_bytes_detail_view.png, 
> visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325679#comment-17325679
 ] 

Josef Zahner commented on NIFI-8435:


Sorry, I forgot to ask: how did you check for the memory leak? Did you analyse the heap dump as I did with visualvm? Of course it only happens if you produce data/flows, and with just a little data you won't see the issue that fast without a tool to analyse the heap dump. We are inserting overall a few GB per minute into Kudu in production.
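Besides taking heap dumps for visualvm, heap growth can be sampled in-process with the standard `java.lang.management` API and logged periodically while the load runs. A minimal sketch (class and method names are illustrative, not the reporter's tooling):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class HeapSample {
    // Returns the currently used heap in MiB, suitable for periodic
    // logging while trying to reproduce a suspected leak.
    public static long usedHeapMiB() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        return heap.getUsed() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("heap used MiB: " + usedHeapMiB());
    }
}
{code}
A steadily rising floor across GC cycles in such samples corresponds to the shrinking sawtooth discussed above.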

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, putkudu_processor_config.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325677#comment-17325677
 ] 

Josef Zahner edited comment on NIFI-8435 at 4/20/21, 10:09 AM:
---

[~pvillard] I've added the processor config below. We can clearly reproduce the memory leak issue in our lab with the same config on a 4-node NiFi cluster. We can also reproduce it on a single-node NiFi with just one PutKudu.

Yes, we are using multiple concurrent tasks; in our current config we have about 34 PutKudu processors running in parallel (each with multiple threads).

!putkudu_processor_config.png|width=1042,height=729!

 

As an emergency fix we have built the 1.11.4 NAR for NiFi 1.13.2. It has been running for a few hours with no leak so far. But of course we would like to use the official PutKudu and not an old version of it.


was (Author: jzahner):
[~pvillard] I've added the processor config below. We can clearly reproduce the 
memory leak issue within our lab with the same config on a 4-node NiFi cluster.

Yes we are using multiple concurrent tasks and on our actual config we have 
about 34 PutKudu processors running in parallel (with multiple threads).

!putkudu_processor_config.png|width=1042,height=729!

 

As emergency fix we have built the NAR from 1.11.4 for NiFi 1.13.2. It runs 
since a few hours and no leak so far. But of course we would like to use the 
official PutKudu and not an old version of it.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, putkudu_processor_config.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325677#comment-17325677
 ] 

Josef Zahner commented on NIFI-8435:


[~pvillard] I've added the processor config below. We can clearly reproduce the memory leak issue in our lab with the same config on a 4-node NiFi cluster.

Yes, we are using multiple concurrent tasks; in our current config we have about 34 PutKudu processors running in parallel (each with multiple threads).

!putkudu_processor_config.png|width=1042,height=729!

 

As an emergency fix we have built the 1.11.4 NAR for NiFi 1.13.2. It has been running for a few hours with no leak so far. But of course we would like to use the official PutKudu and not an old version of it.

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, putkudu_processor_config.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325674#comment-17325674
 ] 

Josef Zahner commented on NIFI-8423:


Ok, so at least the smoke test showed that the UI now displays the correct CEST time in the top right. However, the problem persists: in log messages shown in the GUI we see two different timezones within the cluster. nifi-app.log shows the correct timezone (which clearly differs from the UI!).

Please have a look at the first timestamp, 09:03:00 from p-li-nifi-05, which is 2 hours behind the other error messages.

!manual_configured_timezone_gui_output.png|width=761,height=569!

nifi-app.log shows the same entry with the correct time - 11:03:00 instead of 09:03:00. This confirms that the system time of p-li-nifi-05 is or was correct. At the same time I checked the GUI time in the top right on p-li-nifi-05; it correctly showed 11:03:xx CEST.

!nifi-app_log.png!

Additionally, when I recorded the screenshot of the UI where the ERRORs on the PG appeared, the global messages icon on the top right didn't show any info at all!

It's really a mess, and in our case it only appears on the cluster.
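For what it's worth, the offset that the JVM's tzdata applies for Europe/Zurich on a given date can be checked independently of NiFi with the standard `java.time` API; a minimal sketch (the class name is illustrative):
{code:java}
import java.time.Instant;
import java.time.ZoneId;
import java.time.zone.ZoneRules;

public class TzCheck {
    // Returns the UTC offset (e.g. "+02:00") that the JVM's tzdata
    // applies for the given zone at the given instant.
    public static String offsetAt(String zone, String instant) {
        ZoneRules rules = ZoneId.of(zone).getRules();
        return rules.getOffset(Instant.parse(instant)).getId();
    }

    public static void main(String[] args) {
        // In April, Europe/Zurich observes CEST, i.e. +02:00.
        System.out.println(offsetAt("Europe/Zurich", "2021-04-20T10:00:00Z"));
    }
}
{code}
If this prints +02:00 on a node whose UI still renders UTC timestamps, the discrepancy is in how the UI obtains its time, not in the JVM's timezone rules.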

 

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png, nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Attachment: nifi-app_log.png

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png, nifi-app_log.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Attachment: manual_configured_timezone_gui_output.png

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png, 
> manual_configured_timezone_gui_output.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-20 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8435:
---
Attachment: putkudu_processor_config.png

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Assignee: Peter Gyori
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, putkudu_processor_config.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 
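As a side note on measuring this: the used-vs-max heap numbers tracked in the Grafana chart can also be read in-process through the standard `MemoryMXBean`. The following is only a sketch (the class name HeapProbe is illustrative, and `System.gc()` is merely a request, like the manual GC button in VisualVM), not anything NiFi itself provides:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Reads heap usage from the JVM's own management interface: after a GC
// request, 'used' approximates live data, so a leak shows up as this value
// climbing toward 'max' over time instead of dropping back.
public class HeapProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.gc(); // request (not force) a full collection first
        MemoryUsage heap = mem.getHeapMemoryUsage();
        long usedMb = heap.getUsed() / (1024 * 1024);
        long maxMb  = heap.getMax() / (1024 * 1024);
        System.out.println("heap used: " + usedMb + " MB of " + maxMb + " MB");
    }
}
```

Logged periodically, this gives a scriptable before/after comparison between 1.11.4 and 1.13.2 without attaching VisualVM.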





[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-20 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17325492#comment-17325492
 ] 

Josef Zahner commented on NIFI-8423:


You are right [~jgresock], the GMT+1 issue seems to be solved without the 
quotes: the UI now shows CEST with the correct time - at least on the single 
NiFi node (which was always fine anyway)! I don't know why I hadn't tried this.

I'll deploy it to our 8-node cluster today and see how it goes. Nothing speaks 
against setting the timezone manually.
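For reference, the unquoted form of the bootstrap.conf line (which, per the above, is the variant that resolves CEST correctly) would presumably be:

{code:java}
java.arg.20=-Duser.timezone=Europe/Zurich
{code}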

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.
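To take the UI out of the equation, the zone each node's JVM actually resolves (from -Duser.timezone or the OS default) could be checked with a small standalone program run with the same java args as bootstrap.conf. This is only a sketch; the class name TzCheck is illustrative:

```java
import java.time.ZonedDateTime;
import java.util.TimeZone;

// Prints the timezone the JVM resolved plus the current wall-clock time with
// its UTC offset, so the +01:00 vs +02:00 (CET vs CEST) question can be
// answered per node, independent of what the NiFi UI renders.
public class TzCheck {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getDefault();
        System.out.println("zone id:      " + tz.getID());
        System.out.println("observes DST: " + tz.observesDaylightTime());
        System.out.println("now:          " + ZonedDateTime.now());
    }
}
```

Run e.g. as `java -Duser.timezone=Europe/Zurich TzCheck` on every cluster node; if DST is applied correctly, the last line should currently end in +02:00.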





[jira] [Comment Edited] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-15 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17322186#comment-17322186
 ] 

Josef Zahner edited comment on NIFI-8435 at 4/15/21, 1:46 PM:
--

[~granthenke] hope you can help here.

The processor has the same config as shown in my old bug report here (of course 
the new properties are missing, but they have been set to default during 
upgrade):

https://issues.apache.org/jira/browse/NIFI-6908

 

 


was (Author: jzahner):
[~granthenke] hope you can help here.

The processor has the same config as shown in my old bug report here (of course 
the new properties are missing):

https://issues.apache.org/jira/browse/NIFI-6908

 

 

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 





[jira] [Updated] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-15 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8435:
---
Description: 
We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with PutKudu.

PutKudu on the 1.13.2 eats up all the heap memory and garbage collection can't 
anymore free up the memory. We allow Java to use 31GB memory and as you can see 
with NiFi 1.11.4 it will be used like it should with GC. However with NiFi 
1.13.2 with our actual load it fills up the memory relatively fast. Manual GC 
via visualvm tool didn't help at all to free up memory.

!grafana_heap_overview.png!

 

Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!

!visualvm_bytes_detail_view.png!

The bytes array shows millions of char data which isn't cleaned up. In fact 
here 14,9GB memory (heapdump has been taken after a while of full load). If we 
check the same on NiFi 1.11.4, the bytes array is nearly empty, around a few 
hundred MBs.

As you could imagine we can't upload the heap dump as currently we have only 
productive data on the system. But don't hesitate to ask questions about the 
heapdump if you need more information.

I haven't done any screenshot of the processor config, but I can do that if you 
wish (we are back to NiFi 1.11.4 at the moment). 

  was:
We just upgraded from NiFi 1.11.4 to 1.13.2 and face a huge issue with PutKudu.

PutKudu on the 1.13.2 eats up all the heap memory and garbage collection can't 
anymore free up the memory. We allow Java to use 31GB memory and as you can see 
with NiFi 1.11.4 it will be used like it should with GC. However with NiFi 
1.13.2 with our actual load it fills up the memory relatively fast. Manual GC 
via visualvm tool didn't help at all to free up memory.

!grafana_heap_overview.png!

 

Visual VM shows the following culprit: !visualvm_total_bytes_used.png!

!visualvm_bytes_detail_view.png!

The bytes array shows millions of char data which isn't cleaned up. In fact 
here 14,9GB memory (heapdump has been taken after a while of full load). If we 
check the same on NiFi 1.11.4, the bytes array is nearly empty, around a few 
hundred MBs.

As you could imagine we can't upload the heap dump as currently we have only 
productive data on the system. But don't hesitate to ask questions about the 
heapdump if you need more information.

I haven't done any screenshot of the processor config, but I can do that if you 
wish (we are back to NiFi 1.11.4 at the moment). 


> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and faced a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit:  !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 





[jira] [Commented] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-15 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17322186#comment-17322186
 ] 

Josef Zahner commented on NIFI-8435:


[~granthenke] hope you can help here.

The processor has the same config as shown in my old bug report here (of course 
the new properties are missing):

https://issues.apache.org/jira/browse/NIFI-6908

 

 

> PutKudu 1.13.2 Memory Leak
> --
>
> Key: NIFI-8435
> URL: https://issues.apache.org/jira/browse/NIFI-8435
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.13.2
> Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
> 1.10.0
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: kudu, nifi, oom
> Attachments: grafana_heap_overview.png, 
> visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png
>
>
> We just upgraded from NiFi 1.11.4 to 1.13.2 and face a huge issue with 
> PutKudu.
> PutKudu on the 1.13.2 eats up all the heap memory and garbage collection 
> can't anymore free up the memory. We allow Java to use 31GB memory and as you 
> can see with NiFi 1.11.4 it will be used like it should with GC. However with 
> NiFi 1.13.2 with our actual load it fills up the memory relatively fast. 
> Manual GC via visualvm tool didn't help at all to free up memory.
> !grafana_heap_overview.png!
>  
> Visual VM shows the following culprit: !visualvm_total_bytes_used.png!
> !visualvm_bytes_detail_view.png!
> The bytes array shows millions of char data which isn't cleaned up. In fact 
> here 14,9GB memory (heapdump has been taken after a while of full load). If 
> we check the same on NiFi 1.11.4, the bytes array is nearly empty, around a 
> few hundred MBs.
> As you could imagine we can't upload the heap dump as currently we have only 
> productive data on the system. But don't hesitate to ask questions about the 
> heapdump if you need more information.
> I haven't done any screenshot of the processor config, but I can do that if 
> you wish (we are back to NiFi 1.11.4 at the moment). 





[jira] [Created] (NIFI-8435) PutKudu 1.13.2 Memory Leak

2021-04-15 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-8435:
--

 Summary: PutKudu 1.13.2 Memory Leak
 Key: NIFI-8435
 URL: https://issues.apache.org/jira/browse/NIFI-8435
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.13.2
 Environment: NiFi 1.13.2, 8-Node Cluster running on CentOS 7, Kudu 
1.10.0
Reporter: Josef Zahner
 Attachments: grafana_heap_overview.png, 
visualvm_bytes_detail_view.png, visualvm_total_bytes_used.png

We just upgraded from NiFi 1.11.4 to 1.13.2 and face a huge issue with PutKudu.

PutKudu on the 1.13.2 eats up all the heap memory and garbage collection can't 
anymore free up the memory. We allow Java to use 31GB memory and as you can see 
with NiFi 1.11.4 it will be used like it should with GC. However with NiFi 
1.13.2 with our actual load it fills up the memory relatively fast. Manual GC 
via visualvm tool didn't help at all to free up memory.

!grafana_heap_overview.png!

 

Visual VM shows the following culprit: !visualvm_total_bytes_used.png!

!visualvm_bytes_detail_view.png!

The bytes array shows millions of char data which isn't cleaned up. In fact 
here 14,9GB memory (heapdump has been taken after a while of full load). If we 
check the same on NiFi 1.11.4, the bytes array is nearly empty, around a few 
hundred MBs.

As you could imagine we can't upload the heap dump as currently we have only 
productive data on the system. But don't hesitate to ask questions about the 
heapdump if you need more information.

I haven't done any screenshot of the processor config, but I can do that if you 
wish (we are back to NiFi 1.11.4 at the moment). 





[jira] [Commented] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320228#comment-17320228
 ] 

Josef Zahner commented on NIFI-8423:


It's getting even more confusing for us: the error message timestamps are mixed 
between two timezones. Assume the local time is 16:00:00 CEST. If I hover over 
the paper icon just below the hamburger menu in the top right corner, the error 
messages are shown as "16:00:00 CEST". If I hover over the process group that 
contains the error, I get "14:00:00 CEST".

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
>  NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Description: 
We just upgraded to NiFi 1.13.2 and Java 1.8.0_282

On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
have the issue that the UI does display the correct timezone (CEST, so UTC 
+2h), but in fact the time is displayed as UTC. NTP is enabled and working. The 
OS configuration/location is everywhere the same (doesn't matter if single or 
cluster NiFi). My tests below are all done at around 15:xx:xx local time (CEST).

As you can see below, the timezone seems to be correct, but the time itself 
within NiFi is 2h behind (so in fact UTC) compared to Windows. In earlier 
NiFi/java versions it was enough to multiple times restart the cluster, but on 
the newest versions this doesn't help anymore. It shows most of the time CEST 
with the wrong time or directly UTC.

!image-2021-04-13-15-14-06-930.png!

 

The single NiFi instances or the 2 node clusters are always fine. The issue 
exists only on our 8 node cluster. 
 NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):

!image-2021-04-13-15-14-02-162.png!

 

If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
shows no summer time, so only GMT+1 instead of GMT+2. As well not what we want.
{code:java}
java.arg.20=-Duser.timezone="Europe/Zurich"{code}
!image-2021-04-13-15-14-56-690.png!

 

What Matt below suggested has been verified, all servers (single nodes as well 
as clusters) are reporting the same time/timezone.

[https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]

 

So the question remains, where on a NiFi cluster comes the time from the UI and 
what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
getting CEST but the time is anyhow UTC instead of CEST... I really need to 
have the correct time in the UI as I don't know what the impact could be on our 
dataflows.

 

Any help would be really appreciated.

  was:
We just upgraded to NiFi 1.13.2 and Java 1.8.0_282

On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
have the issue that the UI does display the correct timezone (CEST, so UTC 
+2h), but in fact the time is displayed as UTC. NTP is enabled and working. The 
OS configuration/location is everywhere the same (doesn't matter if single or 
cluster NiFi). My tests below are all done at around 15:xx:xx local time (CEST).

As you can see below, the timezone seems to be correct, but the time itself is 
within NiFi 2h behind (so in fact UTC) compared to Windows. In earlier 
NiFi/java versions it was enough to multiple times restart the cluster, but on 
the newest versions this doesn't help anymore. It shows most of the time CEST 
with the wrong time or directly UTC.

!image-2021-04-13-15-14-06-930.png!

 

The single NiFi instances or the 2 node clusters are always fine. The issue 
exists only on our 8 node cluster. 
NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):

!image-2021-04-13-15-14-02-162.png!

 

If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
shows no summer time, so only GMT+1 instead of GMT+2. As well not what we want.
{code:java}
java.arg.20=-Duser.timezone="Europe/Zurich"{code}
!image-2021-04-13-15-14-56-690.png!

 

What Matt below suggested has been verified, all servers (single nodes as well 
as clusters) are reporting the same time/timezone.

[https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]

 

So the question remains, where on a NiFi cluster comes the time from the UI and 
what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
getting CEST but the time is anyhow UTC instead of CEST... I really need to 
have the correct time in the UI as I don't know what the impact could be on our 
dataflows.

 

Any help would be really appreciated.


> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, 

[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Labels: cluster openjdk timezone  (was: )

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> is within NiFi 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
> NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Labels: centos cluster openjdk timezone  (was: cluster openjdk timezone)

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
>  Labels: centos, cluster, openjdk, timezone
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> is within NiFi 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
> NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Updated] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-8423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-8423:
---
Component/s: Core UI

> Timezone wrong in UI for an 8 node cluster
> --
>
> Key: NIFI-8423
> URL: https://issues.apache.org/jira/browse/NIFI-8423
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core UI
>Affects Versions: 1.13.2
> Environment: 8 Node NiFi Cluster on CentOS 7
> OpenJDK 1.8.0_282
> Local timezone: Europe/Zurich (CEST or UTC+2h)
>Reporter: Josef Zahner
>Priority: Critical
> Attachments: image-2021-04-13-15-14-02-162.png, 
> image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png
>
>
> We just upgraded to NiFi 1.13.2 and Java 1.8.0_282
> On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
> have the issue that the UI does display the correct timezone (CEST, so UTC 
> +2h), but in fact the time is displayed as UTC. NTP is enabled and working. 
> The OS configuration/location is everywhere the same (doesn't matter if 
> single or cluster NiFi). My tests below are all done at around 15:xx:xx local 
> time (CEST).
> As you can see below, the timezone seems to be correct, but the time itself 
> is within NiFi 2h behind (so in fact UTC) compared to Windows. In earlier 
> NiFi/java versions it was enough to multiple times restart the cluster, but 
> on the newest versions this doesn't help anymore. It shows most of the time 
> CEST with the wrong time or directly UTC.
> !image-2021-04-13-15-14-06-930.png!
>  
> The single NiFi instances or the 2 node clusters are always fine. The issue 
> exists only on our 8 node cluster. 
> NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):
> !image-2021-04-13-15-14-02-162.png!
>  
> If we set the -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI 
> shows no summer time, so only GMT+1 instead of GMT+2. As well not what we 
> want.
> {code:java}
> java.arg.20=-Duser.timezone="Europe/Zurich"{code}
> !image-2021-04-13-15-14-56-690.png!
>  
> What Matt below suggested has been verified, all servers (single nodes as 
> well as clusters) are reporting the same time/timezone.
> [https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]
>  
> So the question remains, where on a NiFi cluster comes the time from the UI 
> and what could cause it that it is wrong? Sometimes I get UTC, sometimes I'm 
> getting CEST but the time is anyhow UTC instead of CEST... I really need to 
> have the correct time in the UI as I don't know what the impact could be on 
> our dataflows.
>  
> Any help would be really appreciated.





[jira] [Created] (NIFI-8423) Timezone wrong in UI for an 8 node cluster

2021-04-13 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-8423:
--

 Summary: Timezone wrong in UI for an 8 node cluster
 Key: NIFI-8423
 URL: https://issues.apache.org/jira/browse/NIFI-8423
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.13.2
 Environment: 8 Node NiFi Cluster on CentOS 7
OpenJDK 1.8.0_282
Local timezone: Europe/Zurich (CEST or UTC+2h)

Reporter: Josef Zahner
 Attachments: image-2021-04-13-15-14-02-162.png, 
image-2021-04-13-15-14-06-930.png, image-2021-04-13-15-14-56-690.png

We just upgraded to NiFi 1.13.2 and Java 1.8.0_282

On our 8 node NiFi 1.13.2 cluster with timezone Europe/Zurich (CEST/CET), we 
have the issue that the UI displays the correct timezone label (CEST, so UTC 
+2h), but the time itself is shown as UTC. NTP is enabled and working. The 
OS configuration/location is the same everywhere (regardless of single-node or 
cluster NiFi). My tests below were all done at around 15:xx:xx local time (CEST).

As you can see below, the timezone seems to be correct, but the time itself is 
within NiFi 2h behind (so in fact UTC) compared to Windows. In earlier 
NiFi/Java versions it was enough to restart the cluster multiple times, but on 
the newest versions this doesn't help anymore. Most of the time it shows CEST 
with the wrong time, or UTC directly.

!image-2021-04-13-15-14-06-930.png!

 

The single NiFi instances or the 2 node clusters are always fine. The issue 
exists only on our 8 node cluster. 
NiFi Single Node Screenshot, which is fine (CEST, so UTC + 2h):

!image-2021-04-13-15-14-02-162.png!

 

If we set -Duser.timezone to "Europe/Zurich" in bootstrap.conf, the UI shows 
no summer time, so only GMT+1 instead of GMT+2. That is also not what we want.
{code:java}
java.arg.20=-Duser.timezone="Europe/Zurich"{code}
!image-2021-04-13-15-14-56-690.png!

 

What Matt below suggested has been verified, all servers (single nodes as well 
as clusters) are reporting the same time/timezone.

[https://community.cloudera.com/t5/Support-Questions/NiFi-clock-is-off-by-one-hour-daylight-savings-problem/td-p/192942]

 

So the question remains: where does the time in the UI come from on a NiFi 
cluster, and what could cause it to be wrong? Sometimes I get UTC, sometimes I 
get CEST but the time is actually UTC instead of CEST... I really need to 
have the correct time in the UI, as I don't know what the impact could be on 
our dataflows.

 

Any help would be really appreciated.
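As a quick way to check what the JVM itself resolves as the default zone, a minimal sketch can be run on each node (the class name {{TzCheck}} is my own for illustration, not part of NiFi):

```java
import java.util.TimeZone;

public class TzCheck {
    public static void main(String[] args) {
        // -Duser.timezone takes precedence over the OS zone. Note that an
        // unknown or misquoted zone ID can silently fall back to GMT, which
        // would match a UI labelled with a zone but showing UTC wall-clock time.
        System.out.println("user.timezone  = " + System.getProperty("user.timezone"));
        System.out.println("effective zone = " + TimeZone.getDefault().getID());

        // Europe/Zurich is a DST-aware region ID: raw offset +1h, +2h in summer.
        TimeZone zurich = TimeZone.getTimeZone("Europe/Zurich");
        System.out.println("observes DST   = " + zurich.useDaylightTime());
        System.out.println("raw offset (h) = " + zurich.getRawOffset() / 3_600_000);
    }
}
```

Running this on every cluster node (with the same JVM arguments as NiFi) would show whether the nodes disagree about the effective zone.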





[jira] [Commented] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2020-04-27 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17093939#comment-17093939
 ] 

Josef Zahner commented on NIFI-6860:


[~tmelhase] in my case the issue was fully reproducible. I'm off until the 
beginning of June, but can I somehow help you reproduce the issue?

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Troy Melhase
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml, 
> login-identity-providers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0, and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now, after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In 

[jira] [Resolved] (NIFI-6883) PutSFTP Permissions Wrong with NiFi 1.10.0

2020-02-25 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner resolved NIFI-6883.

Resolution: Duplicate

> PutSFTP Permissions Wrong with NiFi 1.10.0
> --
>
> Key: NIFI-6883
> URL: https://issues.apache.org/jira/browse/NIFI-6883
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: OpenJDK 8, Centos 7.6, NiFi Cluster with 2 Nodes
>Reporter: Josef Zahner
>Priority: Major
> Attachments: Canvas Overview.png, Flowfile Permissions.png, PutSFTP 
> Config.png
>
>
> The PutSFTP processor no longer respects the flowfile attribute 
> "file.permissions". It always defaults to "000" if the PutSFTP field 
> "Permissions" is empty, even though the attribute has been set by 
> ListSFTP. Occurs since NiFi 1.10.0.
> *Workaround*: set the permissions manually in the field 
> "Permissions".
> *Steps to reproduce*: Check my canvas overview picture below with ListSFTP, 
> FetchSFTP, UpdateAttribute & PutSFTP processors. As input I've used the 
> "testfile.txt" and as output I've written "newname.txt". Permissions for the 
> output are "--" instead of "664".
> {code:java}
> /tmp/my_test:
> total 4.0K
> drwxrwxr-x   2 usera usera   26 Nov 19 09:40 .
> drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
> -rw-rw-r--   1 usera usera4 Nov 19 09:40 testfile.txt{code}
> {code:java}
> /tmp/my_test_out:
> total 4.0K
> drwxrwxr-x   2 usera usera   25 Nov 19 10:03 .
> drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
> --   1 usera usera4 Nov 19 10:03 newname.txt
> {code}
> Canvas Overview:
> !Canvas Overview.png|width=521,height=683!
>  
> Flowfile Permission before PutSFTP:
> !Flowfile Permissions.png|width=699,height=521!
> PutSFTP Config for field "Permissions" (default empty)
> !PutSFTP Config.png|width=788,height=547!
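For context on what the attribute carries, an octal permission string such as "664" corresponds to the rwx triplets shown by {{ls}} above. A small illustrative sketch of the mapping (the helper {{octalToRwx}} is my own, not NiFi code, and assumes the attribute is in octal form):

```java
public class PermDemo {
    // Expand an octal permission string such as "664" into the
    // rwx form shown by ls (each octal digit is one r/w/x triplet).
    static String octalToRwx(String octal) {
        StringBuilder sb = new StringBuilder();
        for (char c : octal.toCharArray()) {
            int d = c - '0';
            sb.append((d & 4) != 0 ? 'r' : '-');
            sb.append((d & 2) != 0 ? 'w' : '-');
            sb.append((d & 1) != 0 ? 'x' : '-');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(octalToRwx("664")); // rw-rw-r-- (the expected output)
        System.out.println(octalToRwx("000")); // --------- (what PutSFTP 1.10.0 wrote)
    }
}
```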





[jira] [Commented] (NIFI-6883) PutSFTP Permissions Wrong with NiFi 1.10.0

2020-02-25 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17045224#comment-17045224
 ] 

Josef Zahner commented on NIFI-6883:


This issue has been fixed in 1.11.x. I can't remember the Jira number.

> PutSFTP Permissions Wrong with NiFi 1.10.0
> --
>
> Key: NIFI-6883
> URL: https://issues.apache.org/jira/browse/NIFI-6883
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: OpenJDK 8, Centos 7.6, NiFi Cluster with 2 Nodes
>Reporter: Josef Zahner
>Priority: Major
> Attachments: Canvas Overview.png, Flowfile Permissions.png, PutSFTP 
> Config.png
>
>
> The PutSFTP processor no longer respects the flowfile attribute 
> "file.permissions". It always defaults to "000" if the PutSFTP field 
> "Permissions" is empty, even though the attribute has been set by 
> ListSFTP. Occurs since NiFi 1.10.0.
> *Workaround*: set the permissions manually in the field 
> "Permissions".
> *Steps to reproduce*: Check my canvas overview picture below with ListSFTP, 
> FetchSFTP, UpdateAttribute & PutSFTP processors. As input I've used the 
> "testfile.txt" and as output I've written "newname.txt". Permissions for the 
> output are "--" instead of "664".
> {code:java}
> /tmp/my_test:
> total 4.0K
> drwxrwxr-x   2 usera usera   26 Nov 19 09:40 .
> drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
> -rw-rw-r--   1 usera usera4 Nov 19 09:40 testfile.txt{code}
> {code:java}
> /tmp/my_test_out:
> total 4.0K
> drwxrwxr-x   2 usera usera   25 Nov 19 10:03 .
> drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
> --   1 usera usera4 Nov 19 10:03 newname.txt
> {code}
> Canvas Overview:
> !Canvas Overview.png|width=521,height=683!
>  
> Flowfile Permission before PutSFTP:
> !Flowfile Permissions.png|width=699,height=521!
> PutSFTP Config for field "Permissions" (default empty)
> !PutSFTP Config.png|width=788,height=547!





[jira] [Commented] (NIFI-6895) PutKudu Processor Warnings - Applying an operation in a closed session; this is unsafe

2020-01-31 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027400#comment-17027400
 ] 

Josef Zahner commented on NIFI-6895:


I can confirm that the issue has been solved with NiFi 1.11.0.

> PutKudu Processor Warnings - Applying an operation in a closed session; this 
> is unsafe
> --
>
> Key: NIFI-6895
> URL: https://issues.apache.org/jira/browse/NIFI-6895
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: Kudu 1.10.0; NiFi 1.10.0, OpenJDK 8 (232); 8 Node Cluster
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. We have seen about 
> 1’000 times more log output ({{nifi-app.log}}) since the upgrade, caused 
> mainly by the PutKudu processor.
> The log message we keep getting is: 
> {code:java}
> 2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
> org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
> session; this is unsafe{code}
> The PutKudu processor itself seems to work fine.
>  
> The line of the Apache Kudu client code that emits this message is here 
> (line 547):
>  
> [https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]
>  
> Since we are getting a huge amount of log lines, we had to insert the 
> following *workaround* in {{logback.xml}}:
> {code:java}
> 
> {code}
> Without suppressing the AsyncKuduSession messages we are getting multiple 
> gigabytes of data per hour, but sadly with the workaround we don't see any 
> PutKudu warnings anymore. 
>  
> Can the devs please check why we are getting the warning and fix the root 
> cause? Thanks in advance.
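The XML of the workaround was stripped by the mail archive. It was presumably a logger override along these lines, with the logger name taken from the warning itself; the exact level used in the original is an assumption:

```xml
<!-- Assumed reconstruction: silence WARNs from the Kudu client session.
     Side effect: all other sub-ERROR messages on this logger are hidden too. -->
<logger name="org.apache.kudu.client.AsyncKuduSession" level="ERROR"/>
```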





[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2020-01-31 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027398#comment-17027398
 ] 

Josef Zahner commented on NIFI-6908:


I can confirm that it works under NiFi 1.11.0. The memory leak is gone; we are 
already in production.

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Fix For: 1.11.0
>
> Attachments: PutKudu_Properties.png, PutKudu_Scheduling.png, 
> PutKudu_Settings.png, memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and after a few hours garbage 
> collection can no longer free it up.
> We have a NiFi 8-node cluster (31GB Java max memory configured) with a 
> streaming source which constantly generates about 2'500 flowfiles/2.5GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> trigger a manual garbage collection with the VisualVM profiler, but it 
> didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  





[jira] [Comment Edited] (NIFI-6958) Disabled State in Registry (in Sub PG) breaks Flow Update on Nifi Side

2020-01-31 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027323#comment-17027323
 ] 

Josef Zahner edited comment on NIFI-6958 at 1/31/20 9:49 AM:
-

[~markap14] are you aware of this issue? We have seen that you implemented the 
"Enabled/Disable State in Registry for NiFi" 
(https://issues.apache.org/jira/browse/NIFI-6025). The issue here is that we 
can't disable a processor within a sub process group, which is very annoying 
because as soon as you have such a processor you can't pull the process group 
out of the Registry - so the template in the Registry is completely broken.


was (Author: jzahner):
[~markap14] are you aware of this issue? We have seen that you implemented the 
"Enabled/Disable State in Registry for NiFi". The issue here is that we can't 
disable a processor within a sub process group, which is very annoying because 
as soon as you have such a processor you can't pull the process group out of 
the Registry - so the template in the Registry is completely broken.

> Disabled State in Registry (in Sub PG) breaks Flow Update on Nifi Side
> --
>
> Key: NIFI-6958
> URL: https://issues.apache.org/jira/browse/NIFI-6958
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: Nifi 1.10.0, Nifi-Registry 0.5.0
>Reporter: Silka Simmen
>Priority: Major
>
> We run into an error when trying to update or import a flow in NiFi from the 
> Registry: "No Processor with ID X belongs to this Process Group"
> To reproduce the error above we save a flow into the Registry that contains a 
> disabled processor in a sub process group:
>  * Process Group saved to Registry
>  ** Sub Process Group
>  *** Processor in Disabled State
> It seems that a flow cannot be updated or imported anymore on Nifi side as 
> soon as it contains a disabled processor in a sub process group.
> We think this bug might be introduced by this feature implementation: 
> https://issues.apache.org/jira/browse/NIFI-6025





[jira] [Commented] (NIFI-6958) Disabled State in Registry (in Sub PG) breaks Flow Update on Nifi Side

2020-01-31 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17027323#comment-17027323
 ] 

Josef Zahner commented on NIFI-6958:


[~markap14] are you aware of this issue? We have seen that you implemented the 
"Enabled/Disable State in Registry for NiFi". The issue here is that we can't 
disable a processor within a sub process group, which is very annoying because 
as soon as you have such a processor you can't pull the process group out of 
the Registry - so the template in the Registry is completely broken.

> Disabled State in Registry (in Sub PG) breaks Flow Update on Nifi Side
> --
>
> Key: NIFI-6958
> URL: https://issues.apache.org/jira/browse/NIFI-6958
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: Nifi 1.10.0, Nifi-Registry 0.5.0
>Reporter: Silka Simmen
>Priority: Major
>
> We run into an error when trying to update or import a flow in NiFi from the 
> Registry: "No Processor with ID X belongs to this Process Group"
> To reproduce the error above we save a flow into the Registry that contains a 
> disabled processor in a sub process group:
>  * Process Group saved to Registry
>  ** Sub Process Group
>  *** Processor in Disabled State
> It seems that a flow cannot be updated or imported anymore on Nifi side as 
> soon as it contains a disabled processor in a sub process group.
> We think this bug might be introduced by this feature implementation: 
> https://issues.apache.org/jira/browse/NIFI-6025





[jira] [Commented] (NIFI-6895) PutKudu Processor Warnings - Applying an operation in a closed session; this is unsafe

2019-12-05 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16988941#comment-16988941
 ] 

Josef Zahner commented on NIFI-6895:


Hi [~pvillard], we faced the issue in our production NiFi cluster, which I 
don't want to endanger again with tests. I can't promise that I'll find the 
time to set it up in the lab and test (it's not that easy, as we have to send 
a lot of data through the processor and the lab is not really ready for tests).

I'll keep you posted if I find the time to work on it. Cheers

> PutKudu Processor Warnings - Applying an operation in a closed session; this 
> is unsafe
> --
>
> Key: NIFI-6895
> URL: https://issues.apache.org/jira/browse/NIFI-6895
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: Kudu 1.10.0; NiFi 1.10.0, OpenJDK 8 (232); 8 Node Cluster
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Major
> Fix For: 1.11.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. We have seen about 
> 1’000 times more log output ({{nifi-app.log}}) since the upgrade, caused 
> mainly by the PutKudu processor.
> The log message we keep getting is: 
> {code:java}
> 2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
> org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
> session; this is unsafe{code}
> The PutKudu processor itself seems to work fine.
>  
> The line of the Apache Kudu client code that emits this message is here 
> (line 547):
>  
> [https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]
>  
> Since we are getting a huge amount of log lines, we had to insert the 
> following *workaround* in {{logback.xml}}:
> {code:java}
> 
> {code}
> Without suppressing the AsyncKuduSession messages we are getting multiple 
> gigabytes of data per hour, but sadly with the workaround we don't see any 
> PutKudu warnings anymore. 
>  
> Can the devs please check why we are getting the warning and fix the root 
> cause? Thanks in advance.





[jira] [Commented] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-28 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984243#comment-16984243
 ] 

Josef Zahner commented on NIFI-6908:


Please have a look at the pictures below. One additional hint: right after 
"disabling" the processor, it was possible to free up all the memory by 
triggering a garbage collection.

!PutKudu_Settings.png!

 

!PutKudu_Scheduling.png!

 

!PutKudu_Properties.png!

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: PutKudu_Properties.png, PutKudu_Scheduling.png, 
> PutKudu_Settings.png, memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and after a few hours garbage 
> collection can no longer free it up.
> We have a NiFi 8-node cluster (31GB Java max memory configured) with a 
> streaming source which constantly generates about 2'500 flowfiles/2.5GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> trigger a manual garbage collection with the VisualVM profiler, but it 
> didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  





[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-28 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6908:
---
Attachment: PutKudu_Properties.png
PutKudu_Scheduling.png
PutKudu_Settings.png

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: PutKudu_Properties.png, PutKudu_Scheduling.png, 
> PutKudu_Settings.png, memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and after a few hours garbage 
> collection can no longer free it up.
> We have a NiFi 8-node cluster (31GB Java max memory configured) with a 
> streaming source which constantly generates about 2'500 flowfiles/2.5GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> trigger a manual garbage collection with the VisualVM profiler, but it 
> didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  





[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-28 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6908:
---
Attachment: Screenshot 2019-11-28 at 09.57.11.png

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and after a few hours garbage 
> collection can no longer free it up.
> We have a NiFi 8-node cluster (31GB Java max memory configured) with a 
> streaming source which constantly generates about 2'500 flowfiles/2.5GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> trigger a manual garbage collection with the VisualVM profiler, but it 
> didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  





[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-28 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6908:
---
Attachment: (was: Screenshot 2019-11-28 at 09.57.11.png)

> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Assignee: Grant Henke
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and after a few hours garbage 
> collection can no longer free it up.
> We have a NiFi 8-node cluster (31GB Java max memory configured) with a 
> streaming source which constantly generates about 2'500 flowfiles/2.5GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried to 
> trigger a manual garbage collection with the VisualVM profiler, but it 
> didn't help.
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the codebase 
> from PutKudu 1.9.2 and use it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  





[jira] [Comment Edited] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-28 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984219#comment-16984219
 ] 

Josef Zahner edited comment on NIFI-6860 at 11/28/19 8:53 AM:
--

Hi Nathan

Of course I can share the config (I have replaced some secure keywords like 
passwords).

Yes, we have a keystore configured in authorizers.xml. The same as in the 
nifi.properties. To be honest I never thought about it, we just copied the 
keystore/truststore config. One special thing about the keystore, even though 
I think it's not relevant: we use "*.corproot.net" as the CN, but as SANs 
(subject alternative names) we have all the hostnames we use for NiFi, 
e.g. nifi-01.corproot.net and nifi-02.corproot.net. So in the end we can use 
just one keystore for all our NiFi nodes, no matter whether cluster or 
single node. And the keystore is a client & server cert; that's a 
requirement because we also use it for the cluster communication.

For a test I've removed the keystore from authorizers.xml config with java-11, 
same result - error 13.

*nifi.properties:*
{code:java}
nifi.security.user.authorizer=managed-authorizer
nifi.security.user.login.identity.provider=ldap-provider
{code}
 

*authorizers.xml -> (attached to ticket; header xml lines are missing, sorry)*

 

*login-identity-providers.xml:* *-> attached to ticket***

 

What else do you need?

 


was (Author: jzahner):
Hi Nathan

Of course I can share the config (I have replaced some secure keywords like 
passwords).

Yes, we have a keystore configured in authorizers.xml. The same as in the 
nifi.properties. To be honest I never thought about it, we just copied the 
keystore/truststore config. One special thing about the keystore, even though 
I think it's not relevant: we use "*.corproot.net" as the CN, but as SANs 
(subject alternative names) we have all the hostnames we use for NiFi, 
e.g. nifi-01.corproot.net and nifi-02.corproot.net. So in the end we can use 
just one keystore for all our NiFi nodes, no matter whether cluster or 
single node. And the keystore is a client & server cert; that's a 
requirement because we also use it for the cluster communication.

For a test I've removed the keystore from authorizers.xml config with java-11, 
same result - error 13.

*nifi.properties:*

 
{code:java}
nifi.security.user.authorizer=managed-authorizer
nifi.security.user.login.identity.provider=ldap-provider
{code}
 

 

*authorizers.xml -> (attached to ticket; header xml lines are missing, sorry)*

 

*login-identity-providers.xml:* *-> attached to ticket***

 

What else do you need?

 

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Nathan Gough
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml, 
> login-identity-providers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: 

[jira] [Commented] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-28 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16984219#comment-16984219
 ] 

Josef Zahner commented on NIFI-6860:


Hi Nathan

Of course I can share the config (I have replaced some secure keywords like 
passwords).

Yes, we have a keystore configured in authorizers.xml, the same one as in 
nifi.properties. To be honest I never thought about it; we just copied the 
keystore/truststore config. One specialty about the keystore, even though I 
think it's not relevant: we use "*.corproot.net" as the CN, but the SAN 
(subject alternative name) lists all the hostnames we use for NiFi, e.g. 
nifi-01.corproot.net and nifi-02.corproot.net. So in the end we can use a 
single keystore for all our NiFi nodes, no matter whether they run clustered 
or standalone. And the keystore holds a combined client & server certificate, 
which is a requirement because we also use it for cluster communication.

As a test I removed the keystore from the authorizers.xml config with Java 11; 
same result, error 13.
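As a side note, the SAN entries of such a multi-hostname certificate can be verified with standard OpenSSL tooling. A minimal sketch using throwaway file names (the {{-addext}}/{{-ext}} options assume OpenSSL 1.1.1 or newer; the names mirror the ones described above):

```shell
# Create a throwaway key + self-signed cert with a wildcard CN and two SAN entries
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/nifi-test.key -out /tmp/nifi-test.crt \
  -subj "/CN=*.corproot.net" \
  -addext "subjectAltName=DNS:nifi-01.corproot.net,DNS:nifi-02.corproot.net"

# Print only the SAN extension to confirm both hostnames are present
openssl x509 -in /tmp/nifi-test.crt -noout -ext subjectAltName
```

The same `openssl x509 ... -ext subjectAltName` call works against any certificate exported from the keystore.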

*nifi.properties:*

 
{code:java}
nifi.security.user.authorizer=managed-authorizer
nifi.security.user.login.identity.provider=ldap-provider
{code}
 

 

*authorizers.xml -> (attached to ticket; header xml lines are missing, sorry)*

 

*login-identity-providers.xml* -> attached to ticket

 

What else do you need?

 

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Nathan Gough
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml, 
> login-identity-providers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> 

[jira] [Updated] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-28 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6860:
---
Attachment: login-identity-providers.xml

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Nathan Gough
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml, 
> login-identity-providers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but besides that the 
> authorizers.xml is the same. Does anybody have an idea what 
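For context: LDAP "error code 13 - confidentiality required" generally means the directory server refuses a simple bind before the connection is encrypted, i.e. START_TLS was not actually negotiated. A START_TLS {{ldap-provider}} entry in {{login-identity-providers.xml}} normally carries its TLS material along these lines (a sketch; all values below are placeholders, not the reporter's configuration):

```xml
<provider>
    <identifier>ldap-provider</identifier>
    <class>org.apache.nifi.ldap.LdapProvider</class>
    <property name="Authentication Strategy">START_TLS</property>
    <property name="Url">ldap://ldap.example.net:389</property>
    <!-- TLS material used to negotiate START_TLS; without a usable TLS
         context the bind stays in the clear and the server may answer
         with "error code 13 - confidentiality required" -->
    <property name="TLS - Keystore">/opt/nifi/conf/keystore.jks</property>
    <property name="TLS - Keystore Password">changeit</property>
    <property name="TLS - Keystore Type">JKS</property>
    <property name="TLS - Truststore">/opt/nifi/conf/truststore.jks</property>
    <property name="TLS - Truststore Password">changeit</property>
    <property name="TLS - Truststore Type">JKS</property>
    <property name="TLS - Protocol">TLSv1.2</property>
</provider>
```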

[jira] [Updated] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-27 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6860:
---
Attachment: authorizers.xml

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Assignee: Nathan Gough
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png, authorizers.xml
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but besides that the 
> authorizers.xml is the same. Does anybody have an idea what could cause the 
> error? NiFi-5839 seems to be 

[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-27 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6908:
---
Description: 
PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
longer free up memory after a few hours.

We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
streaming source that constantly generates about 2'500 flowfiles/2.5 GB of 
data every 5 minutes. In our example the streaming source was running on 
"nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
grows and grows, and in the end the node became unstable and the dreaded 
"java.lang.OutOfMemoryError: Java heap space" message appeared. We tried a 
manual garbage collection with the VisualVM profiler, but it didn't help.

!memory_leak.png!

We are sure that PutKudu is the culprit, as we have now taken the PutKudu 
1.9.2 codebase and run it in NiFi 1.10.0 without any leaks at all.

With the official PutKudu 1.10.0 processor, our cluster crashed within 5-6 
hours under our current load, as the memory was completely full.

 

  was:
PutKudu 1.10.0 eats up all the heap memory and garbage collection can't anymore 
free up memory after a few hours.

We have an NiFi 8-Node cluster (31GB java max memory configured) with a 
streaming source which generates constantly about 2'500 flowfiles/2.5GB data in 
5 minutes. In our example the streaming source was running on "nifi-05" (green 
line). As you can see between 00:00 and 04:00 the memory grows and grows and at 
the end the node became instable and the dreaded "java.lang.OutOfMemoryError: 
Java heap space" message appeared. We tried to do a manual garbage collection 
with visualvm profiler, but it didn't helped.  

!memory_leak.png!

We are sure that the PutKudu is the culprit, as we have now taken the codebase 
from PutKudu 1.9.2 and use it now in NiFi 1.10.0 without any leaks at all.

Like this our cluster crashed within 5-6 hours with our current load as the 
memory was completely full.

 


> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
> longer free up memory after a few hours.
> We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
> streaming source that constantly generates about 2'500 flowfiles/2.5 GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried a 
> manual garbage collection with the VisualVM profiler, but it didn't help.  
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the PutKudu 
> 1.9.2 codebase and run it in NiFi 1.10.0 without any leaks at all.
> With the official PutKudu 1.10.0 processor, our cluster crashed within 5-6 
> hours under our current load, as the memory was completely full.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-27 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6908:
---
Description: 
PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
longer free up memory after a few hours.

We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
streaming source that constantly generates about 2'500 flowfiles/2.5 GB of 
data every 5 minutes. In our example the streaming source was running on 
"nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
grows and grows, and in the end the node became unstable and the dreaded 
"java.lang.OutOfMemoryError: Java heap space" message appeared. We tried a 
manual garbage collection with the VisualVM profiler, but it didn't help.

!memory_leak.png!

We are sure that PutKudu is the culprit, as we have now taken the PutKudu 
1.9.2 codebase and run it in NiFi 1.10.0 without any leaks at all.

Our cluster crashed like this within 5-6 hours under our current load, as the 
memory was completely full.

 

  was:
PutKudu 1.10.0 eats up all the heap memory and garbage collection can't anymore 
free up memory after a few hours.

We have an NiFi 8-Node cluster (31GB java max memory configured) with a 
streaming source which generates constantly about 2'500 flowfiles/2.5GB data in 
5 minutes. In our example the streaming source was running on "nifi-05" (green 
line). As you can see between 00:00 and 04:00 the memory grows and grows and at 
the end the node became instable and the dreaded "java.lang.OutOfMemoryError: 
Java heap space" message appeared. With tried to do a manual garbage collection 
with visualvm profiler, but it didn't helped.  

!memory_leak.png!

We are sure that the PutKudu is the culprit, as we have now taken the codebase 
from PutKudu 1.9.2 and use it now in NiFi 1.10.0 without any leaks at all.

Like this our cluster crashed within 5-6 hours with our current load as the 
memory was completely full.

 


> PutKudu 1.10.0 Memory Leak
> --
>
> Key: NIFI-6908
> URL: https://issues.apache.org/jira/browse/NIFI-6908
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.10.0
> Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: heap, kudu, oom
> Attachments: memory_leak.png
>
>
> PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
> longer free up memory after a few hours.
> We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
> streaming source that constantly generates about 2'500 flowfiles/2.5 GB of 
> data every 5 minutes. In our example the streaming source was running on 
> "nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
> grows and grows, and in the end the node became unstable and the dreaded 
> "java.lang.OutOfMemoryError: Java heap space" message appeared. We tried a 
> manual garbage collection with the VisualVM profiler, but it didn't help.  
> !memory_leak.png!
> We are sure that PutKudu is the culprit, as we have now taken the PutKudu 
> 1.9.2 codebase and run it in NiFi 1.10.0 without any leaks at all.
> Our cluster crashed like this within 5-6 hours under our current load, as 
> the memory was completely full.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-6908) PutKudu 1.10.0 Memory Leak

2019-11-27 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-6908:
--

 Summary: PutKudu 1.10.0 Memory Leak
 Key: NIFI-6908
 URL: https://issues.apache.org/jira/browse/NIFI-6908
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.10.0
 Environment: NiFi 1.10.0 8-Node Cluster; Kudu 1.10.0
Reporter: Josef Zahner
 Attachments: memory_leak.png

PutKudu 1.10.0 eats up all the heap memory, and garbage collection can no 
longer free up memory after a few hours.

We have a NiFi 8-node cluster (31 GB Java max memory configured) with a 
streaming source that constantly generates about 2'500 flowfiles/2.5 GB of 
data every 5 minutes. In our example the streaming source was running on 
"nifi-05" (green line). As you can see, between 00:00 and 04:00 the memory 
grows and grows, and in the end the node became unstable and the dreaded 
"java.lang.OutOfMemoryError: Java heap space" message appeared. We tried a 
manual garbage collection with the VisualVM profiler, but it didn't help.

!memory_leak.png!

We are sure that PutKudu is the culprit, as we have now taken the PutKudu 
1.9.2 codebase and run it in NiFi 1.10.0 without any leaks at all.

Our cluster crashed like this within 5-6 hours under our current load, as the 
memory was completely full.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6895) PutKudu Processor Warnings - Applying an operation in a closed session; this is unsafe

2019-11-21 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6895:
---
Description: 
We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. Since the upgrade we 
have seen roughly 1’000 times more log output ({{nifi-app.log}}), caused 
mainly by the PutKudu processor.

The log message we keep getting is: 
{code:java}
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe{code}
The PutKudu processor itself seems to work fine.

 

The line of code in the Apache Kudu client that produces the message is here 
(line 547):
 
[https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]

 

Since we are getting a huge number of log lines, we had to insert the 
following *workaround* in {{logback.xml}}:
{code:java}

{code}
Without suppressing the AsyncKuduSession messages we get multiple gigabytes 
of logs per hour, but sadly with the workaround we no longer see any PutKudu 
warnings at all. 

 

Can the devs please check why we are getting this warning and fix the root 
cause? Thanks in advance.

  was:
We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. We have seen that we got 
1’000 times more logs ({{nifi-app.log}}) since the upgrade, caused mainly by 
the PutKudu processor.

The logmessages we are always getting are:

 
{code:java}
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe{code}
 

The PutKudu processor itself seems to work fine.

The line of code which from Apache Kudu Client which reflects the message is 
here: (line 547):
[https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]

Since we are getting a huge amount of loglines we had to insert the following 
*workaround* in {{logback.xml}}:
{code:java}

{code}
Without suppressing the AsyncKuduSession messages we are getting multiple 
gigabytes of data per hour, but sadly with the workaround we don't see any 
PutKudu warnings anymore. 

 

Can the dev's please check why we are getting the warning and fix the root 
cause? Thanks in advance.


> PutKudu Processor Warnings - Applying an operation in a closed session; this 
> is unsafe
> --
>
> Key: NIFI-6895
> URL: https://issues.apache.org/jira/browse/NIFI-6895
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: Kudu 1.10.0; NiFi 1.10.0, OpenJDK 8 (232); 8 Node Cluster
>Reporter: Josef Zahner
>Priority: Major
>
> We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. Since the upgrade we 
> have seen roughly 1’000 times more log output ({{nifi-app.log}}), caused 
> mainly by the PutKudu processor.
> The log message we keep getting is: 
> {code:java}
> 2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
> org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
> session; this is unsafe{code}
> The PutKudu processor itself seems to work fine.
>  
> The line of code in the Apache Kudu client that produces the message is 
> here (line 547):
>  
> [https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]
>  
> Since we are getting a huge number of log lines, we had to insert the 
> following *workaround* in {{logback.xml}}:
> {code:java}
> 
> {code}
> Without suppressing the AsyncKuduSession messages we get multiple gigabytes 
> of logs per hour, but sadly with the workaround we no longer see any 
> PutKudu warnings at all. 
>  
> Can the devs please check why we are getting this warning and fix the root 
> cause? Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
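The {{logback.xml}} workaround quoted above lost its XML when the ticket was archived. Given that the stated goal was to suppress the WARN output from {{org.apache.kudu.client.AsyncKuduSession}}, it was presumably a logger override along these lines (a reconstruction, not the reporter's verbatim config):

```xml
<!-- Raise the level for the noisy Kudu client logger so its WARN
     "Applying an operation in a closed session" messages are dropped -->
<logger name="org.apache.kudu.client.AsyncKuduSession" level="ERROR"/>
```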


[jira] [Created] (NIFI-6895) PutKudu Processor Warnings - Applying an operation in a closed session; this is unsafe

2019-11-21 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-6895:
--

 Summary: PutKudu Processor Warnings - Applying an operation in a 
closed session; this is unsafe
 Key: NIFI-6895
 URL: https://issues.apache.org/jira/browse/NIFI-6895
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.10.0
 Environment: Kudu 1.10.0; NiFi 1.10.0, OpenJDK 8 (232); 8 Node Cluster
Reporter: Josef Zahner


We have just upgraded from NiFi 1.9.2 to NiFi 1.10.0. Since the upgrade we 
have seen roughly 1’000 times more log output ({{nifi-app.log}}), caused 
mainly by the PutKudu processor.

The log message we keep getting is:

 
{code:java}
2019-11-21 08:42:27,627 WARN [Timer-Driven Process Thread-2] 
org.apache.kudu.client.AsyncKuduSession Applying an operation in a closed 
session; this is unsafe{code}
 

The PutKudu processor itself seems to work fine.

The line of code in the Apache Kudu client that produces the message is here 
(line 547):
[https://github.com/apache/kudu/blob/master/java/kudu-client/src/main/java/org/apache/kudu/client/AsyncKuduSession.java]

Since we are getting a huge number of log lines, we had to insert the 
following *workaround* in {{logback.xml}}:
{code:java}

{code}
Without suppressing the AsyncKuduSession messages we get multiple gigabytes 
of logs per hour, but sadly with the workaround we no longer see any PutKudu 
warnings at all. 

 

Can the devs please check why we are getting this warning and fix the root 
cause? Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-4898) Remote Process Group in a SSL setup generates Java Exception

2019-11-19 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-4898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner resolved NIFI-4898.

Resolution: Not A Problem

Custom NARs removed

> Remote Process Group in a SSL setup generates Java Exception
> 
>
> Key: NIFI-4898
> URL: https://issues.apache.org/jira/browse/NIFI-4898
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.5.0
> Environment: NiFi Version 1.5.0
> Java 1.8.0_161-b12
> CentOS Linux release 7.4.1708
>Reporter: Josef Zahner
>Priority: Major
>
> In an SSL-secured NiFi setup, no matter whether it is a cluster or not, 
> NiFi throws a Java exception as soon as I create a "Remote Process Group". 
> It doesn't matter which URL I enter (even one that doesn't exist), or 
> whether I choose RAW or HTTP; the error is always the same and occurs as 
> soon as I click "add" on the "Remote Process Group".
> On NiFi 1.4.0 this works without any issues.
> Error:
> {code:java}
> 2018-02-21 10:42:10,006 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.fetchController(SiteToSiteRestApiClient.java:419)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:394)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:361)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getController(SiteToSiteRestApiClient.java:346)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.refreshFlowContents(StandardRemoteProcessGroup.java:842)
> at 
> org.apache.nifi.remote.StandardRemoteProcessGroup.lambda$initialize$0(StandardRemoteProcessGroup.java:193)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.run(Unknown Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(Unknown
>  Source)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown
>  Source)
> ... 3 common frames omitted
> 2018-02-21 10:42:10,009 ERROR [Remote Process Group 
> b7bde0cc-0161-1000-2e7f-3167a78d8386 Thread-1] 
> org.apache.nifi.engine.FlowEngine A flow controller task execution stopped 
> abnormally
> java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at java.util.concurrent.FutureTask.report(Unknown Source)
> at java.util.concurrent.FutureTask.get(Unknown Source)
> at org.apache.nifi.engine.FlowEngine.afterExecute(FlowEngine.java:100)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.NoSuchMethodError: 
> org.apache.http.impl.client.HttpClientBuilder.setSSLContext(Ljavax/net/ssl/SSLContext;)Lorg/apache/http/impl/client/HttpClientBuilder;
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.setupClient(SiteToSiteRestApiClient.java:278)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.getHttpClient(SiteToSiteRestApiClient.java:219)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1189)
> at 
> org.apache.nifi.remote.util.SiteToSiteRestApiClient.execute(SiteToSiteRestApiClient.java:1237)
> at 
> 
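The NoSuchMethodError above is the classic symptom of a classpath conflict: some NAR on the instance bundles an older httpclient jar that lacks HttpClientBuilder.setSSLContext, shadowing the version the framework code expects (consistent with the resolution "Custom NARs removed"). A quick way to locate such a bundle is to scan the NAR files, which are plain zip archives. This is a sketch, not an official NiFi tool; the helper name and default search term are illustrative:

```python
import sys
import zipfile
from pathlib import Path

def find_bundled_jars(nar_dir: str, needle: str = "httpclient") -> dict:
    """Scan every .nar (a NAR is just a zip) under nar_dir and report
    which ones bundle a jar whose file name contains `needle`."""
    hits = {}
    for nar in Path(nar_dir).glob("**/*.nar"):
        with zipfile.ZipFile(nar) as zf:
            jars = [n for n in zf.namelist()
                    if n.endswith(".jar") and needle in Path(n).name]
        if jars:
            hits[nar.name] = jars
    return hits

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. python find_jars.py /opt/nifi/lib
    for nar, jars in find_bundled_jars(sys.argv[1]).items():
        print(nar, jars)
```

Running it against the lib and extensions directories should show every NAR carrying its own httpclient copy, making the conflicting custom NAR easy to spot.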

[jira] [Created] (NIFI-6883) PutSFTP Permissions Wrong with NiFi 1.10.0

2019-11-19 Thread Josef Zahner (Jira)
Josef Zahner created NIFI-6883:
--

 Summary: PutSFTP Permissions Wrong with NiFi 1.10.0
 Key: NIFI-6883
 URL: https://issues.apache.org/jira/browse/NIFI-6883
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.10.0
 Environment: OpenJDK 8, Centos 7.6, NiFi Cluster with 2 Nodes
Reporter: Josef Zahner
 Attachments: Canvas Overview.png, Flowfile Permissions.png, PutSFTP 
Config.png

The PutSFTP processor no longer respects the flowfile attribute 
"file.permissions". It always sets "000" if the "Permissions" field of 
PutSFTP is empty, even though the attribute has been set by ListSFTP. 
Occurs since NiFi 1.10.0.

*Workaround*: set the permissions manually by writing them into the 
"Permissions" field.

*Steps to reproduce*: Check my canvas overview picture below with the ListSFTP, 
FetchSFTP, UpdateAttribute & PutSFTP processors. As input I've used 
"testfile.txt" and as output I've written "newname.txt". Permissions of the 
output are "--" instead of "664".
{code:java}
/tmp/my_test:
total 4.0K
drwxrwxr-x   2 usera usera   26 Nov 19 09:40 .
drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
-rw-rw-r--   1 usera usera4 Nov 19 09:40 testfile.txt{code}
{code:java}
/tmp/my_test_out:
total 4.0K
drwxrwxr-x   2 usera usera   25 Nov 19 10:03 .
drwxrwxrwt. 10 root root203 Nov 19 10:03 ..
--   1 usera usera4 Nov 19 10:03 newname.txt
{code}
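For reference, the symbolic mode string carried in the "file.permissions" attribute (visible in the flowfile screenshot below) corresponds directly to the octal value PutSFTP should apply. A small sketch of that mapping, with an illustrative helper name (not NiFi code):

```python
def symbolic_to_octal(perms: str) -> str:
    """Convert a 9-character symbolic permission string, e.g. "rw-rw-r--"
    as carried in the file.permissions attribute, to its octal form."""
    if len(perms) != 9:
        raise ValueError("expected 9 characters, e.g. 'rw-rw-r--'")
    digits = []
    for i in range(0, 9, 3):
        owner_group_other = perms[i:i + 3]
        value = 0
        if owner_group_other[0] == "r":
            value += 4  # read bit
        if owner_group_other[1] == "w":
            value += 2  # write bit
        if owner_group_other[2] == "x":
            value += 1  # execute bit
        digits.append(str(value))
    return "".join(digits)
```

So "rw-rw-r--" maps to "664", which is what the output file should have received instead of "000".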
Canvas Overview:

!Canvas Overview.png|width=521,height=683!

 

Flowfile Permission before PutSFTP:

!Flowfile Permissions.png|width=699,height=521!

PutSFTP Config for field "Permissions" (default empty)

!PutSFTP Config.png|width=788,height=547!





[jira] [Commented] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-12 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16972607#comment-16972607
 ] 

Josef Zahner commented on NIFI-6860:


More news: in parallel with the upgrade to NiFi 1.10.0 we moved from Java 1.8.0 
to Java 11. In our case Java 11 breaks the LDAP START_TLS feature; if I switch 
back to Java 1.8.0 the error message is gone and NiFi 1.10.0 starts with the 
same config.

As a workaround we will now switch back to Java 1.8.0. But we are glad that we 
can still use the START_TLS feature (as it is the successor of LDAPS).

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> 

[jira] [Updated] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue

2019-11-12 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6860:
---
Summary: Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue  
(was: Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue)

> Upgrade NiFi 1.9.2 to 1.10.0 - Java11 LDAP (START_TLS) Issue
> 
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but beside of that at least 
> the authorizers.xml is the same. Anybody an idea 

[jira] [Updated] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue

2019-11-12 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6860:
---
Labels: Java11 LDAP Nifi START-TLS  (was: LDAP Nifi START-TLS)

> Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue
> -
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: Java11, LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but beside of that at least 
> the authorizers.xml is the same. Anybody an idea what could cause the error? 
> NiFi-5839 seems to be related to the property above. 

[jira] [Commented] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue

2019-11-11 Thread Josef Zahner (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16971432#comment-16971432
 ] 

Josef Zahner commented on NIFI-6860:


The tcpdump was created during startup of NiFi 1.10.0 with authentication 
strategy *START_TLS*. It shows a simple bind, which is why I'm getting error 13.

!Screenshot 2019-11-11 at 11.14.52.png!
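For wire-level debugging of captures like this one: a client performing START_TLS must send the StartTLS ExtendedRequest (OID 1.3.6.1.4.1.1466.20037, RFC 4511) on the plain-text connection before any bind; if the capture instead shows an immediate simple bind, the server correctly answers with error 13 (confidentiality required). A sketch of that request's BER encoding, useful for spotting it in a dump (helper name is illustrative):

```python
def starttls_extended_request(message_id: int = 1) -> bytes:
    """Build the minimal BER-encoded LDAP StartTLS ExtendedRequest
    (RFC 4511): LDAPMessage { messageID, [APPLICATION 23] { [0] OID } }."""
    oid = b"1.3.6.1.4.1.1466.20037"
    request_name = bytes([0x80, len(oid)]) + oid               # [0] requestName
    ext_req = bytes([0x77, len(request_name)]) + request_name  # [APPLICATION 23]
    msg_id = bytes([0x02, 0x01, message_id])                   # INTEGER messageID
    body = msg_id + ext_req
    return bytes([0x30, len(body)]) + body                     # SEQUENCE wrapper
```

If the first LDAP message in the capture does not match this shape (leading 0x30, then the 1.3.6.1.4.1.1466.20037 OID), the client skipped the TLS upgrade, which matches the behaviour observed here under Java 11.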

 

> Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue
> -
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but beside 

[jira] [Updated] (NIFI-6860) Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue

2019-11-11 Thread Josef Zahner (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josef Zahner updated NIFI-6860:
---
Attachment: Screenshot 2019-11-11 at 11.14.52.png

> Upgrade NiFi 1.9.2 to 1.10.0 - LDAP (START_TLS) Issue
> -
>
> Key: NIFI-6860
> URL: https://issues.apache.org/jira/browse/NIFI-6860
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.10.0
> Environment: NiFi Single Node with HTTPS/LDAP enabled; CentOS 7.x
>Reporter: Josef Zahner
>Priority: Blocker
>  Labels: LDAP, Nifi, START-TLS
> Attachments: Screenshot 2019-11-11 at 11.14.52.png
>
>
> We would like to upgrade from NiFi 1.9.2 to 1.10.0 and we have HTTPS with 
> LDAP (START_TLS) authentication successfully enabled on 1.9.2. Now after 
> upgrading, we have an issue which prevents NiFi from starting up:
> {code:java}
> 2019-11-11 08:29:30,447 ERROR [main] o.s.web.context.ContextLoader Context 
> initialization failed
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.springframework.security.config.annotation.web.configuration.WebSecurityConfiguration':
>  Unsatisfied dependency expressed through method 
> 'setFilterChainProxySecurityConfigurer' parameter 1; nested exception is 
> org.springframework.beans.factory.BeanExpressionException: Expression parsing 
> failed; nested exception is 
> org.springframework.beans.factory.UnsatisfiedDependencyException: Error 
> creating bean with name 
> 'org.apache.nifi.web.NiFiWebApiSecurityConfiguration': Unsatisfied dependency 
> expressed through method 'setJwtAuthenticationProvider' parameter 0; nested 
> exception is org.springframework.beans.factory.BeanCreationException: Error 
> creating bean with name 'jwtAuthenticationProvider' defined in class path 
> resource [nifi-web-security-context.xml]: Cannot resolve reference to bean 
> 'authorizer' while setting constructor argument; nested exception is 
> org.springframework.beans.factory.BeanCreationException: Error creating bean 
> with name 'authorizer': FactoryBean threw exception on object creation; 
> nested exception is 
> org.springframework.ldap.AuthenticationNotSupportedException: [LDAP: error 
> code 13 - confidentiality required]; nested exception is 
> javax.naming.AuthenticationNotSupportedException: [LDAP: error code 13 - 
> confidentiality required]
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredMethodElement.inject(AutowiredAnnotationBeanPostProcessor.java:666)
> at 
> org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:87)
> at 
> org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1269)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:551)
> at 
> org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:481)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:312)
> at 
> org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:230)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:308)
> at 
> org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:197)
> at 
> org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:761)
> at 
> org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:867)
> at 
> org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:543)
> at 
> org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:443)
> at 
> org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:325)
> at 
> org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107){code}
> In authorizers.xml we added the line “{{false}}”, but beside of that at least 
> the authorizers.xml is the same. Anybody an idea what could cause the error? 
> NiFi-5839 seems to be related to the property above. Other than that I found 
