[jira] [Commented] (NIFI-13181) Azure Blob and ADLS processors throw NoSuchMethodError when Service Principal is used

2024-06-17 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855621#comment-17855621
 ] 

Yuanhao Zhu commented on NIFI-13181:


[~turcsanyip] Ok! Thanks for fixing! Much appreciated! :D

> Azure Blob and ADLS processors throw NoSuchMethodError when Service Principal 
> is used
> -
>
> Key: NIFI-13181
> URL: https://issues.apache.org/jira/browse/NIFI-13181
> Project: Apache NiFi
>  Issue Type: Bug
>Affects Versions: 1.26.0
>Reporter: Zoltán Kornél Török
>Assignee: Peter Turcsanyi
>Priority: Major
> Fix For: 1.27.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I spent some time testing NiFi 1.26 and found this error when trying to use 
> blob storage related processors such as ListAzureBlobStorage_v12.
> {code:java}
> 2024-05-08 09:09:06,416 WARN reactor.core.Exceptions: throwIfFatal detected a 
> jvm fatal exception, which is thrown and logged below:
> java.lang.NoSuchMethodError: 
> com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(Z)Lcom/microsoft/aad/msal4j/AbstractApplicationBase$Builder;
>     at 
> com.azure.identity.implementation.IdentityClientBase.getConfidentialClient(IdentityClientBase.java:233)
>     at 
> com.azure.identity.implementation.IdentityClient.lambda$getConfidentialClientApplication$4(IdentityClient.java:130)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:45)
>     at 
> reactor.core.publisher.MonoCacheTime.subscribeOrReturn(MonoCacheTime.java:143)
>     at 
> reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:53)
>     at 
> reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
>     at reactor.core.publisher.MonoUsing.subscribe(MonoUsing.java:102)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:53)
>     at 
> reactor.core.publisher.MonoFromFluxOperator.subscribe(MonoFromFluxOperator.java:81)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:53)
>     at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
>     at 
> reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.subscribeNext(MonoIgnoreThen.java:263)
>     at reactor.core.publisher.MonoIgnoreThen.subscribe(MonoIgnoreThen.java:51)
>     at 
> reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:53)
>     at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
>     at 
> reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:203)
>     at 
> reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
>     at 
> reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:57)
>     at reactor.core.publisher.MonoDefer.subscribe(MonoDefer.java:53)
>     at reactor.core.publisher.Mono.subscribe(Mono.java:4491)
>     at 
> reactor.core.publisher.FluxFlatMap.trySubscribeScalarMap(FluxFlatMap.java:203)
>     at 
> reactor.core.publisher.MonoFlatMap.subscribeOrReturn(MonoFlatMap.java:53)
>     at reactor.core.publisher.Flux.subscribe(Flux.java:8628)
>     at reactor.core.publisher.Flux.blockLast(Flux.java:2760)
>     at 
> com.azure.core.util.paging.ContinuablePagedByIteratorBase.requestPage(ContinuablePagedByIteratorBase.java:102)
>     at 
> com.azure.core.util.paging.ContinuablePagedByItemIterable$ContinuablePagedByItemIterator.<init>(ContinuablePagedByItemIterable.java:75)
>     at 
> com.azure.core.util.paging.ContinuablePagedByItemIterable.iterator(ContinuablePagedByItemIterable.java:55)
>     at 
> com.azure.core.util.paging.ContinuablePagedIterable.iterator(ContinuablePagedIterable.java:141)
>     at 
> org.apache.nifi.processors.azure.storage.ListAzureBlobStorage_v12.performListing(ListAzureBlobStorage_v12.java:230)
>     at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.lambda$listByTrackingEntities$11(AbstractListProcessor.java:1126)
>     at 
> org.apache.nifi.processor.util.list.ListedEntityTracker.trackEntities(ListedEntityTracker.java:272)
>     at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingEntities(AbstractListProcessor.java:1124)
>     at 
> org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:529)
>     at 
> org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1361)
>     at 
> org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:247)
>     at 
> 

[jira] [Commented] (NIFI-13181) Azure Blob and ADLS processors throw NoSuchMethodError when Service Principal is used

2024-06-17 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855575#comment-17855575
 ] 

Yuanhao Zhu commented on NIFI-13181:


FYI, I've also tried to use the Azure Key Vault parameter provider to fetch 
secrets from Azure Key Vault and ended up getting the same NoSuchMethodError. 
I suspect they have the same root cause:
2024-06-17 09:30:04,716 ERROR [NiFi Web Server-88] 
c.a.core.implementation.AccessTokenCache {"az.sdk.message":"Failed to acquire 
a new access 
token.","exception":"'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'"}
java.lang.NoSuchMethodError: 
'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'
Apparently it's trying to call AbstractApplicationBase.Builder, which was only 
introduced in msal4j 1.15.0.
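
For anyone who wants to verify this locally, a small reflection check along the 
following lines can confirm which msal4j jar is actually on the classpath and 
whether logPii(boolean) has the return type that azure-identity expects. This is 
only a diagnostic sketch: the msal4j class and method names come from the error 
above, everything else (class name, output) is an assumption.
{code:java}
import java.lang.reflect.Method;
import java.net.URL;

// Diagnostic sketch (not NiFi code): report which msal4j jar is loaded and whether
// ConfidentialClientApplication.Builder.logPii(boolean) returns the type that
// azure-identity (compiled against msal4j 1.15.x) links against at runtime.
public class Msal4jCompatCheck {
    public static void main(String[] args) throws Exception {
        Class<?> builder =
                Class.forName("com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder");
        URL jarLocation = builder.getProtectionDomain().getCodeSource().getLocation();
        System.out.println("msal4j loaded from: " + jarLocation);

        Method logPii = builder.getMethod("logPii", boolean.class);
        String returnType = logPii.getReturnType().getName();
        System.out.println("logPii(boolean) returns: " + returnType);

        // azure-identity expects AbstractApplicationBase$Builder here (msal4j >= 1.15.0);
        // an older msal4j declares a different return type, so the JVM cannot resolve the
        // call and throws the NoSuchMethodError seen above.
        boolean compatible =
                "com.microsoft.aad.msal4j.AbstractApplicationBase$Builder".equals(returnType);
        System.out.println("matches azure-identity expectation: " + compatible);
    }
}
{code}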


[jira] [Comment Edited] (NIFI-13181) Azure Blob and ADLS processors throw NoSuchMethodError when Service Principal is used

2024-06-17 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855575#comment-17855575
 ] 

Yuanhao Zhu edited comment on NIFI-13181 at 6/17/24 10:02 AM:
--

FYI, after upgrading to 1.26.0 I've also tried to use the Azure Key Vault 
parameter provider to fetch secrets from Azure Key Vault and ended up getting 
the same NoSuchMethodError. I suspect they have the same root cause:
2024-06-17 09:30:04,716 ERROR [NiFi Web Server-88] 
c.a.core.implementation.AccessTokenCache {"az.sdk.message":"Failed to acquire 
a new access 
token.","exception":"'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'"}
java.lang.NoSuchMethodError: 
'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'
Apparently it's trying to call AbstractApplicationBase.Builder, which was only 
introduced in msal4j 1.15.0.


was (Author: JIRAUSER305688):
FYI, I've tried to use azure keyvault parameter provider to fetch secrets from 
azure keyvault and also end up getting same NoSuchMethodError. I suspect they 
have same root cause:
2024-06-17 09:30:04,716 ERROR [NiFi Web Server-88] 
c.a.core.implementation.AccessTokenCache \{"az.sdk.message":"Failed to acquire 
a new access 
token.","exception":"'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'"}
java.lang.NoSuchMethodError: 
'com.microsoft.aad.msal4j.AbstractApplicationBase$Builder 
com.microsoft.aad.msal4j.ConfidentialClientApplication$Builder.logPii(boolean)'
Apparently it's trying to call AbstractApplicationBase.Builder which is only 
introduced in msal4j in 1.15.0

[jira] [Commented] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-11 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853929#comment-17853929
 ] 

Yuanhao Zhu commented on NIFI-13340:


[~markap14] Hey Mark! Thanks so much for your feedback and effort. We really 
appreciate that you looked into this issue and fixed it. Wish you all the 
best and have a nice day! :)

> Flowfiles stopped to be ingested before a processor group
> -
>
> Key: NIFI-13340
> URL: https://issues.apache.org/jira/browse/NIFI-13340
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.25.0
> Environment: Host OS :ubuntu 20.04 in wsl
> CPU: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
> RAM: 128GB
> ZFS ARC cache was configured to be maximum 4 GB
>Reporter: Yuanhao Zhu
>Assignee: Mark Payne
>Priority: Major
>  Labels: data-consistency, statestore
> Fix For: 2.0.0-M4
>
> Attachments: image-2024-06-03-14-42-37-280.png, 
> image-2024-06-03-14-44-15-590.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Background:* We run NiFi as a standalone instance in a Docker container on 
> all our stages; the state management provider used by our instance is the 
> local WAL-backed provider.
> {*}Description{*}: We observed that the flowfiles in front of one of our 
> processor groups stop being ingested once in a while, and it happens on all 
> our stages without a noticeable pattern. The flowfile concurrency policy of the 
> processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE 
> and the outbound policy is set to BATCH_OUTPUT. The ingestion should continue 
> since the processor group had already sent out every flowfile in it, but it 
> stopped. 
>  
> There is only one brute-force solution from our side (we need to delete the 
> processor group and restore it from NiFi Registry again), and the occurrence 
> of this issue has impacted our data ingestion.
> !image-2024-06-03-14-42-37-280.png!
> !image-2024-06-03-14-44-15-590.png!
> As you can see in the screenshots, the processor group 'Delete before 
> Insert' has no more flowfiles to output, but it still does not ingest the data 
> queued at the input port.
>  
> {{In the log file I found the following:}}
>  
> {code:java}
> 2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
> o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert] because Outbound Policy is Batch Output and valve is already 
> open to allow data to flow out of group{code}
>  
> {{ }}
> {{Also in the diagnostics, I found the following for the 'Delete before 
> Insert' processor group:}}
> {{ }}
> {code:java}
> Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete 
> existing reports(This is the parent processor group of the Delete before 
> Insert)
> Currently Have Data Flowing In: []
> Currently Have Data Flowing Out: 
> [StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert]]
> Reason for Not allowing data to flow in:
>     Data Valve is already allowing data to flow out of group:
>         
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert]
> Reason for Not allowing data to flow out:
>     Output is Allowed:
>         
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert] {code}
> {{ }}
> {{Which is clearly {*}not correct{*}, since there is currently no data 
> flowing out of the 'Delete before Insert' processor group.}}
> {{ }}
> {{We dug through the source code of StandardDataValve.java and found that 
> the data valve's state is stored every time the valve is opened and 
> closed, so the likely cause of this issue is that the processor group 
> id was put into the state map when data flowed in, but somehow the removal of 
> the entry was not successful. We are aware that if the state map is not stored 
> before NiFi restarts, it could lead to such circumstances, but in the 
> recent occurrences of this issue there was no NiFi restart recorded or 
> observed at the time when all those flowfiles started to queue up in front of 
> the processor group.}}
> {{ }}
>  
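
Below is a simplified sketch of the open/close bookkeeping described in the 
quoted description above, under the assumption that the valve persists the set 
of groups currently flowing out and reloads it from the state map. It is 
illustrative only, not the actual org.apache.nifi.groups.StandardDataValve 
implementation, and all class and method names in it are made up.
{code:java}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch only (NOT the real StandardDataValve): the group id is stored
// when the valve opens to let data flow out and removed when it closes. If that
// removal is ever lost, the group keeps looking "open for output" after the state
// is reloaded, and input into the group is refused indefinitely.
public class DataValveSketch {

    // Stand-in for the locally persisted (WAL-backed) state map from the report.
    private final Map<String, String> stateMap = new HashMap<>();
    private final Set<String> groupsWithDataFlowingOut = new HashSet<>();

    public boolean allowDataFlowIn(String groupId) {
        // Mirrors the TRACE message in the report: input is refused while the valve
        // still believes data is flowing out of the same group.
        if (groupsWithDataFlowingOut.contains(groupId)) {
            System.out.println("Will not allow data to flow into " + groupId
                    + " because Outbound Policy is Batch Output and valve is already"
                    + " open to allow data to flow out of group");
            return false;
        }
        return true;
    }

    public void openFlowOut(String groupId) {
        groupsWithDataFlowingOut.add(groupId);
        stateMap.put(groupId, "OPEN");   // state is written every time the valve opens...
    }

    public void closeFlowOut(String groupId) {
        groupsWithDataFlowingOut.remove(groupId);
        stateMap.remove(groupId);        // ...and every time it closes; losing this removal
                                         // is the failure mode suspected in the report.
    }

    public void restoreFromState() {
        // On restore, every id still present in the state map is treated as flowing out,
        // which is how a stale entry could wedge the group.
        groupsWithDataFlowingOut.clear();
        groupsWithDataFlowingOut.addAll(stateMap.keySet());
    }
}
{code}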



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-04 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851926#comment-17851926
 ] 

Yuanhao Zhu commented on NIFI-13340:


[~joewitt] Also, something I noticed about the TEST processor group and our 
'Delete before Insert' processor group is that both of them involve 
duplicating the flowfiles from the input port, and then one of the duplicates 
gets terminated inside the processor group. I guess that might have something 
to do with the cause? :)




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-04 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851908#comment-17851908
 ] 

Yuanhao Zhu edited comment on NIFI-13340 at 6/4/24 7:13 AM:


[~joewitt] Hey Joe! That does sound similar, and I was able to reproduce the 
problem with the flow you uploaded; however, I replaced the funnel with a log 
message processor so that the flowfile output by TEST would not be queued up. 
Anyway, I think the behavior you mentioned could indeed have a similar cause to 
mine. Thank you for the information!


was (Author: JIRAUSER305688):
[~joewitt] Hey Joe! That does sounds similar, and I was able to reproduce the 
problem with the flow you uploaded, however, I replaced the output port as the 
log message processor so that the flowfile outputed by TEST will not be queued 
up. Anyway, I think the behavior you mentioned could indeed have similar causes 
as mine. Thank you for your information!


[jira] [Comment Edited] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-04 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851908#comment-17851908
 ] 

Yuanhao Zhu edited comment on NIFI-13340 at 6/4/24 6:48 AM:


[~joewitt] Hey Joe! That does sound similar, and I was able to reproduce the 
problem with the flow you uploaded; however, I replaced the output port with a 
log message processor so that the flowfile output by TEST would not be queued 
up. Anyway, I think the behavior you mentioned could indeed have a similar 
cause to mine. Thank you for the information!


was (Author: JIRAUSER305688):
[~joewitt] Hey Joe! That does sounds similar, and I was able to reproduce the 
problem with the flow you uploaded, however, I used replaced the output port as 
the log message processor so that the flowfile outputed by TEST will not be 
queued up. Anyway, I think the behavior you mentioned could indeed have similar 
causes as mine. Thank you for your information!


[jira] [Commented] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-04 Thread Yuanhao Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851908#comment-17851908
 ] 

Yuanhao Zhu commented on NIFI-13340:


[~joewitt] Hey Joe! That does sound similar, and I was able to reproduce the 
problem with the flow you uploaded; however, I replaced the output port with a 
log message processor so that the flowfile output by TEST would not be queued 
up. Anyway, I think the behavior you mentioned could indeed have a similar 
cause to mine. Thank you for the information!




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Description: 
*Background:* We run NiFi as a standalone instance in a Docker container on 
all our stages; the state management provider used by our instance is the local 
WAL-backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stop being ingested once in a while, and it happens on all 
our stages without a noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already sent out every flowfile in it, but it stopped. 

 

There is only one brute-force solution from our side (we need to delete the 
processor group and restore it from NiFi Registry again), and the occurrence of 
this issue has impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshots, the processor group 'Delete before 
Insert' has no more flowfiles to output, but it still does not ingest the data 
queued at the input port.

 

{{In the log file I found the following:}}

 
{code:java}
2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group{code}
 

{{ }}
{{Also in the diagnostics, I found the following for the 'Delete before Insert' 
processor group:}}
{{ }}
{code:java}
Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete 
existing reports(This is the parent processor group of the Delete before Insert)
Currently Have Data Flowing In: []
Currently Have Data Flowing Out: 
[StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]]
Reason for Not allowing data to flow in:
    Data Valve is already allowing data to flow out of group:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]
Reason for Not allowing data to flow out:
    Output is Allowed:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] {code}
{{ }}
{{Which is clearly {*}not correct{*}, since there is currently no data 
flowing out of the 'Delete before Insert' processor group.}}
{{ }}
{{We dug through the source code of StandardDataValve.java and found that the 
data valve's state is stored every time the valve is opened and closed, so 
the likely cause of this issue is that the processor group id was put 
into the state map when data flowed in, but somehow the removal of the entry 
was not successful. We are aware that if the state map is not stored before 
NiFi restarts, it could lead to such circumstances, but in the recent 
occurrences of this issue there was no NiFi restart recorded or observed at the 
time when all those flowfiles started to queue up in front of the processor 
group.}}
{{ }}

 

  was:
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

{{In the log file I found the following:}}

 
{code:java}
2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group{code}
 

{{ }}
{{Also in the diagnostics, I found the following for the 'Delete before Insert' 

[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Priority: Blocker  (was: Critical)




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Description: 
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

{{In the log file I found the following:}}

 
{code:java}
2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group{code}
 

{{ }}
{{Also in the diagnostics, I found the following for the 'Delete before Insert' 
processor group:}}
{{ }}
{code:java}
Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete 
existing reports(This is the parent processor group of the Delete before Insert)
Currently Have Data Flowing In: []
Currently Have Data Flowing Out: 
[StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]]
Reason for Not allowing data to flow in:
    Data Valve is already allowing data to flow out of group:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]
Reason for Not allowing data to flow out:
    Output is Allowed:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] {code}
{{ }}
{{Which is clearly {*}not correct{*}. since there are currently not data 
flowing out from the 'Delete before Insert' processor group.}}
{{ }}
{{We dig through the source code of StandardDataValve.java and found that that 
data valve's states are stored every time the data valve is opened and close so 
the potential reason causing this issue is that the processor group id was put 
into the statemap when data flowed in but somehow the removal of the entry was 
not successful. We are aware that if the statemap is not stored before the nifi 
restarts, it could lead to such circumstances, but in the recent occurrences of 
this issue, there were not nifi restart recorded or observed at the time when 
all those flowfiles started to queue in front of the processor group}}
{{ }}

 

  was:
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

{{In the log file I found the following:}}

 
{code:java}

{code}
{{2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group }}

{{ }}
{{Also in the diagnostics, I found the 

[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Description: 
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

{{In the log file I found the following:}}

 
{code:java}

{code}
{{2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group }}

{{ }}
{{Also in the diagnostics, I found the following for the 'Delete before Insert' 
processor group:}}
{{ }}
{code:java}
Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete 
existing reports(This is the parent processor group of the Delete before Insert)
Currently Have Data Flowing In: []
Currently Have Data Flowing Out: 
[StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]]
Reason for Not allowing data to flow in:
    Data Valve is already allowing data to flow out of group:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert]
Reason for Not allowing data to flow out:
    Output is Allowed:
        
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] {code}

{{ }}
{{Which is clearly {*}not correct{*}. since there are currently not data 
flowing out from the 'Delete before Insert' processor group.}}
{{ }}
{{We dig through the source code of StandardDataValve.java and found that that 
data valve's states are stored every time the data valve is opened and close so 
the potential reason causing this issue is that the processor group id was put 
into the statemap when data flowed in but somehow the removal of the entry was 
not successful. We are aware that if the statemap is not stored before the nifi 
restarts, it could lead to such circumstances, but in the recent occurrences of 
this issue, there were not nifi restart recorded or observed at the time when 
all those flowfiles started to queue in front of the processor group}}
{{ }}

 

  was:
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

{{In the log file I found the following:}}

{{```}}
{{{color:#ff}2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group{color}}}

{{```}}
{{ }}
{{Also in the diagnostics, 

[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Description: 
*Background:* We run NiFi as a standalone instance in a Docker container on 
all our stages; the state management provider used by our instance is the local 
WAL-backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stop being ingested once in a while, and it happens on all 
our stages without a noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already sent out every flowfile in it, but it stopped. 

 

There is only one brute-force workaround on our side (we have to manually switch the 
flowfile concurrency to Unbounded and then switch it back to make it work 
again), and the occurrences of this issue have impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshots, the processor group 'Delete before 
Insert' has no more flowfiles to output, but it still does not ingest the data 
queued at its input port.

 

In the log file I found the following:

{code:java}
2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] o.apache.nifi.groups.StandardDataValve Will not allow data to flow into StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert] because Outbound Policy is Batch Output and valve is already open to allow data to flow out of group
{code}

Also in the diagnostics, I found the following for the 'Delete before Insert' processor group:
{code:java}
Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete existing reports (This is the parent processor group of the Delete before Insert)
Currently Have Data Flowing In: []
Currently Have Data Flowing Out: [StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]]
Reason for Not allowing data to flow in:
    Data Valve is already allowing data to flow out of group:
        StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]
Reason for Not allowing data to flow out:
    Output is Allowed:
        StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]
{code}
This is clearly *not correct*, since there is currently no data flowing out of the 'Delete before Insert' processor group.

We dug through the source code of StandardDataValve.java and found that the data valve's state is stored every time the valve is opened and closed, so a potential cause of this issue is that the processor group id was put into the state map when data flowed in, but the removal of the entry somehow did not succeed. We are aware that if the state map is not persisted before NiFi restarts, it could lead to such a situation, but in the recent occurrences of this issue there was no NiFi restart recorded or observed at the time when the flowfiles started to queue in front of the processor group.

  was:
*Background:* We run nifi as a standalone instance in a docker container in on 
all our stages, the statemanagement provider used by our instance is the local 
WAL backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stopped to be ingested once in a while and it happens on all 
our stages without noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already send out every flowfile in it, but it stopped. 

 

There is only one brutal solution from our side(We have to manually switch the 
flowfile concurrency to unbounded and then switch it back to make it work 
again) and the occurrence of this issue had impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshot that the processor group 'Delete before 
Insert' has no more flowfile to output but still it does not ingest the data 
queued in the input port

 

In the log file I found the following:
{color:#FF}2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
 before Insert] because Outbound Policy is Batch Output and valve is already 
open to allow data to flow out of group{color}
 
Also 

[jira] [Updated] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-13340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanhao Zhu updated NIFI-13340:
---
Flags: Important  (was: Patch,Important)

> Flowfiles stopped to be ingested before a processor group
> -
>
> Key: NIFI-13340
> URL: https://issues.apache.org/jira/browse/NIFI-13340
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.25.0
> Environment: Host OS :ubuntu 20.04 in wsl
> CPU: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
> RAM: 128GB
> ZFS ARC cache was configured to be maximum 4 GB
>Reporter: Yuanhao Zhu
>Priority: Critical
>  Labels: data-consistency, statestore
> Attachments: image-2024-06-03-14-42-37-280.png, 
> image-2024-06-03-14-44-15-590.png
>
>
> *Background:* We run nifi as a standalone instance in a docker container in 
> on all our stages, the statemanagement provider used by our instance is the 
> local WAL backed provider.
> {*}Description{*}: We observed that the flowfiles in front of one of our 
> processor groups stopped to be ingested once in a while and it happens on all 
> our stages without noticeable pattern. The flowfile concurrency policy of the 
> processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE 
> and the outbound policy is set to BATCH_OUTPUT. The ingestion should continue 
> since the processor group had already send out every flowfile in it, but it 
> stopped. 
>  
> There is only one brutal solution from our side(We have to manually switch 
> the flowfile concurrency to unbounded and then switch it back to make it work 
> again) and the occurrence of this issue had impacted our data ingestion.
> !image-2024-06-03-14-42-37-280.png!
> !image-2024-06-03-14-44-15-590.png!
> As you can see in the screenshot that the processor group 'Delete before 
> Insert' has no more flowfile to output but still it does not ingest the data 
> queued in the input port
>  
> In the log file I found the following:
> {color:#FF}2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] 
> o.apache.nifi.groups.StandardDataValve Will not allow data to flow into 
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert] because Outbound Policy is Batch Output and valve is already 
> open to allow data to flow out of group{color}
>  
> Also in the diagnostics, I found the following for the 'Delete before Insert' 
> processor group:
>  
> Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete 
> existing reports(This is the parent processor group of the Delete before 
> Insert)
> Currently Have Data Flowing In: []
> {color:#FF}Currently Have Data Flowing Out: 
> [StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert]]{color}
> Reason for Not allowing data to flow in:
>     Data Valve is already allowing data to flow out of group:
>         
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert]
> Reason for Not allowing data to flow out:
>     Output is Allowed:
>         
> StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete
>  before Insert]
>  
> Which is clearly {*}not correct{*}. since there are currently not data 
> flowing out from the 'Delete before Insert' processor group.
>  
> We dig through the source code of StandardDataValve.java and found that that 
> data valve's states are stored every time the data valve is opened and close 
> so the potential reason causing this issue is that the processor group id was 
> put into the statemap when data flowed in but somehow the removal of the 
> entry was not successful. We are aware that if the statemap is not stored 
> before the nifi restarts, it could lead to such circumstances, but in the 
> recent occurrences of this issue, there were not nifi restart recorded or 
> observed at the time when all those flowfiles started to queue in front of 
> the processor group
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-13340) Flowfiles stopped to be ingested before a processor group

2024-06-03 Thread Yuanhao Zhu (Jira)
Yuanhao Zhu created NIFI-13340:
--

 Summary: Flowfiles stopped to be ingested before a processor group
 Key: NIFI-13340
 URL: https://issues.apache.org/jira/browse/NIFI-13340
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.25.0
 Environment: Host OS: Ubuntu 20.04 in WSL
CPU: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz
RAM: 128GB
ZFS ARC cache was configured to be maximum 4 GB

Reporter: Yuanhao Zhu
 Attachments: image-2024-06-03-14-42-37-280.png, 
image-2024-06-03-14-44-15-590.png

*Background:* We run NiFi as a standalone instance in a Docker container on 
all our stages; the state management provider used by our instance is the local 
WAL-backed provider.

{*}Description{*}: We observed that the flowfiles in front of one of our 
processor groups stop being ingested once in a while, and it happens on all 
our stages without a noticeable pattern. The flowfile concurrency policy of the 
processor group that stopped ingesting data is set to SINGLE_BATCH_PER_NODE and 
the outbound policy is set to BATCH_OUTPUT. The ingestion should continue since 
the processor group had already sent out every flowfile in it, but it stopped. 

 

There is only one brute-force workaround on our side (we have to manually switch the 
flowfile concurrency to Unbounded and then switch it back to make it work 
again), and the occurrences of this issue have impacted our data ingestion.
!image-2024-06-03-14-42-37-280.png!

!image-2024-06-03-14-44-15-590.png!

As you can see in the screenshots, the processor group 'Delete before 
Insert' has no more flowfiles to output, but it still does not ingest the data 
queued at its input port.

 

In the log file I found the following:

{code:java}
2024-06-03 11:34:01,772 TRACE [Timer-Driven Process Thread-15] o.apache.nifi.groups.StandardDataValve Will not allow data to flow into StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert] because Outbound Policy is Batch Output and valve is already open to allow data to flow out of group
{code}
 
Also in the diagnostics, I found the following for the 'Delete before Insert' 
processor group:
 
{code:java}
Process Group e6510a87-aa78-3268-1b11-3c310f0ad144, Name = Search and Delete existing reports (This is the parent processor group of the Delete before Insert)
Currently Have Data Flowing In: []
Currently Have Data Flowing Out: [StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]]
Reason for Not allowing data to flow in:
    Data Valve is already allowing data to flow out of group:
        StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]
Reason for Not allowing data to flow out:
    Output is Allowed:
        StandardProcessGroup[identifier=5eb0ad69-e8ed-3ba0-52da-af94fb9836cd,name=Delete before Insert]
{code}
 
This is clearly *not correct*, since there is currently no data flowing 
out of the 'Delete before Insert' processor group.
 
We dug through the source code of StandardDataValve.java and found that the 
data valve's state is stored every time the valve is opened and closed, so a 
potential cause of this issue is that the processor group id was put into the 
state map when data flowed in, but the removal of the entry somehow did not 
succeed. We are aware that if the state map is not persisted before NiFi 
restarts, it could lead to such a situation, but in the recent occurrences of 
this issue there was no NiFi restart recorded or observed at the time when 
the flowfiles started to queue in front of the processor group.
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)