[GitHub] [nifi] EndzeitBegins commented on pull request #7481: NIFI-8128: add support for specifying a password for Sentinel
EndzeitBegins commented on PR #7481: URL: https://github.com/apache/nifi/pull/7481#issuecomment-1636682094 @exceptionfactory Thanks for the quick review. I understand. I was thinking the same, but assumed it might be acceptable given that the test library used, `embedded-redis`, does the same. I'll take a look at replacing `embedded-redis`, which hasn't seen maintenance for a long time by the way, with Testcontainers. If that proves too time-consuming, I'd propose removing the binary and completely `@Disable`-ing the one test that requires a newer version, with a hint that one must provide a newer binary locally in order to execute it. I think it's good to at least have a test case that reproduces an undesired behaviour before fixing it. Would you be fine with both approaches? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-11758) Add local file upload option in PutAzure*Storage processors
[ https://issues.apache.org/jira/browse/NIFI-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandor Soma Abonyi updated NIFI-11758: -- Fix Version/s: 2.0.0 1.23.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add local file upload option in PutAzure*Storage processors > --- > > Key: NIFI-11758 > URL: https://issues.apache.org/jira/browse/NIFI-11758 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Fix For: 2.0.0, 1.23.0 > > Time Spent: 2h 40m > Remaining Estimate: 0h > > There are cases when the files to be uploaded to Azure Storage are available > on the local filesystem where NiFi is running. That is, the flow could read > and upload the files directly from the filesystem without adding it in NiFi's > content repo which is an overhead in this case (can be relevant for huge > files). > Add "Data to Upload" property with options "FlowFile's Content" (default, > current behaviour) and "Local File". Using the latter, the user can by-pass > the content repo and upload the file from the local filesystem to Azure > Storage directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
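[Editorial note] The "Data to Upload" property described above amounts to choosing the upload source: stream the FlowFile's content as before, or open the named local file directly and skip the content repository. A minimal, hypothetical sketch of that choice using only the JDK; `UploadSourceSelector` and its names are illustrative and are not NiFi's actual API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of the "Data to Upload" choice: either the FlowFile's
// content (modelled here as a byte array) or a file streamed straight from
// the local filesystem, bypassing the content repository.
public class UploadSourceSelector {

    public enum DataToUpload { FLOWFILE_CONTENT, LOCAL_FILE }

    public static InputStream openUploadStream(DataToUpload mode,
                                               byte[] flowFileContent,
                                               Path localFile) throws IOException {
        switch (mode) {
            case LOCAL_FILE:
                // Bypass the content repo: stream the bytes directly from disk.
                return Files.newInputStream(localFile);
            case FLOWFILE_CONTENT:
            default:
                return new ByteArrayInputStream(flowFileContent);
        }
    }
}
```

The returned stream would then be handed to the Azure upload call, so a huge local file never has to be copied into the content repo first.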
[jira] [Commented] (NIFI-11758) Add local file upload option in PutAzure*Storage processors
[ https://issues.apache.org/jira/browse/NIFI-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743321#comment-17743321 ] ASF subversion and git services commented on NIFI-11758: Commit 2f9bb2095c18b8f1e84e3effcba7e60d93f63aa5 in nifi's branch refs/heads/support/nifi-1.x from Peter Turcsanyi [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=2f9bb2095c ] NIFI-11758: Added FileResourceService and used it in PutAzure*Storage processors for local file upload - Renamed classes from DataUpload to ResourceTransfer and updated references - Disabled testNonReadableFile() on Windows due to Posix permissions - Replaced utility methods with functional handling of FileResource - Corrected FlowFile InputStream access using Optional.orElseGet() Backported - Updated 2.0.0-SNAPSHOT references to 1.23.0-SNAPSHOT - Replaced InputStream.readAllBytes() with IOUtils.toByteArray(inputStream) to address Java8 incompatibility - Replaced Optional.isEmpty() with Optional.isPresent() to address Java8 incompatibility This closes: #7458 Co-authored-by: David Handermann Signed-off-by: Nandor Soma Abonyi (cherry picked from commit 437995b75a4237b7bf9d304f7693cf3b53371a9f) > Add local file upload option in PutAzure*Storage processors > --- > > Key: NIFI-11758 > URL: https://issues.apache.org/jira/browse/NIFI-11758 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Time Spent: 2h 40m > Remaining Estimate: 0h > > There are cases when the files to be uploaded to Azure Storage are available > on the local filesystem where NiFi is running. That is, the flow could read > and upload the files directly from the filesystem without adding it in NiFi's > content repo which is an overhead in this case (can be relevant for huge > files). > Add "Data to Upload" property with options "FlowFile's Content" (default, > current behaviour) and "Local File". 
Using the latter, the user can by-pass > the content repo and upload the file from the local filesystem to Azure > Storage directly.
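[Editorial note] The backport notes in the commit above mention two Java 8 incompatibilities: `InputStream.readAllBytes()` (Java 9+) was replaced with `IOUtils.toByteArray(inputStream)`, and `Optional.isEmpty()` (Java 11+) with a negated `Optional.isPresent()`. A self-contained sketch of both substitutions, using a hand-rolled copy loop in place of commons-io's `IOUtils` so the example stays JDK-only:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Optional;

public class Java8Compat {

    // Java 8 stand-in for InputStream.readAllBytes() (Java 9+).
    // commons-io's IOUtils.toByteArray(in) does the same job; this is the
    // equivalent manual buffer-copy loop.
    public static byte[] toByteArray(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        return out.toByteArray();
    }

    // Java 8 stand-in for Optional.isEmpty() (Java 11+): negate isPresent().
    public static <T> boolean isEmpty(Optional<T> optional) {
        return !optional.isPresent();
    }
}
```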
[jira] [Created] (NIFI-11815) Dynamic Controller Service for Database Connections/Pooling
Joe Witt created NIFI-11815: --- Summary: Dynamic Controller Service for Database Connections/Pooling Key: NIFI-11815 URL: https://issues.apache.org/jira/browse/NIFI-11815 Project: Apache NiFi Issue Type: New Feature Reporter: Joe Witt Consider a user that interacts with a large number of databases/tables/etc. across various flows that run at different times; there could be thousands of such configurations. Each unique combination needs its own controller service, which in turn leads to thousands of services and fairly painful configuration. The current DBCPConnectionPool controller service works great when you know all those values and they're stable for a given processor/purpose. A DynamicDBCPConnectionPool could offer a far better configuration experience for the more ad-hoc and varied cases mentioned above. Consider a single ControllerService instance configured with a specific user and password, while other values like database name and table name are configurable via expression language, properties, etc., potentially passed in on a per-flow-file basis. The DynamicDBCPConnectionPool would then derive the necessary values from the expression language evaluation and the controller service settings, then look up any actual ConnectionPools that match those inputs. If one exists, use it; if it doesn't, make a new one. Keep a small cache of 10 (or some configurable number) of these, and let them age off and be shut down when no longer recently/actively used. This cuts down from thousands of controller services to a single one while still yielding good performance, assuming usage of a given combination of inputs is cache-hit friendly. Far better user experience and resource management.
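[Editorial note] The lookup-or-create-with-age-off idea described above can be sketched as a small LRU cache keyed by the evaluated connection settings. `PoolCache` and all its names are hypothetical; a real DynamicDBCPConnectionPool would also close evicted pools and expire idle ones, so this only shows the caching shape:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch: one service instance keeps a small, access-ordered
// cache of real connection pools keyed by the evaluated settings (e.g. a
// string derived from JDBC URL + table), evicting the least recently used
// entry once a configurable cap is exceeded.
public class PoolCache<P> {

    private final Function<String, P> poolFactory;
    private final Map<String, P> pools;

    public PoolCache(int maxSize, Function<String, P> poolFactory) {
        this.poolFactory = poolFactory;
        // accessOrder=true makes iteration order LRU; removeEldestEntry
        // enforces the size cap on insertion.
        this.pools = new LinkedHashMap<String, P>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, P> eldest) {
                // A real implementation would close the evicted pool here.
                return size() > maxSize;
            }
        };
    }

    // key would come from expression-language evaluation,
    // e.g. "jdbc:postgresql://host/db1|tableA".
    public synchronized P getOrCreate(String key) {
        return pools.computeIfAbsent(key, poolFactory);
    }

    public synchronized int size() {
        return pools.size();
    }
}
```

With periodic, cache-hit-friendly usage of a given input combination, most requests reuse an existing pool and only the cold tail pays pool-creation cost.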
[jira] [Updated] (NIFI-11814) Controller Services that are enabling (not enabled) appear to block parameter context changes
[ https://issues.apache.org/jira/browse/NIFI-11814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-11814: Description: Flagging as a blocker given the problem it can cause. Scenario:
* User has a significant number of Controller Services (more than 1000 of them)
* Some Controller Services reference Param Context values, some don't.
* A handful of the Controller Services are failing to start but are not disabled. They're invalid but listed as 'enabling' when looking at diagnostics, for instance:
{noformat}
DBCPConnectionPool : 1003 total, {enabling=1, enabled=278, disabled=724}
{noformat}
* Now the user tries to edit a parameter in a parameter context that does not reference any controller services (or they make a new param context and then try to add a param at all). It will time out after about 30 seconds, failing due to
{noformat}
2023-07-14 14:21:13,791 WARN [Replicate Request Thread-219480] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET /nifi-api/flow/process-groups/root/controller-services to host:9443 due to java.net.ProtocolException: unexpected end of stream
2023-07-14 14:21:13,791 WARN [Replicate Request Thread-219480] o.a.n.c.c.h.r.ThreadPoolRequestReplicator java.net.ProtocolException: unexpected end of stream
    at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:415)
    at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
    at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
    at okio.RealBufferedSource.exhausted(RealBufferedSource.kt:197)
    at okio.InflaterSource.refill(InflaterSource.kt:112)
    at okio.InflaterSource.readOrInflate(InflaterSource.kt:76)
    at okio.InflaterSource.read(InflaterSource.kt:49)
    at okio.GzipSource.read(GzipSource.kt:69)
    at okio.Buffer.writeAll(Buffer.kt:1290)
    at okio.RealBufferedSource.readByteArray(RealBufferedSource.kt:236)
    at okhttp3.ResponseBody.bytes(ResponseBody.kt:124)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.getResponseBytes(OkHttpReplicationClient.java:168)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:138)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:645)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:869)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2023-07-14 14:21:13,791 ERROR [Parameter Context Update Thread-2] o.a.n.web.api.ParameterContextResource Failed to update Parameter Context
org.apache.nifi.web.util.LifecycleManagementException: Failed while waiting for Controller Services to finish transitioning to a state of DISABLED
    at org.apache.nifi.web.util.ClusterReplicationComponentLifecycle.activateControllerServices(ClusterReplicationComponentLifecycle.java:469)
    at org.apache.nifi.web.util.ParameterUpdateManager.disableControllerServices(ParameterUpdateManager.java:273)
    at org.apache.nifi.web.util.ParameterUpdateManager.updateParameterContexts(ParameterUpdateManager.java:144)
    at org.apache.nifi.web.api.ParameterContextResource.lambda$submitUpdateRequest$17(ParameterContextResource.java:832)
    at org.apache.nifi.web.api.concurrent.AsyncRequestManager$2.run(AsyncRequestManager.java:117)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java
{noformat}
* Ironically, if you then look at why some of the controller services failed to enable, it can be for reasons like a parameter they reference not being available. You could not go fix that, as doing so would then time out due to the enabling controller service. For example {noformat} 2023-07-14 22:21:26,912 ERROR [Timer-Driven Process Thread-4] o.a.n.c.s.StandardController
[jira] [Created] (NIFI-11814) Controller Services that are enabling (not enabled) appear to block parameter context changes
Joe Witt created NIFI-11814: --- Summary: Controller Services that are enabling (not enabled) appear to block parameter context changes Key: NIFI-11814 URL: https://issues.apache.org/jira/browse/NIFI-11814 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.22.0, 1.21.0 Reporter: Joe Witt Fix For: 1.23.0 Flagging as a blocker given the problem it can cause. Scenario:
* User has a significant number of Controller Services (more than 1000 of them)
* Some Controller Services reference Param Context values, some don't.
* A handful of the Controller Services are failing to start but are not disabled. They're invalid but listed as 'enabling' when looking at diagnostics, for instance: 'DBCPConnectionPool : 1003 total, {enabling=1, enabled=278, disabled=724}'
* Now the user tries to edit a parameter in a parameter context that does not reference any controller services (or they make a new param context and then try to add a param at all). It will time out after about 30 seconds, failing due to
{noformat}
2023-07-14 14:21:13,791 WARN [Replicate Request Thread-219480] o.a.n.c.c.h.r.ThreadPoolRequestReplicator Failed to replicate request GET /nifi-api/flow/process-groups/root/controller-services to host:9443 due to java.net.ProtocolException: unexpected end of stream
2023-07-14 14:21:13,791 WARN [Replicate Request Thread-219480] o.a.n.c.c.h.r.ThreadPoolRequestReplicator java.net.ProtocolException: unexpected end of stream
    at okhttp3.internal.http1.Http1ExchangeCodec$ChunkedSource.read(Http1ExchangeCodec.kt:415)
    at okhttp3.internal.connection.Exchange$ResponseBodySource.read(Exchange.kt:276)
    at okio.RealBufferedSource.read(RealBufferedSource.kt:189)
    at okio.RealBufferedSource.exhausted(RealBufferedSource.kt:197)
    at okio.InflaterSource.refill(InflaterSource.kt:112)
    at okio.InflaterSource.readOrInflate(InflaterSource.kt:76)
    at okio.InflaterSource.read(InflaterSource.kt:49)
    at okio.GzipSource.read(GzipSource.kt:69)
    at okio.Buffer.writeAll(Buffer.kt:1290)
    at okio.RealBufferedSource.readByteArray(RealBufferedSource.kt:236)
    at okhttp3.ResponseBody.bytes(ResponseBody.kt:124)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.getResponseBytes(OkHttpReplicationClient.java:168)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:138)
    at org.apache.nifi.cluster.coordination.http.replication.okhttp.OkHttpReplicationClient.replicate(OkHttpReplicationClient.java:130)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator.replicateRequest(ThreadPoolRequestReplicator.java:645)
    at org.apache.nifi.cluster.coordination.http.replication.ThreadPoolRequestReplicator$NodeHttpRequest.run(ThreadPoolRequestReplicator.java:869)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
2023-07-14 14:21:13,791 ERROR [Parameter Context Update Thread-2] o.a.n.web.api.ParameterContextResource Failed to update Parameter Context
org.apache.nifi.web.util.LifecycleManagementException: Failed while waiting for Controller Services to finish transitioning to a state of DISABLED
    at org.apache.nifi.web.util.ClusterReplicationComponentLifecycle.activateControllerServices(ClusterReplicationComponentLifecycle.java:469)
    at org.apache.nifi.web.util.ParameterUpdateManager.disableControllerServices(ParameterUpdateManager.java:273)
    at org.apache.nifi.web.util.ParameterUpdateManager.updateParameterContexts(ParameterUpdateManager.java:144)
    at org.apache.nifi.web.api.ParameterContextResource.lambda$submitUpdateRequest$17(ParameterContextResource.java:832)
    at org.apache.nifi.web.api.concurrent.AsyncRequestManager$2.run(AsyncRequestManager.java:117)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java
{noformat}
[jira] [Commented] (NIFI-11758) Add local file upload option in PutAzure*Storage processors
[ https://issues.apache.org/jira/browse/NIFI-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743316#comment-17743316 ] ASF subversion and git services commented on NIFI-11758: Commit 437995b75a4237b7bf9d304f7693cf3b53371a9f in nifi's branch refs/heads/main from Peter Turcsanyi [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=437995b75a ] NIFI-11758: Added FileResourceService and used it in PutAzure*Storage processors for local file upload - Renamed classes from DataUpload to ResourceTransfer and updated references - Disabled testNonReadableFile() on Windows due to Posix permissions - Replaced utility methods with functional handling of FileResource - Corrected FlowFile InputStream access using Optional.orElseGet() This closes: #7458 Co-authored-by: David Handermann Signed-off-by: Nandor Soma Abonyi > Add local file upload option in PutAzure*Storage processors > --- > > Key: NIFI-11758 > URL: https://issues.apache.org/jira/browse/NIFI-11758 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Peter Turcsanyi >Assignee: Peter Turcsanyi >Priority: Major > Time Spent: 2.5h > Remaining Estimate: 0h > > There are cases when the files to be uploaded to Azure Storage are available > on the local filesystem where NiFi is running. That is, the flow could read > and upload the files directly from the filesystem without adding it in NiFi's > content repo which is an overhead in this case (can be relevant for huge > files). > Add "Data to Upload" property with options "FlowFile's Content" (default, > current behaviour) and "Local File". Using the latter, the user can by-pass > the content repo and upload the file from the local filesystem to Azure > Storage directly. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] asfgit closed pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
asfgit closed pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage… URL: https://github.com/apache/nifi/pull/7458 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] nandorsoma commented on pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
nandorsoma commented on PR #7458: URL: https://github.com/apache/nifi/pull/7458#issuecomment-1636531128 Merging to `main` and `support/nifi-1.x`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-11813) Removal of Event Driven Scheduling Strategy
Pierre Villard created NIFI-11813: - Summary: Removal of Event Driven Scheduling Strategy Key: NIFI-11813 URL: https://issues.apache.org/jira/browse/NIFI-11813 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Pierre Villard Assignee: Pierre Villard Fix For: 2.0.0 As part of NiFi 2.0 we want to remove the Event-Driven scheduling strategy, which has long been marked as experimental and has not been shown to bring any performance improvement. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Resolved] (NIFI-1696) Event driven tasks appear not to auto restart when data in queues but no new data
[ https://issues.apache.org/jira/browse/NIFI-1696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-1696. -- Resolution: Abandoned Removal of event driven scheduling strategy in NiFi 2.0 > Event driven tasks appear not to auto restart when data in queues but no new > data > - > > Key: NIFI-1696 > URL: https://issues.apache.org/jira/browse/NIFI-1696 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 0.6.0 >Reporter: Joe Witt >Priority: Critical > > The scenario: > - 3 node cluster > - node 'NodeA' is primary and running a processor 'ProcP1' set to event > driven scheduling > - 'ProcP1' wasn't working due to a config issue but there were a few events > on its input queues. > - restarted node 'NodeA' after fixing config issue. > - No new events had come in but those events were sitting there. > Result: The 'ProcP1' appears to be in scheduled state but does not get > scheduled to run - perhaps because no new event triggered it? Seems like > perhaps we're not checking if there are already events at restart? Stopping > and starting 'ProcP1' then resulted in things going back to normal. > It is not obvious to me how to recreate this in a repeatable way. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-8128) RedisDistributedMapCacheClientService doesn't work with password protected sentinel
[ https://issues.apache.org/jira/browse/NIFI-8128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743285#comment-17743285 ] endzeit commented on NIFI-8128: --- I just opened the pull request [#7481|https://github.com/apache/nifi/pull/7481] which should resolve this issue by allowing an optional Sentinel password to be provided. > RedisDistributedMapCacheClientService doesn't work with password protected > sentinel > --- > > Key: NIFI-8128 > URL: https://issues.apache.org/jira/browse/NIFI-8128 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.11.4 >Reporter: DEOM Damien >Assignee: endzeit >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > > RedisDistributedMapCacheClientService doesn't work with a password-protected > sentinel. > Standalone mode with a password works fine. > Removing the password in sentinel solves the issue. > > > NB: if Redis has authentication, NiFi expects the password to be that of > Redis, not that of Sentinel. > If Redis has a password but not Sentinel, and no password is specified in NiFi, we get > this message: > > {{org.springframework.data.redis.RedisConnectionFailureException: Cannot get > Jedis connection; nested exception is > redis.clients.jedis.exceptions.JedisException: Could not get a resource from > the pool}} > > If both Redis and Sentinel have the same password, we get this message: > > {{ failed to process session due to All sentinels down, cannot determine > where is mymaster master is running...; Processor Administratively Yielded > for 1 sec: redis.clients.jedis.exceptions.JedisConnectionException: All > sentinels down, cannot determine where is mymaster master is running...}} > > The documentation should be updated to facilitate the establishment of a > working, secured cluster. > > Original discussion > [https://stackoverflow.com/questions/65412299/nifi-redis-sentinel-integration] -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] EndzeitBegins opened a new pull request, #7481: NIFI-8128: add support for specifying a password for Sentinel
EndzeitBegins opened a new pull request, #7481: URL: https://github.com/apache/nifi/pull/7481 # Summary [NIFI-8128](https://issues.apache.org/jira/browse/NIFI-8128) In order to reproduce the issue in a test case, the Redis Sentinel setting `sentinel auth-pass` is required, which is only available in newer Redis server binaries than the one bundled in the `com.github.kstyrc:embedded-redis` dependency. I have therefore bundled a more recent binary (for Linux only). # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [x] Build completed using `mvn clean install -P contrib-check` - [x] JDK 17 ### Licensing - [x] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [x] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [x] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
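For context, the `sentinel auth-pass` setting referenced in the PR summary is a server-side Redis Sentinel directive. A minimal sketch of the relevant Sentinel configuration follows; host names and passwords are placeholders, and the combination of directives shown is an illustration of the password-protected setup this issue is about, not a complete configuration:

```
# sentinel.conf -- sketch only; host names and passwords are placeholders
sentinel monitor mymaster redis-host 6379 2
# password Sentinel uses to authenticate against the monitored Redis nodes
sentinel auth-pass mymaster redis-password
# password clients (such as NiFi) must present to the Sentinel itself
requirepass sentinel-password
```

With `requirepass` set on the Sentinel, a client has to authenticate against Sentinel with one password and against the Redis data nodes with another, which is why the service needs the separate, optional Sentinel password property this PR introduces.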
[jira] [Resolved] (NIFI-11798) Update smbj to 0.12.1
[ https://issues.apache.org/jira/browse/NIFI-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-11798. --- Fix Version/s: 2.0.0 1.23.0 Resolution: Fixed > Update smbj to 0.12.1 > - > > Key: NIFI-11798 > URL: https://issues.apache.org/jira/browse/NIFI-11798 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Fix For: 2.0.0, 1.23.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Update smbj to 0.12.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11798) Update smbj to 0.12.1
[ https://issues.apache.org/jira/browse/NIFI-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743251#comment-17743251 ] ASF subversion and git services commented on NIFI-11798: Commit e5362c5eb738363974d02fde872001b873d941ae in nifi's branch refs/heads/main from mr1716 [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e5362c5eb7 ] NIFI-11798 Update smbj to 0.12.1 Signed-off-by: Pierre Villard This closes #7480. > Update smbj to 0.12.1 > - > > Key: NIFI-11798 > URL: https://issues.apache.org/jira/browse/NIFI-11798 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > Update smbj to 0.12.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11798) Update smbj to 0.12.1
[ https://issues.apache.org/jira/browse/NIFI-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743252#comment-17743252 ] ASF subversion and git services commented on NIFI-11798: Commit c5cefebc0ace12e01c70bcc843389b5bb329bbb1 in nifi's branch refs/heads/support/nifi-1.x from mr1716 [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=c5cefebc0a ] NIFI-11798 Update smbj to 0.12.1 Signed-off-by: Pierre Villard This closes #7480. > Update smbj to 0.12.1 > - > > Key: NIFI-11798 > URL: https://issues.apache.org/jira/browse/NIFI-11798 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Update smbj to 0.12.1 -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] asfgit closed pull request #7480: NIFI-11798 Update smbj to 0.12.1
asfgit closed pull request #7480: NIFI-11798 Update smbj to 0.12.1 URL: https://github.com/apache/nifi/pull/7480 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-11812) Stateless hardcodes "nexus" as the only extension option - Requesting NiFi Registry and/or S3 as additional ExtensionClient options
Stephanie Ambrose created NIFI-11812: Summary: Stateless hardcodes "nexus" as the only extension option - Requesting NiFi Registry and/or S3 as additional ExtensionClient options Key: NIFI-11812 URL: https://issues.apache.org/jira/browse/NIFI-11812 Project: Apache NiFi Issue Type: New Feature Reporter: Stephanie Ambrose Right now stateless NiFi has a hard-coded check on extension type of "nexus" on line 302 of StandardStatelessDataflowFactory. It would be nice to configure stateless to pull NARs from additional extensions such as NiFi Registry and/or S3. "Stateful" NiFi allows you to set the NAR provider to NiFi Registry, which can be configured to store to S3. It would be nice to have this feature for stateless as well, rather than being tied only to "nexus". -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11794) RedisStateProvider failing when clearing state with local scope
[ https://issues.apache.org/jira/browse/NIFI-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743231#comment-17743231 ] ASF subversion and git services commented on NIFI-11794: Commit 00a6478c06bc167d1c6fddd6f6ca3c25f192f49c in nifi's branch refs/heads/support/nifi-1.x from Pierre Villard [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=00a6478c06 ] NIFI-11794 - Fix NPE + configure max attempts for Redis State Provider (#7473) Signed-off-by: Otto Fowler This closes #7473. > RedisStateProvider failing when clearing state with local scope > --- > > Key: NIFI-11794 > URL: https://issues.apache.org/jira/browse/NIFI-11794 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.latest, 2.latest > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Trying to configure the NiFi cluster state provider with the Redis > implementation against a Google Cloud Platform Memorystore instance. > I can see that there is no state saved in the redis instance. > When trying to clear state of a processor: > {code:java} > 2023-07-11 17:57:39,007 ERROR [NiFi Web Server-22] > o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: > java.lang.NullPointerException. Returning Internal Server Error response. 
> java.lang.NullPointerException: null > at > org.apache.nifi.redis.state.RedisStateProvider.lambda$replace$1(RedisStateProvider.java:248) > at > org.apache.nifi.redis.state.RedisStateProvider.withConnection(RedisStateProvider.java:313) > at > org.apache.nifi.redis.state.RedisStateProvider.replace(RedisStateProvider.java:207) > at > org.apache.nifi.redis.state.RedisStateProvider.clear(RedisStateProvider.java:263) > at > org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.clear(StandardStateManagerProvider.java:395) > at > org.apache.nifi.controller.state.StandardStateManager.clear(StandardStateManager.java:85) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:58) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:72) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO$$FastClassBySpringCGLIB$$51589743.invoke() > {code} > However this action results in creating the key in the Redis instance: > {code:java} > # redis-cli -h ... -p ... -a ... --tls --cacert /tmp/server-ca.pem > ...:6378> KEYS * > 1) "nifi/components/46025cdf-0189-1000--c11ae372" > ...:6378> GET "nifi/components/46025cdf-0189-1000--c11ae372" > "{\"version\":0,\"encodingVersion\":1,\"stateValues\":{}}" > {code} > The configuration in state-management.xml is looking like: > {code:java} > > redis-provider > Standalone > ...:6378 > ... > true > {code} > CA cert has been added in the NiFi truststore. > Still debugging the code to figure out the issue and will also add additional > logs. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11794) RedisStateProvider failing when clearing state with local scope
[ https://issues.apache.org/jira/browse/NIFI-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-11794: -- Fix Version/s: 2.0.0 1.23.0 (was: 1.latest) (was: 2.latest) Resolution: Fixed Status: Resolved (was: Patch Available) > RedisStateProvider failing when clearing state with local scope > --- > > Key: NIFI-11794 > URL: https://issues.apache.org/jira/browse/NIFI-11794 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 2.0.0, 1.23.0 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Trying to configure the NiFi cluster state provider with the Redis > implementation against a Google Cloud Platform Memorystore instance. > I can see that there is no state saved in the redis instance. > When trying to clear state of a processor: > {code:java} > 2023-07-11 17:57:39,007 ERROR [NiFi Web Server-22] > o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: > java.lang.NullPointerException. Returning Internal Server Error response. 
> java.lang.NullPointerException: null > at > org.apache.nifi.redis.state.RedisStateProvider.lambda$replace$1(RedisStateProvider.java:248) > at > org.apache.nifi.redis.state.RedisStateProvider.withConnection(RedisStateProvider.java:313) > at > org.apache.nifi.redis.state.RedisStateProvider.replace(RedisStateProvider.java:207) > at > org.apache.nifi.redis.state.RedisStateProvider.clear(RedisStateProvider.java:263) > at > org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.clear(StandardStateManagerProvider.java:395) > at > org.apache.nifi.controller.state.StandardStateManager.clear(StandardStateManager.java:85) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:58) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:72) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO$$FastClassBySpringCGLIB$$51589743.invoke() > {code} > However this action results in creating the key in the Redis instance: > {code:java} > # redis-cli -h ... -p ... -a ... --tls --cacert /tmp/server-ca.pem > ...:6378> KEYS * > 1) "nifi/components/46025cdf-0189-1000--c11ae372" > ...:6378> GET "nifi/components/46025cdf-0189-1000--c11ae372" > "{\"version\":0,\"encodingVersion\":1,\"stateValues\":{}}" > {code} > The configuration in state-management.xml is looking like: > {code:java} > > redis-provider > Standalone > ...:6378 > ... > true > {code} > CA cert has been added in the NiFi truststore. > Still debugging the code to figure out the issue and will also add additional > logs. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1589: MINIFICPP-1825 Create Properties at compile time
szaszm commented on code in PR #1589: URL: https://github.com/apache/nifi-minifi-cpp/pull/1589#discussion_r1263887566 ## libminifi/include/core/PropertyDefinition.h: ## @@ -0,0 +1,78 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include +#include +#include +#include + +#include "core/PropertyType.h" +#include "utils/gsl.h" + +namespace org::apache::nifi::minifi::core { + +template +struct PropertyDefinition { + std::string_view name; + std::string_view display_name; + std::string_view description; + bool is_required = false; + std::string_view valid_regex; + std::array allowed_values; + std::array allowed_types; Review Comment: I think we could store these as actual types, instead of type name strings, and defer the string conversion to later. `utils/meta/type_list.h` could optionally be used for one kind of implementation. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-11794) RedisStateProvider failing when clearing state with local scope
[ https://issues.apache.org/jira/browse/NIFI-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743204#comment-17743204 ] ASF subversion and git services commented on NIFI-11794: Commit 8a61d5bdbf60a0f4cec3e9ded1488bee3f495859 in nifi's branch refs/heads/main from Pierre Villard [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=8a61d5bdbf ] NIFI-11794 - Fix NPE + configure max attempts for Redis State Provider (#7473) Signed-off-by: Otto Fowler This closes #7473. > RedisStateProvider failing when clearing state with local scope > --- > > Key: NIFI-11794 > URL: https://issues.apache.org/jira/browse/NIFI-11794 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.latest, 2.latest > > Time Spent: 1h 10m > Remaining Estimate: 0h > > Trying to configure the NiFi cluster state provider with the Redis > implementation against a Google Cloud Platform Memorystore instance. > I can see that there is no state saved in the redis instance. > When trying to clear state of a processor: > {code:java} > 2023-07-11 17:57:39,007 ERROR [NiFi Web Server-22] > o.a.nifi.web.api.config.ThrowableMapper An unexpected error has occurred: > java.lang.NullPointerException. Returning Internal Server Error response. 
> java.lang.NullPointerException: null > at > org.apache.nifi.redis.state.RedisStateProvider.lambda$replace$1(RedisStateProvider.java:248) > at > org.apache.nifi.redis.state.RedisStateProvider.withConnection(RedisStateProvider.java:313) > at > org.apache.nifi.redis.state.RedisStateProvider.replace(RedisStateProvider.java:207) > at > org.apache.nifi.redis.state.RedisStateProvider.clear(RedisStateProvider.java:263) > at > org.apache.nifi.controller.state.manager.StandardStateManagerProvider$1.clear(StandardStateManagerProvider.java:395) > at > org.apache.nifi.controller.state.StandardStateManager.clear(StandardStateManager.java:85) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:58) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO.clearState(StandardComponentStateDAO.java:72) > at > org.apache.nifi.web.dao.impl.StandardComponentStateDAO$$FastClassBySpringCGLIB$$51589743.invoke() > {code} > However this action results in creating the key in the Redis instance: > {code:java} > # redis-cli -h ... -p ... -a ... --tls --cacert /tmp/server-ca.pem > ...:6378> KEYS * > 1) "nifi/components/46025cdf-0189-1000--c11ae372" > ...:6378> GET "nifi/components/46025cdf-0189-1000--c11ae372" > "{\"version\":0,\"encodingVersion\":1,\"stateValues\":{}}" > {code} > The configuration in state-management.xml is looking like: > {code:java} > > redis-provider > Standalone > ...:6378 > ... > true > {code} > CA cert has been added in the NiFi truststore. > Still debugging the code to figure out the issue and will also add additional > logs. > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] ottobackwards merged pull request #7473: NIFI-11794 - Fix NPE + configure max attempts for Redis State Provider
ottobackwards merged PR #7473: URL: https://github.com/apache/nifi/pull/7473 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1589: MINIFICPP-1825 Create Properties at compile time
szaszm commented on code in PR #1589: URL: https://github.com/apache/nifi-minifi-cpp/pull/1589#discussion_r1263795580 ## libminifi/include/core/ProcessContext.h: ## @@ -102,18 +103,44 @@ class ProcessContext : public controller::ControllerServiceLookup, public core:: return value; } + template + std::enable_if_t::value, std::optional> + getProperty(const PropertyReference& property) const { +T value; +try { + if (!getProperty(property.name, value)) return std::nullopt; +} catch (const utils::internal::ValueException&) { + return std::nullopt; +} +return value; + } + + template + requires(!std::is_convertible_v && !std::is_convertible_v&>) + bool getProperty(std::string_view name, T &value) const { Review Comment: ```suggestion template concept NotAFlowFile = !std::convertible_to && !std::convertible_to&>; bool getProperty(std::string_view name, NotAFlowFile auto& value) const { ``` ## libminifi/include/core/ProcessContext.h: ## @@ -102,18 +103,44 @@ class ProcessContext : public controller::ControllerServiceLookup, public core:: return value; } + template + std::enable_if_t::value, std::optional> + getProperty(const PropertyReference& property) const { Review Comment: Let's convert this to concept constraints, since we're touching it, and the new one below is also written like this. 
```suggestion template std::optional getProperty(const PropertyReference& property) const { ``` ## libminifi/include/core/ProcessContext.h: ## @@ -102,18 +103,44 @@ class ProcessContext : public controller::ControllerServiceLookup, public core:: return value; } + template + std::enable_if_t::value, std::optional> + getProperty(const PropertyReference& property) const { +T value; +try { + if (!getProperty(property.name, value)) return std::nullopt; +} catch (const utils::internal::ValueException&) { + return std::nullopt; +} +return value; + } + + template + requires(!std::is_convertible_v && !std::is_convertible_v&>) + bool getProperty(std::string_view name, T &value) const { +return getPropertyImp::type>(std::string{name}, value); + } + template - std::enable_if_t && !std::is_convertible_v&>, - bool> getProperty(const std::string &name, T &value) const { -return getPropertyImp::type>(name, value); + requires(!std::is_convertible_v && !std::is_convertible_v&>) + bool getProperty(const PropertyReference& property, T &value) const { Review Comment: ```suggestion bool getProperty(const PropertyReference& property, NotAFlowFile auto& value) const { ``` ## libminifi/src/core/Property.cpp: ## @@ -90,4 +94,41 @@ std::vector> Property::getExclusiveOfPropert return exclusive_of_properties_; } +namespace { +std::vector createPropertyValues(gsl::span values, const core::PropertyParser& property_parser) { + return ranges::views::transform(values, [&property_parser](const auto& value) { +return property_parser.parse(value); + }) | ranges::to; +} + +inline std::vector createStrings(gsl::span string_views) { + return ranges::views::transform(string_views, [](const auto& string_view) { return std::string{string_view}; }) + | ranges::to; +} + +inline std::vector> createStrings(gsl::span> pairs_of_string_views) { + return ranges::views::transform(pairs_of_string_views, [](const auto& pair_of_string_views) { return std::pair(pair_of_string_views); }) + | ranges::to; +} +} // 
namespace + +Property::Property(const PropertyReference& compile_time_property) +: name_(compile_time_property.name), + description_(compile_time_property.description), + is_required_(compile_time_property.is_required), + valid_regex_(compile_time_property.valid_regex), Review Comment: Is this `valid_regex_` ever used? The builder doesn't seem to set it. If it's always empty, we might as well drop it. ## libminifi/include/core/PropertyType.h: ## @@ -90,12 +47,19 @@ class PropertyValidator { [[nodiscard]] virtual ValidationResult validate(const std::string &subject, const std::shared_ptr &input) const = 0; [[nodiscard]] virtual ValidationResult validate(const std::string &subject, const std::string &input) const = 0; +}; + +class PropertyType : public PropertyParser, public PropertyValidator { Review Comment: I think composition would be better design here, than multiple inheritance. Most property types have names changed to end with "_TYPE", except `UnsignedIntPropertyType`. This particular case may be an oversight. But the pattern ultimately overrides the `getName` method of `PropertyValidator`, so it should retuirn a validator name (like the old version), but a method called `getName` in `PropertyType` has no business returning a validator name. I'm also not sure how the type name gets converted back t
[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
lordgamez commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1263847178 ## METRICS.md: ## @@ -108,6 +108,15 @@ An agent identifier should also be defined to identify which agent the metric is nifi.metrics.publisher.agent.identifier=Agent1 +### Configure Prometheus metrics publisher with SSL + +The communication between MiNiFi and the Prometheus server can be encrypted using SSL. This can be achieved by adding the SSL certificate path (a single file containing both the certificate and the SSL key) and optionally adding the root CA path when using a self signed certificate to the minifi.properties file. Here is an example with the SSL properties: Review Comment: Updated according our discussion in 45f72a651e07636f89cb177f8536a78cf3fe67d3 ## METRICS.md: ## @@ -108,6 +108,15 @@ An agent identifier should also be defined to identify which agent the metric is nifi.metrics.publisher.agent.identifier=Agent1 +### Configure Prometheus metrics publisher with SSL + +The communication between MiNiFi and the Prometheus server can be encrypted using SSL. This can be achieved by adding the SSL certificate path (a single file containing both the certificate and the SSL key) and optionally adding the root CA path when using a self signed certificate to the minifi.properties file. Here is an example with the SSL properties: Review Comment: Updated according to our discussion in 45f72a651e07636f89cb177f8536a78cf3fe67d3 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
lordgamez commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1263846980 ## extensions/prometheus/PrometheusExposerWrapper.cpp: ## @@ -20,9 +20,27 @@ namespace org::apache::nifi::minifi::extensions::prometheus { -PrometheusExposerWrapper::PrometheusExposerWrapper(uint32_t port) -: exposer_(std::to_string(port)) { - logger_->log_info("Started Prometheus metrics publisher on port %" PRIu32, port); +PrometheusExposerWrapper::PrometheusExposerWrapper(const PrometheusExposerConfig& config) +: exposer_(parseExposerConfig(config)) { + logger_->log_info("Started Prometheus metrics publisher on port %" PRIu32, config.port); Review Comment: Updated in 45f72a651e07636f89cb177f8536a78cf3fe67d3 ## docker/test/integration/cluster/containers/PrometheusContainer.py: ## @@ -16,13 +16,53 @@ import os import tempfile import docker.types + from .Container import Container +from OpenSSL import crypto +from ssl_utils.SSL_cert_utils import make_cert_without_extended_usage class PrometheusContainer(Container): -def __init__(self, feature_context, name, vols, network, image_store, command=None): -super().__init__(feature_context, name, 'prometheus', vols, network, image_store, command) -prometheus_yml_content = """ +def __init__(self, feature_context, name, vols, network, image_store, command=None, ssl=False): +engine = "prometheus-ssl" if ssl else "prometheus" +super().__init__(feature_context, name, engine, vols, network, image_store, command) +self.ssl = ssl +if ssl: +prometheus_cert, prometheus_key = make_cert_without_extended_usage(f"prometheus-{feature_context.id}", feature_context.root_ca_cert, feature_context.root_ca_key) + +self.root_ca_file = tempfile.NamedTemporaryFile(delete=False) + self.root_ca_file.write(crypto.dump_certificate(type=crypto.FILETYPE_PEM, cert=feature_context.root_ca_cert)) +self.root_ca_file.close() +os.chmod(self.root_ca_file.name, 0o644) + +self.prometheus_cert_file = tempfile.NamedTemporaryFile(delete=False) 
+ self.prometheus_cert_file.write(crypto.dump_certificate(type=crypto.FILETYPE_PEM, cert=prometheus_cert)) +self.prometheus_cert_file.close() +os.chmod(self.prometheus_cert_file.name, 0o644) + +self.prometheus_key_file = tempfile.NamedTemporaryFile(delete=False) + self.prometheus_key_file.write(crypto.dump_privatekey(type=crypto.FILETYPE_PEM, pkey=prometheus_key)) +self.prometheus_key_file.close() +os.chmod(self.prometheus_key_file.name, 0o644) + +prometheus_yml_content = """ +global: + scrape_interval: 2s + evaluation_interval: 15s +scrape_configs: + - job_name: "minifi" +static_configs: + - targets: ["minifi-cpp-flow-{feature_id}:9936"] +scheme: https +tls_config: + ca_file: /etc/prometheus/certs/root-ca.pem +""".format(feature_id=self.feature_context.id) +self.yaml_file = tempfile.NamedTemporaryFile(delete=False) +self.yaml_file.write(prometheus_yml_content.encode()) +self.yaml_file.close() +os.chmod(self.yaml_file.name, 0o644) +else: +prometheus_yml_content = """ Review Comment: Updated in 45f72a651e07636f89cb177f8536a78cf3fe67d3 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
lordgamez commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1263819486

## METRICS.md:

@@ -108,6 +108,15 @@ An agent identifier should also be defined to identify which agent the metric is

 nifi.metrics.publisher.agent.identifier=Agent1

+### Configure Prometheus metrics publisher with SSL
+
+The communication between MiNiFi and the Prometheus server can be encrypted using SSL. This can be achieved by adding the SSL certificate path (a single file containing both the certificate and the SSL key) and optionally adding the root CA path when using a self signed certificate to the minifi.properties file. Here is an example with the SSL properties:

Review Comment: In this scenario MiNiFi publishes the metrics and acts as a server (using a CivetWeb server in the implementation) that the Prometheus server can scrape for metrics. So in this case it is MiNiFi that needs the self-signed certificate, not the Prometheus server.
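For reference, the SSL publisher configuration described in the quoted METRICS.md section would look roughly like the following in minifi.properties. This is only an illustrative sketch; the exact property keys and file paths below are assumptions, not quoted from the PR:

```properties
# Certificate file containing both the certificate and the private key (illustrative path)
nifi.metrics.publisher.PrometheusMetricsPublisher.certificate=/opt/minifi/certs/prometheus-publisher.pem
# Optional root CA path, needed when a self-signed certificate is used (illustrative path)
nifi.metrics.publisher.PrometheusMetricsPublisher.ca.certificate=/opt/minifi/certs/root-ca.pem
```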
[GitHub] [nifi] exceptionfactory commented on pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
exceptionfactory commented on PR #7458: URL: https://github.com/apache/nifi/pull/7458#issuecomment-1635947161 > During testing, I found it a bit cumbersome that the `File Name` and `Blob Name` name properties are on the processor while the source path is in the service. In my mind, they are logically connected, especially in this case. Purely from the user's point of view, I think it would be better to put these fields into the FileResourceService, though I understand it is not possible as it resides in a separate module. Tbh, I don't have a good idea to solve it atm, but I wanted to point it out. Thanks for the feedback and testing @nandorsoma! I pushed an update that removes two utility methods and instead uses the functional `Optional.map().orElse()` approach. This seems more natural under the circumstances. This optional approach to file transfer is somewhat outside the bounds of normal processing, given that the `StandardFileResourceService` allows reading directly from the file system, so I understand how the property organization may not be the most intuitive. The correct configuration does require aligning the Service `File Path` with the expected input values on a given FlowFile, but that seems reasonable given that standard usage patterns will not make use of this feature. There could be room for improvement in subsequent iterations, but at least the current approach avoids introducing new permission restrictions on the existing Processors. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
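The `Optional.map().orElse()` refactoring mentioned above can be illustrated with plain JDK types. This is only a sketch of the idiom, not the actual NiFi code; the method and argument names are illustrative:

```java
import java.util.Optional;

public class OptionalFallbackSketch {
    // Instead of a utility method with an explicit if/else null check, map the
    // optional value and fall back to a default when it is absent.
    static String resolve(String configured, String fallback) {
        return Optional.ofNullable(configured)
                .map(String::trim)  // applied only when a value is present
                .orElse(fallback);  // otherwise the fallback is used
    }

    public static void main(String[] args) {
        System.out.println(resolve("  file.txt  ", "flowfile-content")); // file.txt
        System.out.println(resolve(null, "flowfile-content"));           // flowfile-content
    }
}
```

The same shape applies to choosing between a resolved FileResource and the FlowFile content: the branch logic collapses into a single expression.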
[jira] [Commented] (NIFI-9784) MergeContent Process Ignores Case of the Correlation Attribute Resulting in Great Hilarity
[ https://issues.apache.org/jira/browse/NIFI-9784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743182#comment-17743182 ] Christopher Landwehr commented on NIFI-9784: The Correlation Attribute Name property of the MergeContent processor expects the name of the attribute that the user/admin chooses for correlating FlowFiles during bin-packing. Using the string `${foo}` as the attribute name tells the processor to merge on the literal string `${foo}`. Since none of the FlowFiles are assigned an attribute named `${foo}`, there is no attribute to correlate on, and the bin is completed once the minimum number of FlowFiles has been reached. For this logic to work with your uploaded example, you would need to change the Correlation Attribute Name from `${foo}` to `foo`. I recommend that this issue be closed. > MergeContent Process Ignores Case of the Correlation Attribute Resulting in > Great Hilarity > -- > > Key: NIFI-9784 > URL: https://issues.apache.org/jira/browse/NIFI-9784 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.15.3 > Environment: Linux >Reporter: Matthew >Priority: Major > Attachments: MergeContentProcessorTestCase.xml, > image-2022-03-09-22-26-57-072.png > > > The MergeContent processor ignores the case of the correlation attribute when > binary packing. > This results in the grouping together flow files with dissimilar correlation > attributes. > Subsequently the correlation attribute is dropped in the merged flowfile as > they are not actually equal. > Interestingly, the correlation attribute is dropped regardless of the > configuration of the Attribute Strategy. I suspect this is also an artifact > of how case is handled. > I'm not sure if the priority is Major as there is an obvious workaround > (force case). 
> In any case (waka waka), I've attached a test... case. > !image-2022-03-09-22-26-57-072.png|width=905,height=829! -- This message was sent by Atlassian Jira (v8.20.10#820010)
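The lookup behaviour described in the comment above can be sketched with a plain attribute map. This is a simplified illustration of the explanation given in the comment, not MergeContent's actual implementation; the attribute names are taken from the discussion:

```java
import java.util.Map;

public class CorrelationKeySketch {
    // Attributes carried by a FlowFile in the attached example (illustrative values).
    static final Map<String, String> ATTRIBUTES = Map.of("foo", "groupA");

    // Per the comment, the configured Correlation Attribute Name string is used
    // directly as the attribute name for the lookup.
    static String correlationValue(String configuredName) {
        return ATTRIBUTES.get(configuredName);
    }

    public static void main(String[] args) {
        // The literal string "${foo}" matches no attribute, so there is nothing to correlate on.
        System.out.println(correlationValue("${foo}")); // null
        // The plain name "foo" resolves the intended correlation value.
        System.out.println(correlationValue("foo"));    // groupA
    }
}
```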
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
exceptionfactory commented on code in PR #7458: URL: https://github.com/apache/nifi/pull/7458#discussion_r1263767571 ## nifi-nar-bundles/nifi-extension-utils/nifi-resource-transfer/src/main/java/org/apache/nifi/processors/transfer/ResourceTransferUtils.java: ## @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.transfer; + +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.fileresource.service.api.FileResource; +import org.apache.nifi.fileresource.service.api.FileResourceService; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.exception.ProcessException; + +import java.io.InputStream; +import java.util.Optional; + +import static org.apache.nifi.processors.transfer.ResourceTransferProperties.FILE_RESOURCE_SERVICE; + +public final class ResourceTransferUtils { + +private ResourceTransferUtils() {} + +/** + * Get File Resource from File Resource Service based on provided Source otherwise return empty + * + * @param resourceTransferSource type of the data upload + * @param context process context with properties + * @param flowFile FlowFile with attributes to use in expression language + * @return Optional FileResource retrieved from FileResourceService if Source is File Resource Service, otherwise empty + * @throws ProcessException Thrown if Source is File Resource but FileResourceService is not provided in the context + */ +public static Optional getFileResource(final ResourceTransferSource resourceTransferSource, final ProcessContext context, final FlowFile flowFile) { +final Optional resource; + +if (resourceTransferSource == ResourceTransferSource.FILE_RESOURCE_SERVICE) { +final PropertyValue property = context.getProperty(FILE_RESOURCE_SERVICE); +if (property == null || !property.isSet()) { +throw new ProcessException("File Resource Service required but not configured"); +} +final FileResourceService fileResourceService = property.asControllerService(FileResourceService.class); +final FileResource fileResource = fileResourceService.getFileResource(flowFile.getAttributes()); +resource = Optional.ofNullable(fileResource); +} else { +resource = Optional.empty(); +} + +return 
resource; +} + +/** + * Returns the input stream of the FileResource if it is provided (not null). Otherwise, returns the input stream of the FlowFile. + * + * @param session the session to read the FlowFile + * @param flowFile the FlowFile which is read when no FileResource is provided + * @param fileResource the FileResource + * @return input stream of the FileResource or the FlowFile + */ +public static InputStream getTransferInputStream(final ProcessSession session, final FlowFile flowFile, final FileResource fileResource) { Review Comment: I attempted to minimize changes, but with the adjustment to return `Optional`, this particular method probably needs to be adjusted. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
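The fallback behaviour of `getTransferInputStream` discussed in this review can be sketched with simplified stand-ins for the NiFi types. The `FileResource` record below is an assumption for illustration only, not the real `org.apache.nifi.fileresource.service.api` API:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.Optional;

public class TransferStreamSketch {
    // Simplified stand-in for the NiFi FileResource service API type (assumption).
    record FileResource(InputStream inputStream) {}

    // Prefer the FileResource stream when one was resolved; otherwise fall back
    // to the FlowFile content stream, mirroring the utility method in the diff.
    static InputStream getTransferInputStream(InputStream flowFileStream, FileResource fileResource) {
        return Optional.ofNullable(fileResource)
                .map(FileResource::inputStream)
                .orElse(flowFileStream);
    }

    public static void main(String[] args) {
        InputStream flowFileStream = new ByteArrayInputStream("flowfile".getBytes());
        InputStream localFileStream = new ByteArrayInputStream("local".getBytes());

        // No FileResource resolved: the FlowFile content stream is used.
        System.out.println(getTransferInputStream(flowFileStream, null) == flowFileStream);          // true
        // FileResource present: the local file stream is used instead.
        System.out.println(getTransferInputStream(flowFileStream,
                new FileResource(localFileStream)) == localFileStream);                              // true
    }
}
```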
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
exceptionfactory commented on code in PR #7458: URL: https://github.com/apache/nifi/pull/7458#discussion_r1263766777 ## nifi-nar-bundles/nifi-extension-utils/nifi-resource-transfer/src/main/java/org/apache/nifi/processors/transfer/ResourceTransferUtils.java: ## @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.transfer; + +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.fileresource.service.api.FileResource; +import org.apache.nifi.fileresource.service.api.FileResourceService; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.exception.ProcessException; + +import java.io.InputStream; +import java.util.Optional; + +import static org.apache.nifi.processors.transfer.ResourceTransferProperties.FILE_RESOURCE_SERVICE; + +public final class ResourceTransferUtils { + +private ResourceTransferUtils() {} + +/** + * Get File Resource from File Resource Service based on provided Source otherwise return empty + * + * @param resourceTransferSource type of the data upload + * @param context process context with properties + * @param flowFile FlowFile with attributes to use in expression language + * @return Optional FileResource retrieved from FileResourceService if Source is File Resource Service, otherwise empty + * @throws ProcessException Thrown if Source is File Resource but FileResourceService is not provided in the context + */ +public static Optional getFileResource(final ResourceTransferSource resourceTransferSource, final ProcessContext context, final FlowFile flowFile) { Review Comment: With this as a public utility method, I wanted to make it clear that it may not return a `FileResource`, so I changed it to use the `Optional` wrapper. I considered adjusting the calling code, and I will take another look at that in light of your comments. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] nandorsoma commented on a diff in pull request #7458: NIFI-11758: Added FileResourceService and used it in PutAzure*Storage…
nandorsoma commented on code in PR #7458: URL: https://github.com/apache/nifi/pull/7458#discussion_r1263662880 ## nifi-nar-bundles/nifi-extension-utils/nifi-resource-transfer/src/main/java/org/apache/nifi/processors/transfer/ResourceTransferUtils.java: ## @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.processors.transfer; + +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.fileresource.service.api.FileResource; +import org.apache.nifi.fileresource.service.api.FileResourceService; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.exception.ProcessException; + +import java.io.InputStream; +import java.util.Optional; + +import static org.apache.nifi.processors.transfer.ResourceTransferProperties.FILE_RESOURCE_SERVICE; + +public final class ResourceTransferUtils { + +private ResourceTransferUtils() {} + +/** + * Get File Resource from File Resource Service based on provided Source otherwise return empty + * + * @param resourceTransferSource type of the data upload + * @param context process context with properties + * @param flowFile FlowFile with attributes to use in expression language + * @return Optional FileResource retrieved from FileResourceService if Source is File Resource Service, otherwise empty + * @throws ProcessException Thrown if Source is File Resource but FileResourceService is not provided in the context + */ +public static Optional getFileResource(final ResourceTransferSource resourceTransferSource, final ProcessContext context, final FlowFile flowFile) { Review Comment: As I see in both cases you unwrap the return value with `orElse(null);`. Wouldn't it be cleaner to simply return the FileResource when it exists, otherwise null? ## nifi-nar-bundles/nifi-extension-utils/nifi-resource-transfer/src/main/java/org/apache/nifi/processors/transfer/ResourceTransferUtils.java: ## @@ -0,0 +1,85 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. 
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.processors.transfer; + +import org.apache.nifi.components.PropertyValue; +import org.apache.nifi.fileresource.service.api.FileResource; +import org.apache.nifi.fileresource.service.api.FileResourceService; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.exception.ProcessException; + +import java.io.InputStream; +import java.util.Optional; + +import static org.apache.nifi.processors.transfer.ResourceTransferProperties.FILE_RESOURCE_SERVICE; + +public final class ResourceTransferUtils { + +private ResourceTransferUtils() {} + +/** + * Get File Resource from File Resource Service based on provided Source otherwise return empty + * + * @param resourceTransferSource type of the data upload + * @param context process context with properties + * @param flowFile FlowFile with attributes to use in expression language + * @return Optional FileResource retrieved from FileResourceService if Source is File Resource Service, otherwise empty + * @throws ProcessException Thrown if Source is File Resource but FileResourceService is not provided in the context + */ +public static Option
[jira] [Updated] (MINIFICPP-2160) Change clear-actions-cache.yml from cron to workflow_run
[ https://issues.apache.org/jira/browse/MINIFICPP-2160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martin Zink updated MINIFICPP-2160: --- Status: Patch Available (was: Open) https://github.com/apache/nifi-minifi-cpp/pull/1605 > Change clear-actions-cache.yml from cron to workflow_run > > > Key: MINIFICPP-2160 > URL: https://issues.apache.org/jira/browse/MINIFICPP-2160 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Martin Zink >Assignee: Martin Zink >Priority: Trivial > Time Spent: 10m > Remaining Estimate: 0h > > Instead of every 30 minutes Github Actions Cache Eviction could run after > every workflow run
[GitHub] [nifi] mr1716 opened a new pull request, #7480: NIFI-11798 Update smbj to 0.12.1
mr1716 opened a new pull request, #7480: URL: https://github.com/apache/nifi/pull/7480 # Summary [NIFI-11798](https://issues.apache.org/jira/browse/NIFI-11798) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI-11798) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-11798) Update smbj to 0.12.1
[ https://issues.apache.org/jira/browse/NIFI-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike R updated NIFI-11798: -- Summary: Update smbj to 0.12.1 (was: Update smbj to 0.12.0) > Update smbj to 0.12.1 > - > > Key: NIFI-11798 > URL: https://issues.apache.org/jira/browse/NIFI-11798 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Assignee: Mike R >Priority: Major > Time Spent: 20m > Remaining Estimate: 0h > > Update smbj to 0.12.0
[jira] [Resolved] (NIFI-11806) Update woodstox-core to 6.5.1
[ https://issues.apache.org/jira/browse/NIFI-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike R resolved NIFI-11806. --- Resolution: Duplicate > Update woodstox-core to 6.5.1 > - > > Key: NIFI-11806 > URL: https://issues.apache.org/jira/browse/NIFI-11806 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Priority: Major > > Update woodstox-core to 6.5.1
[jira] [Closed] (NIFI-11806) Update woodstox-core to 6.5.1
[ https://issues.apache.org/jira/browse/NIFI-11806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike R closed NIFI-11806. - Already done > Update woodstox-core to 6.5.1 > - > > Key: NIFI-11806 > URL: https://issues.apache.org/jira/browse/NIFI-11806 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Priority: Major > > Update woodstox-core to 6.5.1
[jira] [Closed] (NIFI-11805) Update jsoup to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike R closed NIFI-11805. - Already done > Update jsoup to 1.16.1 > -- > > Key: NIFI-11805 > URL: https://issues.apache.org/jira/browse/NIFI-11805 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Priority: Major > > Update jsoup to 1.16.1
[jira] [Resolved] (NIFI-11805) Update jsoup to 1.16.1
[ https://issues.apache.org/jira/browse/NIFI-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike R resolved NIFI-11805. --- Resolution: Duplicate > Update jsoup to 1.16.1 > -- > > Key: NIFI-11805 > URL: https://issues.apache.org/jira/browse/NIFI-11805 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.22.0 >Reporter: Mike R >Priority: Major > > Update jsoup to 1.16.1
[jira] [Updated] (NIFI-11811) ListS3 Processor doesn't back pressure
[ https://issues.apache.org/jira/browse/NIFI-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Hermon updated NIFI-11811: - Description: Dear NiFi team, I have built a data flow for syncing two S3 buckets on two separate sites. I am using NiFi because the throughput is much higher than any other tool (aws-s3-cli and RClone) The flow is pretty straightforward: ListS3 on-site A > FetchS3 from site A -> FetchS3 from site B (check if file already exists)> (on failure) PutS3Object in site B Sometimes site B throws 504 timeout exceptions which starts the back pressure mechanism. Everything back pressures perfectly fine but not the ListS3 processor output queue, I have to manually terminate it in order to stop listing new files, even trying to stop it fails. I have tried limiting the size (even though it is 0 bytes flow files since the flow file only contains attributes) and also limiting the number of objects. p.s the bucket has 5 Billion objects was: Dear NiFi team, I have built a data flow for syncing two S3 buckets on two separate sites. I am using NiFi because the throughput is much higher than any other tool (aws-s3-cli and RClone) The flow is pretty straightforward: ListS3 on-site A > FetchS3 from site A -> FetchS3 from site B (check if file already exists)> (on failure) PutS3Object in site B Sometimes site B throws 504 timeout exceptions which starts the back pressure mechanism. Everything back pressures perfectly fine but not the ListS3 processor output queue, I have to manually terminate it in order to stop listing new files, even trying to stop it fails. I have tried limiting the size (even though it is 0 bytes flow files since the flow file only contains attributes) and also limiting the number of objects. 
> ListS3 Processor doesn't back pressure > -- > > Key: NIFI-11811 > URL: https://issues.apache.org/jira/browse/NIFI-11811 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.16.2 >Reporter: Daniel Hermon >Priority: Critical > > Dear NiFi team, > I have built a data flow for syncing two S3 buckets on two separate sites. > I am using NiFi because the throughput is much higher than any other tool > (aws-s3-cli and RClone) > The flow is pretty straightforward: > ListS3 on-site A > FetchS3 from site A -> FetchS3 from site B (check if file > already exists)> (on failure) PutS3Object in site B > Sometimes site B throws 504 timeout exceptions which starts the back pressure > mechanism. > Everything back pressures perfectly fine but not the ListS3 processor output > queue, I have to manually terminate it in order to stop listing new files, > even trying to stop it fails. > I have tried limiting the size (even though it is 0 bytes flow files since > the flow file only contains attributes) and also limiting the number of > objects. > > p.s the bucket has 5 Billion objects -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] mr1716 closed pull request #7472: NIFI-11798 Update smbj to 0.12.0
mr1716 closed pull request #7472: NIFI-11798 Update smbj to 0.12.0 URL: https://github.com/apache/nifi/pull/7472
[jira] [Created] (NIFI-11811) ListS3 Processor doesn't back pressure
Daniel Hermon created NIFI-11811: Summary: ListS3 Processor doesn't back pressure Key: NIFI-11811 URL: https://issues.apache.org/jira/browse/NIFI-11811 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.16.2 Reporter: Daniel Hermon Dear NiFi team, I have built a data flow for syncing two S3 buckets on two separate sites. I am using NiFi because the throughput is much higher than any other tool (aws-s3-cli and RClone) The flow is pretty straightforward: ListS3 on-site A -> FetchS3 from site A -> FetchS3 from site B (check if file already exists)-> (on failure) PutS3Object in site B Sometimes site B throws 504 timeout exceptions which starts the back pressure mechanism. Everything back pressures perfectly fine but not the ListS3 processor output queue, I have to manually terminate it in order to stop listing new files, even trying to stop it fails. I have tried limiting the size (even though it is 0 bytes flow files since the flow file only contains attributes) and also limiting the number of objects.
[jira] [Updated] (NIFI-11811) ListS3 Processor doesn't back pressure
[ https://issues.apache.org/jira/browse/NIFI-11811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Hermon updated NIFI-11811: - Description: Dear NiFi team, I have built a data flow for syncing two S3 buckets on two separate sites. I am using NiFi because the throughput is much higher than any other tool (aws-s3-cli and RClone) The flow is pretty straightforward: ListS3 on-site A > FetchS3 from site A -> FetchS3 from site B (check if file already exists)> (on failure) PutS3Object in site B Sometimes site B throws 504 timeout exceptions which starts the back pressure mechanism. Everything back pressures perfectly fine but not the ListS3 processor output queue, I have to manually terminate it in order to stop listing new files, even trying to stop it fails. I have tried limiting the size (even though it is 0 bytes flow files since the flow file only contains attributes) and also limiting the number of objects. was: Dear NiFi team, I have built a data flow for syncing two S3 buckets on two separate sites. I am using NiFi because the throughput is much higher than any other tool (aws-s3-cli and RClone) The flow is pretty straightforward: ListS3 on-site A -> FetchS3 from site A -> FetchS3 from site B (check if file already exists)-> (on failure) PutS3Object in site B Sometimes site B throws 504 timeout exceptions which starts the back pressure mechanism. Everything back pressures perfectly fine but not the ListS3 processor output queue, I have to manually terminate it in order to stop listing new files, even trying to stop it fails. I have tried limiting the size (even though it is 0 bytes flow files since the flow file only contains attributes) and also limiting the number of objects. 
> ListS3 Processor doesn't back pressure > -- > > Key: NIFI-11811 > URL: https://issues.apache.org/jira/browse/NIFI-11811 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.16.2 >Reporter: Daniel Hermon >Priority: Critical > > Dear NiFi team, > I have built a data flow for syncing two S3 buckets on two separate sites. > I am using NiFi because the throughput is much higher than any other tool > (aws-s3-cli and RClone) > The flow is pretty straightforward: > ListS3 on-site A > FetchS3 from site A -> FetchS3 from site B (check if file > already exists)> (on failure) PutS3Object in site B > Sometimes site B throws 504 timeout exceptions which starts the back pressure > mechanism. > Everything back pressures perfectly fine but not the ListS3 processor output > queue, I have to manually terminate it in order to stop listing new files, > even trying to stop it fails. > I have tried limiting the size (even though it is 0 bytes flow files since > the flow file only contains attributes) and also limiting the number of > objects. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] pvillard31 commented on pull request #7473: NIFI-11794 - Fix NPE + configure max attempts for Redis State Provider
pvillard31 commented on PR #7473: URL: https://github.com/apache/nifi/pull/7473#issuecomment-1635507641 That would not cause the null error. EXEC returns null when the transaction has been aborted, so you'd need to simulate one node/thread starting a transaction with a WATCH on the key, then another node/thread completing a transaction on the same key, so that the first one receives a null. The logs I shared in the JIRA comment are coming from a cluster that I patched with this fix.
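The aborted-transaction behaviour described above can be reproduced with two redis-cli sessions; when the WATCHed key is modified by another client before EXEC runs, EXEC returns (nil) to the watching client. The key name below is illustrative:

```
# client A: start an optimistic transaction on the state key
127.0.0.1:6379> WATCH nifi-state-key
OK
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> SET nifi-state-key new-value
QUEUED

# meanwhile, client B updates the same key
127.0.0.1:6379> SET nifi-state-key concurrent-update
OK

# back on client A: EXEC observes that the WATCHed key changed and aborts
127.0.0.1:6379> EXEC
(nil)
```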
[jira] [Commented] (NIFI-11810) Remove unused JSTL and EL libraries from Standard Bundle
[ https://issues.apache.org/jira/browse/NIFI-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743048#comment-17743048 ] ASF subversion and git services commented on NIFI-11810: Commit b1568bcf5735817ac1f737d82d870a399dd48645 in nifi's branch refs/heads/support/nifi-1.x from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b1568bcf57 ] NIFI-11810 Removed unused JSTL and EL API libraries Signed-off-by: Pierre Villard This closes #7479. > Remove unused JSTL and EL libraries from Standard Bundle > > > Key: NIFI-11810 > URL: https://issues.apache.org/jira/browse/NIFI-11810 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 20m > Remaining Estimate: 0h > > The {{nifi-standard-bundle}} includes dependencies for {{javax.el-api}} and > {{javax.servlet.jsp.jstl-api}}. The {{nifi-jolt-transform-json-ui}} > references these libraries, but the {{index.jsp}} page does not use JSP > Expression Language or JSTL, so these libraries can be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11810) Remove unused JSTL and EL libraries from Standard Bundle
[ https://issues.apache.org/jira/browse/NIFI-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-11810: -- Fix Version/s: 2.0.0 1.23.0 (was: 1.latest) (was: 2.latest) Resolution: Fixed Status: Resolved (was: Patch Available) > Remove unused JSTL and EL libraries from Standard Bundle > > > Key: NIFI-11810 > URL: https://issues.apache.org/jira/browse/NIFI-11810 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 2.0.0, 1.23.0 > > Time Spent: 20m > Remaining Estimate: 0h > > The {{nifi-standard-bundle}} includes dependencies for {{javax.el-api}} and > {{javax.servlet.jsp.jstl-api}}. The {{nifi-jolt-transform-json-ui}} > references these libraries, but the {{index.jsp}} page does not use JSP > Expression Language or JSTL, so these libraries can be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11810) Remove unused JSTL and EL libraries from Standard Bundle
[ https://issues.apache.org/jira/browse/NIFI-11810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743047#comment-17743047 ] ASF subversion and git services commented on NIFI-11810: Commit e812951c57edc48c51ec6d116a46a9d2adff48a9 in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=e812951c57 ] NIFI-11810 Removed unused JSTL and EL API libraries Signed-off-by: Pierre Villard This closes #7479. > Remove unused JSTL and EL libraries from Standard Bundle > > > Key: NIFI-11810 > URL: https://issues.apache.org/jira/browse/NIFI-11810 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > The {{nifi-standard-bundle}} includes dependencies for {{javax.el-api}} and > {{javax.servlet.jsp.jstl-api}}. The {{nifi-jolt-transform-json-ui}} > references these libraries, but the {{index.jsp}} page does not use JSP > Expression Language or JSTL, so these libraries can be removed. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] asfgit closed pull request #7479: NIFI-11810 Remove unused JSTL and EL API libraries
asfgit closed pull request #7479: NIFI-11810 Remove unused JSTL and EL API libraries URL: https://github.com/apache/nifi/pull/7479
[jira] [Commented] (NIFI-11809) Upgrade Maven to 3.9.3
[ https://issues.apache.org/jira/browse/NIFI-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743046#comment-17743046 ] ASF subversion and git services commented on NIFI-11809: Commit b6f84fb6e22e2fd753149473f143015c9077b1c6 in nifi's branch refs/heads/support/nifi-1.x from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=b6f84fb6e2 ] NIFI-11809 Upgraded wrapped Maven from 3.9.2 to 3.9.3 Signed-off-by: Pierre Villard This closes #7478. > Upgrade Maven to 3.9.3 > -- > > Key: NIFI-11809 > URL: https://issues.apache.org/jira/browse/NIFI-11809 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 20m > Remaining Estimate: 0h > > Apache Maven [3.9.3|https://maven.apache.org/docs/3.9.3/release-notes.html] > includes several updates, including improvements for file locking. The Maven > Wrapper referenced version should be upgraded from 3.9.2 to 3.9.3. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-11809) Upgrade Maven to 3.9.3
[ https://issues.apache.org/jira/browse/NIFI-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-11809: -- Fix Version/s: 2.0.0 1.23.0 (was: 1.latest) (was: 2.latest) Resolution: Fixed Status: Resolved (was: Patch Available) > Upgrade Maven to 3.9.3 > -- > > Key: NIFI-11809 > URL: https://issues.apache.org/jira/browse/NIFI-11809 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 2.0.0, 1.23.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Apache Maven [3.9.3|https://maven.apache.org/docs/3.9.3/release-notes.html] > includes several updates, including improvements for file locking. The Maven > Wrapper referenced version should be upgraded from 3.9.2 to 3.9.3. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-11809) Upgrade Maven to 3.9.3
[ https://issues.apache.org/jira/browse/NIFI-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17743045#comment-17743045 ] ASF subversion and git services commented on NIFI-11809: Commit f82e903eea8047b729a24ff4cfc1782478806d5f in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=f82e903eea ] NIFI-11809 Upgraded wrapped Maven from 3.9.2 to 3.9.3 Signed-off-by: Pierre Villard This closes #7478. > Upgrade Maven to 3.9.3 > -- > > Key: NIFI-11809 > URL: https://issues.apache.org/jira/browse/NIFI-11809 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.latest, 2.latest > > Time Spent: 10m > Remaining Estimate: 0h > > Apache Maven [3.9.3|https://maven.apache.org/docs/3.9.3/release-notes.html] > includes several updates, including improvements for file locking. The Maven > Wrapper referenced version should be upgraded from 3.9.2 to 3.9.3. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1587: MINIFICPP-2135 Add SSL support for Prometheus reporter
fgerlits commented on code in PR #1587: URL: https://github.com/apache/nifi-minifi-cpp/pull/1587#discussion_r1263391342 ## METRICS.md: ## @@ -108,6 +108,15 @@ An agent identifier should also be defined to identify which agent the metric is nifi.metrics.publisher.agent.identifier=Agent1 +### Configure Prometheus metrics publisher with SSL + +The communication between MiNiFi and the Prometheus server can be encrypted using SSL. This can be achieved by adding the SSL certificate path (a single file containing both the certificate and the SSL key) and optionally adding the root CA path when using a self signed certificate to the minifi.properties file. Here is an example with the SSL properties: Review Comment: The root CA is for the server certificate, isn't it? I would make that clearer: ```suggestion The communication between MiNiFi and the Prometheus server can be encrypted using SSL. This can be achieved by adding the SSL certificate path (a single file containing both the client certificate and the client SSL key) and optionally adding the root CA path if the Prometheus server uses a self-signed certificate, to the minifi.properties file. Here is an example with the SSL properties: ``` ## extensions/prometheus/PrometheusExposerWrapper.cpp: ## @@ -20,9 +20,27 @@ namespace org::apache::nifi::minifi::extensions::prometheus { -PrometheusExposerWrapper::PrometheusExposerWrapper(uint32_t port) -: exposer_(std::to_string(port)) { - logger_->log_info("Started Prometheus metrics publisher on port %" PRIu32, port); +PrometheusExposerWrapper::PrometheusExposerWrapper(const PrometheusExposerConfig& config) +: exposer_(parseExposerConfig(config)) { + logger_->log_info("Started Prometheus metrics publisher on port %" PRIu32, config.port); Review Comment: It could be useful to add "with TLS enabled" to this log message if the config contains a certificate. 
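The METRICS.md passage quoted above ends with "Here is an example with the SSL properties:", but the example itself is not included in this excerpt. A hedged sketch of what such a minifi.properties fragment might look like — the property names and file paths below are illustrative assumptions, not confirmed from the patch:

```properties
# Illustrative sketch only; exact property names may differ in the merged change.
nifi.metrics.publisher.PrometheusMetricsPublisher.port=9936
# Single PEM file containing both the client certificate and the client SSL key (assumed name)
nifi.metrics.publisher.PrometheusMetricsPublisher.certificate=/opt/minifi/certs/publisher.pem
# Root CA used to verify the Prometheus server certificate when it is self-signed (assumed name)
nifi.metrics.publisher.PrometheusMetricsPublisher.ca.certificate=/opt/minifi/certs/root-ca.pem
```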
## docker/test/integration/cluster/containers/PrometheusContainer.py:

@@ -16,13 +16,53 @@
```python
import os
import tempfile
import docker.types

from .Container import Container
from OpenSSL import crypto
from ssl_utils.SSL_cert_utils import make_cert_without_extended_usage


class PrometheusContainer(Container):
    def __init__(self, feature_context, name, vols, network, image_store, command=None, ssl=False):
        engine = "prometheus-ssl" if ssl else "prometheus"
        super().__init__(feature_context, name, engine, vols, network, image_store, command)
        self.ssl = ssl
        if ssl:
            prometheus_cert, prometheus_key = make_cert_without_extended_usage(f"prometheus-{feature_context.id}", feature_context.root_ca_cert, feature_context.root_ca_key)

            self.root_ca_file = tempfile.NamedTemporaryFile(delete=False)
            self.root_ca_file.write(crypto.dump_certificate(type=crypto.FILETYPE_PEM, cert=feature_context.root_ca_cert))
            self.root_ca_file.close()
            os.chmod(self.root_ca_file.name, 0o644)

            self.prometheus_cert_file = tempfile.NamedTemporaryFile(delete=False)
            self.prometheus_cert_file.write(crypto.dump_certificate(type=crypto.FILETYPE_PEM, cert=prometheus_cert))
            self.prometheus_cert_file.close()
            os.chmod(self.prometheus_cert_file.name, 0o644)

            self.prometheus_key_file = tempfile.NamedTemporaryFile(delete=False)
            self.prometheus_key_file.write(crypto.dump_privatekey(type=crypto.FILETYPE_PEM, pkey=prometheus_key))
            self.prometheus_key_file.close()
            os.chmod(self.prometheus_key_file.name, 0o644)

            prometheus_yml_content = """
global:
  scrape_interval: 2s
  evaluation_interval: 15s
scrape_configs:
  - job_name: "minifi"
    static_configs:
      - targets: ["minifi-cpp-flow-{feature_id}:9936"]
    scheme: https
    tls_config:
      ca_file: /etc/prometheus/certs/root-ca.pem
""".format(feature_id=self.feature_context.id)
            self.yaml_file = tempfile.NamedTemporaryFile(delete=False)
            self.yaml_file.write(prometheus_yml_content.encode())
            self.yaml_file.close()
            os.chmod(self.yaml_file.name, 0o644)
        else:
            prometheus_yml_content = """
```

Review Comment: This looks a bit confusing to me. I think setting something like `extra_ssl_settings` to either something or nothing, and then having a single
```python
prometheus_yml_content = """
global:
  scrape_interval: 2s
  evaluation_interval: 15s
scrape_configs:
  - job_name: "minifi"
    static_configs:
      - targets: ["minifi-cpp-flow-{feature_id}:9936"]
{extra_ssl_setti
```
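The refactor fgerlits suggests — a single YAML template with an optional TLS section spliced in, instead of two near-duplicate templates — can be sketched as follows. This is an illustration of the suggested shape, not the code from the PR; the function and argument names are invented here:

```python
# Sketch of the suggested refactor: one prometheus.yml template, with the
# SSL-specific lines substituted in only when TLS is enabled.
# Illustrative only; names do not come from the PR.

def build_prometheus_yml(feature_id: str, ssl: bool) -> str:
    extra_ssl_settings = ""
    if ssl:
        extra_ssl_settings = (
            "    scheme: https\n"
            "    tls_config:\n"
            "      ca_file: /etc/prometheus/certs/root-ca.pem\n"
        )
    return """global:
  scrape_interval: 2s
  evaluation_interval: 15s
scrape_configs:
  - job_name: "minifi"
    static_configs:
      - targets: ["minifi-cpp-flow-{feature_id}:9936"]
{extra_ssl_settings}""".format(feature_id=feature_id, extra_ssl_settings=extra_ssl_settings)

# Both variants come from the same template, so only the TLS lines differ.
print(build_prometheus_yml("example-feature", ssl=True))
```

This keeps the SSL and non-SSL configurations from drifting apart, which appears to be the point of the review comment.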
[GitHub] [nifi-minifi-cpp] martinzink opened a new pull request, #1605: MINIFICPP-2160 Change clear-actions-cache.yml from cron to workflow_run
martinzink opened a new pull request, #1605: URL: https://github.com/apache/nifi-minifi-cpp/pull/1605

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically main)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.