[jira] [Resolved] (NIFI-10210) Processor supporting Apache iotdb database
[ https://issues.apache.org/jira/browse/NIFI-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-10210. - Resolution: Duplicate > Processor supporting Apache iotdb database > -- > > Key: NIFI-10210 > URL: https://issues.apache.org/jira/browse/NIFI-10210 > Project: Apache NiFi > Issue Type: New Feature >Reporter: luke.miao >Priority: Major > Attachments: image-2022-07-11-16-23-46-844.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > > I have developed three processors based on the Apache IoTDB database: one query > processor and two insertion processors. I want to submit a PR to the NiFi GitHub repository. > !image-2022-07-11-16-23-46-844.png! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-10234) Add Put Processor for Apache IoTDB
[ https://issues.apache.org/jira/browse/NIFI-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10234: Summary: Add Put Processor for Apache IoTDB (was: implement a processor of Apache IoTDB) > Add Put Processor for Apache IoTDB > -- > > Key: NIFI-10234 > URL: https://issues.apache.org/jira/browse/NIFI-10234 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Xuan Ronaldo >Assignee: Xuan Ronaldo >Priority: Major > Attachments: README.md, 屏幕截图 2022-07-26 232200.png > > Time Spent: 11.5h > Remaining Estimate: 0h > > Hi, folks > > I'm a contributor to Apache IoTDB. Recently, I implemented a processor > which can write data to IoTDB. I'd like to submit it to NiFi as a > built-in processor. In addition, more processors and controller services > will be developed. > > Regards, > Xuan Ronaldo
[jira] [Resolved] (NIFI-10234) Add Put Processor for Apache IoTDB
[ https://issues.apache.org/jira/browse/NIFI-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-10234. - Fix Version/s: 1.20.0 Resolution: Fixed > Add Put Processor for Apache IoTDB > -- > > Key: NIFI-10234 > URL: https://issues.apache.org/jira/browse/NIFI-10234 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Xuan Ronaldo >Assignee: Xuan Ronaldo >Priority: Major > Fix For: 1.20.0 > > Attachments: README.md, 屏幕截图 2022-07-26 232200.png > > Time Spent: 11.5h > Remaining Estimate: 0h > > Hi, folks > > I'm a contributor to Apache IoTDB. Recently, I implemented a processor > which can write data to IoTDB. I'd like to submit it to NiFi as a > built-in processor. In addition, more processors and controller services > will be developed. > > Regards, > Xuan Ronaldo
[jira] [Commented] (NIFI-10234) implement a processor of Apache IoTDB
[ https://issues.apache.org/jira/browse/NIFI-10234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17657046#comment-17657046 ] ASF subversion and git services commented on NIFI-10234: Commit 40ccb71f85da3a05408ea2335d90372b709f9896 in nifi's branch refs/heads/main from lizhizhou [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=40ccb71f85 ] NIFI-10234 Added PutIoTDBRecord Processor This closes #6416 Signed-off-by: David Handermann Co-authored-by: David Handermann Co-authored-by: Xuan Ronaldo Co-authored-by: Zhizhou Li > implement a processor of Apache IoTDB > - > > Key: NIFI-10234 > URL: https://issues.apache.org/jira/browse/NIFI-10234 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Xuan Ronaldo >Assignee: Xuan Ronaldo >Priority: Major > Attachments: README.md, 屏幕截图 2022-07-26 232200.png > > Time Spent: 11.5h > Remaining Estimate: 0h > > Hi, folks > > I'm a contributor to Apache IoTDB. Recently, I implemented a processor > which can write data to IoTDB. I'd like to submit it to NiFi as a > built-in processor. In addition, more processors and controller services > will be developed. > > Regards, > Xuan Ronaldo
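The PutIoTDBRecord processor added by this commit writes record fields into IoTDB time series. As a rough, hedged illustration of the mapping involved (not the processor's actual code), the sketch below builds an IoTDB-style INSERT statement from a record map; the timestamp field name `Time` and the device path used in the example are assumptions made for illustration.

```java
import java.util.Map;

// Illustrative sketch only: how a record's fields might map onto an
// IoTDB-style INSERT statement, with one field ("Time" -- an assumption
// for this example) treated as the timestamp and the rest as measurements.
public class IoTDBRecordMapping {

    static final String TIME_FIELD = "Time"; // hypothetical timestamp field name

    public static String toInsertStatement(final String device, final Map<String, Object> record) {
        final long time = ((Number) record.get(TIME_FIELD)).longValue();
        final StringBuilder names = new StringBuilder();
        final StringBuilder values = new StringBuilder();
        for (final Map.Entry<String, Object> entry : record.entrySet()) {
            if (TIME_FIELD.equals(entry.getKey())) {
                continue; // the timestamp is not a measurement column
            }
            if (names.length() > 0) {
                names.append(',');
                values.append(',');
            }
            names.append(entry.getKey());
            values.append(entry.getValue());
        }
        return String.format("INSERT INTO %s(timestamp,%s) VALUES(%d,%s)", device, names, time, values);
    }
}
```

A real processor would use IoTDB's session API rather than string-built SQL; the point here is only the record-to-timeseries shape of the conversion.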
[GitHub] [nifi] exceptionfactory closed pull request #6416: NIFI-10234 implement PutIoTDB
exceptionfactory closed pull request #6416: NIFI-10234 implement PutIoTDB URL: https://github.com/apache/nifi/pull/6416 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] mattyb149 opened a new pull request, #6834: NIFI-11036: Add Cluster Summary Metrics to Prometheus endpoint
mattyb149 opened a new pull request, #6834: URL: https://github.com/apache/nifi/pull/6834 # Summary [NIFI-11036](https://issues.apache.org/jira/browse/NIFI-11036) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-11036` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-11036` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [x] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
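NiFi serves flow metrics in Prometheus exposition format from its REST API, and this PR adds cluster summary metrics to that output. As a hedged sketch of consuming such output, the parser below handles generic Prometheus text; the metric name used in the test (`cluster_connected_node_count`) is a placeholder, not a confirmed NiFi metric name.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Generic parser for Prometheus exposition-format text, the format served
// by NiFi's REST metrics endpoint. Comment (#) lines are skipped; each
// sample line is split at its last space into "name{labels}" and a value.
public class PrometheusText {

    public static Map<String, Double> parse(final String body) {
        final Map<String, Double> metrics = new LinkedHashMap<>();
        for (String line : body.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) {
                continue; // skip HELP/TYPE comments and blank lines
            }
            final int lastSpace = line.lastIndexOf(' ');
            if (lastSpace < 0) {
                continue; // not a sample line
            }
            metrics.put(line.substring(0, lastSpace), Double.parseDouble(line.substring(lastSpace + 1)));
        }
        return metrics;
    }
}
```

In practice the text would be fetched from the REST API metrics endpoint (commonly `/nifi-api/flow/metrics/prometheus`) and the exact metric names checked against the actual response.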
[jira] [Updated] (NIFI-11004) Add Documentation for OIDC Groups Claim Property
[ https://issues.apache.org/jira/browse/NIFI-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-11004: Fix Version/s: 1.20.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add Documentation for OIDC Groups Claim Property > > > Key: NIFI-11004 > URL: https://issues.apache.org/jira/browse/NIFI-11004 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.20.0 > > Time Spent: 50m > Remaining Estimate: 0h > > NiFi 1.19.0 included support for retrieving group membership from a > configurable ID token claim as part of the OIDC authentication process. The > Administrator's Guide should be updated to describe this property and the > associated default value.
[jira] [Updated] (NIFI-10993) PublishKafkaRecord should write key record (when configured) using correct schema
[ https://issues.apache.org/jira/browse/NIFI-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paul Grey updated NIFI-10993: - Status: Patch Available (was: In Progress) > PublishKafkaRecord should write key record (when configured) using correct > schema > - > > Key: NIFI-10993 > URL: https://issues.apache.org/jira/browse/NIFI-10993 > Project: Apache NiFi > Issue Type: Bug >Reporter: Paul Grey >Assignee: Paul Grey >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > community report via > (https://www.mail-archive.com/users@nifi.apache.org/msg15668.html) > to us...@nifi.apache.org > To whom it may concern, > Hello, I would like to report an issue for NiFi. Following the new Jira > Guidelines, I would like to request an account for ASF Jira in > order to create a ticket. > Regarding the bug: using NiFi 1.19.1, I would like to send a tombstone > message (null payload) to Kafka and use the Confluent JDBC sink connector > (with delete.enabled=true) to delete a record in our Postgres database. I > believe as of NiFi 1.19, PublishKafkaRecord_2_6 now supports the 'Publish > Strategy: Use Wrapper' feature, which allows setting the Kafka message key and > value (Primary Key as the Kafka key, null for the Kafka value).
For the > Record Key Writer, I'm using an AvroRecordSetWriter to validate and serialize > the key against the Confluent schema registry (Schema Write Strategy: > Confluent Schema Registry Reference, Schema Access Strategy: Use 'Schema > Name' Property), but when sending the message I come across the error: > PublishKafkaRecord_2_6[id=XXX] Failed to send FlowFile[filename=XXX] to > Kafka: org.apache.nifi.processor.exception.ProcessException: Could not > determine the Avro Schema to use for writing the content > - Caused by: org.apache.nifi.schema.access.SchemaNotFoundException: Cannot > write Confluent Schema Registry Reference because the Schema Identifier is > not known > I can confirm the configuration for the AvroRecordSetWriter and > ConfluentSchemaRegistry controllers, and the PublishKafkaRecord processor are all > configured correctly, as I can send the Kafka message just fine using the > default Publish Strategy (Use Content as Record Value). It only occurs using > Use Wrapper and the ConfluentSchemaRegistry. > A workaround that has worked was using JsonRecordSetWriter w/ embedded > JSON schemas, but it would be nice to continue using our Avro Schema Registry > for this. > I'd appreciate it if someone had any advice or experience with this issue; > otherwise I'd like to log an issue in JIRA. > Thank you, > Austin Tao -- This message was sent by Atlassian Jira (v8.20.10#820010)
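To make the report above concrete: under the 'Use Wrapper' publish strategy each outgoing message carries a key record and a value that may be null, and a null value is a tombstone that delete-enabled JDBC sinks translate into a DELETE. The minimal sketch below models that envelope; the class and field names are illustrative, not NiFi's internal types.

```java
// Conceptual model of the 'Use Wrapper' publish strategy described above:
// each message is an envelope with a key and a possibly-null value. A null
// value is a tombstone, signalling a delete to sinks with
// delete.enabled=true. Names here are illustrative, not NiFi's own types.
public class KafkaEnvelope {

    public final Object key;
    public final Object value; // null => tombstone

    public KafkaEnvelope(final Object key, final Object value) {
        this.key = key;
        this.value = value;
    }

    public boolean isTombstone() {
        return value == null;
    }
}
```

The bug in this issue is that the key half of the envelope was not written with its own schema; the tombstone case makes that visible because only the key carries data.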
[GitHub] [nifi] greyp9 commented on a diff in pull request #6833: NIFI-10993 - PublishKafkaRecord should use correct record schema
greyp9 commented on code in PR #6833: URL: https://github.com/apache/nifi/pull/6833#discussion_r1066408810 ## nifi-nar-bundles/nifi-kafka-bundle/nifi-kafka-2-6-processors/src/test/java/org/apache/nifi/processors/kafka/pubsub/TestPublishKafkaMock.java: ## @@ -368,7 +511,7 @@ private PublisherLease getPublisherLease(final Collection
[GitHub] [nifi] greyp9 opened a new pull request, #6833: NIFI-10993 - PublishKafkaRecord should use correct record schema
greyp9 opened a new pull request, #6833: URL: https://github.com/apache/nifi/pull/6833 Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [x] Build completed using `mvn clean install -P contrib-check` - [x] JDK 8 - [x] JDK 11 - [x] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Commented] (NIFI-11004) Add Documentation for OIDC Groups Claim Property
[ https://issues.apache.org/jira/browse/NIFI-11004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656995#comment-17656995 ] ASF subversion and git services commented on NIFI-11004: Commit 0c0f7e87be4b2b51297fb4717335ca87f6089fae in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0c0f7e87be ] NIFI-11004 Added documentation for OIDC groups claim property This closes #6802 Signed-off-by: Paul Grey > Add Documentation for OIDC Groups Claim Property > > > Key: NIFI-11004 > URL: https://issues.apache.org/jira/browse/NIFI-11004 > Project: Apache NiFi > Issue Type: Improvement > Components: Documentation & Website >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Time Spent: 50m > Remaining Estimate: 0h > > NiFi 1.19.0 included support for retrieving group membership from a > configurable ID token claim as part of the OIDC authentication process. The > Administrator's Guide should be updated to describe this property and the > associated default value.
[GitHub] [nifi] greyp9 closed pull request #6802: NIFI-11004 Add documentation for OIDC groups claim property
greyp9 closed pull request #6802: NIFI-11004 Add documentation for OIDC groups claim property URL: https://github.com/apache/nifi/pull/6802
[jira] [Commented] (NIFI-10993) PublishKafkaRecord should write key record (when configured) using correct schema
[ https://issues.apache.org/jira/browse/NIFI-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656970#comment-17656970 ] Paul Grey commented on NIFI-10993: -- Took a good bit of setup and testing, but I think I have a handle on this one. I'll post a PR once automation completes. https://github.com/greyp9/nifi/actions/runs/3887137106 > PublishKafkaRecord should write key record (when configured) using correct > schema > - > > Key: NIFI-10993 > URL: https://issues.apache.org/jira/browse/NIFI-10993 > Project: Apache NiFi > Issue Type: Bug >Reporter: Paul Grey >Assignee: Paul Grey >Priority: Minor > > community report via > (https://www.mail-archive.com/users@nifi.apache.org/msg15668.html) > to us...@nifi.apache.org > To whom it may concern, > Hello, I would like to report an issue for NiFi. Following the new Jira > Guidelines, I would like to request an account for ASF Jira in > order to create a ticket. > Regarding the bug: using NiFi 1.19.1, I would like to send a tombstone > message (null payload) to Kafka and use the Confluent JDBC sink connector > (with delete.enabled=true) to delete a record in our Postgres database. I > believe as of NiFi 1.19, PublishKafkaRecord_2_6 now supports the 'Publish > Strategy: Use Wrapper' feature, which allows setting the Kafka message key and > value (Primary Key as the Kafka key, null for the Kafka value).
For the > Record Key Writer, I'm using an AvroRecordSetWriter to validate and serialize > the key against the Confluent schema registry (Schema Write Strategy: > Confluent Schema Registry Reference, Schema Access Strategy: Use 'Schema > Name' Property), but when sending the message I come across the error: > PublishKafkaRecord_2_6[id=XXX] Failed to send FlowFile[filename=XXX] to > Kafka: org.apache.nifi.processor.exception.ProcessException: Could not > determine the Avro Schema to use for writing the content > - Caused by: org.apache.nifi.schema.access.SchemaNotFoundException: Cannot > write Confluent Schema Registry Reference because the Schema Identifier is > not known > I can confirm the configuration for the AvroRecordSetWriter and > ConfluentSchemaRegistry controllers, and the PublishKafkaRecord processor are all > configured correctly, as I can send the Kafka message just fine using the > default Publish Strategy (Use Content as Record Value). It only occurs using > Use Wrapper and the ConfluentSchemaRegistry. > A workaround that has worked was using JsonRecordSetWriter w/ embedded > JSON schemas, but it would be nice to continue using our Avro Schema Registry > for this. > I'd appreciate it if someone had any advice or experience with this issue; > otherwise I'd like to log an issue in JIRA. > Thank you, > Austin Tao
[jira] [Created] (NIFI-11038) Add support of StandardProxyConfigurationService to QuerySalesforceObject
crissaegrim created NIFI-11038: -- Summary: Add support of StandardProxyConfigurationService to QuerySalesforceObject Key: NIFI-11038 URL: https://issues.apache.org/jira/browse/NIFI-11038 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: crissaegrim Currently, `QuerySalesforceObject` does not support `StandardProxyConfigurationService`.
[jira] [Created] (NIFI-11037) Add support of StandardProxyConfigurationService to StandardOauth2AccessTokenProvider
crissaegrim created NIFI-11037: -- Summary: Add support of StandardProxyConfigurationService to StandardOauth2AccessTokenProvider Key: NIFI-11037 URL: https://issues.apache.org/jira/browse/NIFI-11037 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: crissaegrim Currently, `StandardOauth2AccessTokenProvider` does not support `StandardProxyConfigurationService`. I'm aware I can probably use the `java.arg.x=-Dhttp...` JVM proxy flags, but this breaks S3 support because of a known issue with `aws-java-sdk`. See aws/aws-sdk-java issue #2797.
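For context on what a proxy configuration service supplies: a per-client proxy (type, host, port) that an HTTP client can use directly, instead of the JVM-wide `-D` proxy system properties mentioned above, which affect every library in the process. A minimal sketch using only the JDK's `java.net.Proxy`:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

// Minimal illustration of per-client proxy configuration, the model a
// service like StandardProxyConfigurationService enables: the proxy is
// handed to one client rather than set process-wide via -Dhttp.* flags.
public class ProxySketch {

    public static Proxy httpProxy(final String host, final int port) {
        // createUnresolved avoids a DNS lookup at construction time
        return new Proxy(Proxy.Type.HTTP, InetSocketAddress.createUnresolved(host, port));
    }
}
```

A `Proxy` built this way can be passed to `HttpURLConnection.openConnection(proxy)` or a client library's per-instance proxy setting, leaving other components (such as the AWS SDK) untouched.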
[jira] [Updated] (NIFI-10772) Unattributable error on nifi shutdown when controller service was unable to be started
[ https://issues.apache.org/jira/browse/NIFI-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-10772: Fix Version/s: 1.20.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Unattributable error on nifi shutdown when controller service was unable to > be started > -- > > Key: NIFI-10772 > URL: https://issues.apache.org/jira/browse/NIFI-10772 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.18.0, 1.20.0 >Reporter: Nissim Shiman >Assignee: Nissim Shiman >Priority: Major > Fix For: 1.20.0 > > Time Spent: 1h > Remaining Estimate: 0h > > This error occurs when nifi is unable to start a controller service that is > supposed to be in an enabled state. On shutdown, nifi will give an error > (stacktrace below). > To reproduce, for example with StandardRestrictedSSLContextService: > Enable StandardRestrictedSSLContextService > Shutdown nifi > remove the keystore StandardRestrictedSSLContextService relied on (or move it to a > different location on the filesystem) > start nifi > stop nifi > When nifi is shut down, the following uncaught/non-attributable error is in > nifi-app.log: > {code:java} > 2023-01-06 15:46:41,085 ERROR [Timer-Driven Process Thread-5] > org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task > java.util.concurrent.RejectedExecutionException: Task > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2867c735 > rejected from org.apache.nifi.engine.FlowEngine@a814d7d[Shutting down, pool size = 10, active threads = 3, > queued tasks = 0, completed tasks = 257823] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) > at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at > org.apache.nifi.engine.FlowEngine.schedule(FlowEngine.java:87) > at > org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:591) > at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > {code} > It is unclear from the current log output what the underlying cause was > (i.e. which controller service StandardControllerServiceNode is having > trouble with). > A similar non-attributable error is also seen on nifi shutdown for a > processor that relies on this controller service.
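The failure mode in the stack trace above can be reproduced in isolation: scheduling work on an executor that has already been shut down throws `RejectedExecutionException`, which is what `FlowEngine.schedule` hits when a component tries to start during shutdown. A minimal standalone sketch:

```java
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Standalone reproduction of the failure mode above: once an executor is
// shutting down, any further schedule() call is rejected with
// RejectedExecutionException.
public class RejectedOnShutdown {

    public static boolean scheduleAfterShutdown() {
        final ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(1);
        executor.shutdown();
        try {
            executor.schedule(() -> { }, 10, TimeUnit.MILLISECONDS);
            return false; // not reached: the task is rejected
        } catch (final RejectedExecutionException e) {
            return true; // this is what surfaces as the uncaught exception in the log above
        }
    }
}
```

Without a catch block at the scheduling site, this exception propagates to the thread's uncaught-exception handler, which is why the log carries no hint of which component triggered it.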
[jira] [Commented] (NIFI-10772) Unattributable error on nifi shutdown when controller service was unable to be started
[ https://issues.apache.org/jira/browse/NIFI-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656753#comment-17656753 ] ASF subversion and git services commented on NIFI-10772: Commit fe254242330c076fd1be522878b5b45fb7f5db63 in nifi's branch refs/heads/main from Nissim Shiman [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=fe25424233 ] NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to properly start Signed-off-by: Matthew Burgess This closes #6829 > Unattributable error on nifi shutdown when controller service was unable to > be started > -- > > Key: NIFI-10772 > URL: https://issues.apache.org/jira/browse/NIFI-10772 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.18.0, 1.20.0 >Reporter: Nissim Shiman >Assignee: Nissim Shiman >Priority: Major > Time Spent: 1h > Remaining Estimate: 0h > > This error occurs when nifi is unable to start a controller service that is > supposed to be in an enabled state. 
On shutdown, nifi will give an error > (stacktrace below). > To reproduce, for example with StandardRestrictedSSLContextService: > Enable StandardRestrictedSSLContextService > Shutdown nifi > remove the keystore StandardRestrictedSSLContextService relied on (or move it to a > different location on the filesystem) > start nifi > stop nifi > When nifi is shut down, the following uncaught/non-attributable error is in > nifi-app.log: > {code:java} > 2023-01-06 15:46:41,085 ERROR [Timer-Driven Process Thread-5] > org.apache.nifi.engine.FlowEngine Uncaught Exception in Runnable task > java.util.concurrent.RejectedExecutionException: Task > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@2867c735 > rejected from org.apache.nifi.engine.FlowEngine@a814d7d[Shutting down, pool size = 10, active threads = 3, > queued tasks = 0, completed tasks = 257823] > at > java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063) > at > java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) > at > java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326) > at > java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:533) > at org.apache.nifi.engine.FlowEngine.schedule(FlowEngine.java:87) > at > org.apache.nifi.controller.service.StandardControllerServiceNode$2.run(StandardControllerServiceNode.java:591) > at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:750) > {code} > It is unclear from the current log output what the underlying cause was > (i.e. which controller service StandardControllerServiceNode is having > trouble with). > A similar non-attributable error is also seen on nifi shutdown for a > processor that relies on this controller service.
[GitHub] [nifi] mattyb149 closed pull request #6829: NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to start
mattyb149 closed pull request #6829: NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to start URL: https://github.com/apache/nifi/pull/6829
[GitHub] [nifi] mattyb149 commented on pull request #6829: NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to start
mattyb149 commented on PR #6829: URL: https://github.com/apache/nifi/pull/6829#issuecomment-1377709038 +1 LGTM, thanks for the improvement! Merging to main
[GitHub] [nifi] mattyb149 commented on a diff in pull request #6829: NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to start
mattyb149 commented on code in PR #6829: URL: https://github.com/apache/nifi/pull/6829#discussion_r1066186479 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/StandardProcessorNode.java: ## @@ -1799,8 +1800,15 @@ private void initiateStart(final ScheduledExecutorService taskScheduler, final l return null; }; -// Trigger the task in a background thread. -final Future taskFuture = schedulingAgentCallback.scheduleTask(startupTask); +final Future taskFuture; +try { +// Trigger the task in a background thread. +taskFuture = schedulingAgentCallback.scheduleTask(startupTask); +} catch (RejectedExecutionException rejectedExecutionException) { +final ValidationState validationState = getValidationState(); +LOG.error("Unable to start {}. Last known validation state was {} : {}", this, validationState, validationState.getValidationErrors(), rejectedExecutionException); +return; Review Comment: Thanks for the explanation! Makes sense
[jira] [Updated] (NIFI-9167) Refactor nifi-framework-bundle to use JUnit 5
[ https://issues.apache.org/jira/browse/NIFI-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-9167: --- Fix Version/s: 1.20.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Refactor nifi-framework-bundle to use JUnit 5 > - > > Key: NIFI-9167 > URL: https://issues.apache.org/jira/browse/NIFI-9167 > Project: Apache NiFi > Issue Type: Sub-task >Reporter: Mike Thomsen >Assignee: David Handermann >Priority: Minor > Fix For: 1.20.0 > > Time Spent: 1.5h > Remaining Estimate: 0h >
[jira] [Commented] (NIFI-9167) Refactor nifi-framework-bundle to use JUnit 5
[ https://issues.apache.org/jira/browse/NIFI-9167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656751#comment-17656751 ] ASF subversion and git services commented on NIFI-9167: --- Commit 0d9dc6c540a54edf5fd83f28b89f63c1cfb75486 in nifi's branch refs/heads/main from David Handermann [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0d9dc6c540 ] NIFI-9167 Converted remaining nifi-framework tests to JUnit 5 NIFI-9167 Addressed feedback and improved tests using TempDir Signed-off-by: Matthew Burgess This closes #6823 > Refactor nifi-framework-bundle to use JUnit 5 > - > > Key: NIFI-9167 > URL: https://issues.apache.org/jira/browse/NIFI-9167 > Project: Apache NiFi > Issue Type: Sub-task >Reporter: Mike Thomsen >Assignee: David Handermann >Priority: Minor > Time Spent: 1.5h > Remaining Estimate: 0h >
[GitHub] [nifi] mattyb149 closed pull request #6823: NIFI-9167 Converted remaining nifi-framework tests to JUnit 5
mattyb149 closed pull request #6823: NIFI-9167 Converted remaining nifi-framework tests to JUnit 5 URL: https://github.com/apache/nifi/pull/6823
[jira] [Updated] (NIFI-11036) Add Cluster Summary metrics to Prometheus components/endpoint
[ https://issues.apache.org/jira/browse/NIFI-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-11036: Description: It would be nice to include Cluster Summary metrics such as "connected node count" and "total node count" to the NiFi REST API Prometheus endpoint. (was: It would be nice to include Cluster Summary metrics such as "connected node count" and "total node count". This could be added to both the NiFi REST API Prometheus endpoint as well as the ReportingTask and RecordSink.) > Add Cluster Summary metrics to Prometheus components/endpoint > - > > Key: NIFI-11036 > URL: https://issues.apache.org/jira/browse/NIFI-11036 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Minor > > It would be nice to include Cluster Summary metrics such as "connected node > count" and "total node count" to the NiFi REST API Prometheus endpoint.
[jira] [Assigned] (NIFI-11036) Add Cluster Summary metrics to Prometheus components/endpoint
[ https://issues.apache.org/jira/browse/NIFI-11036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-11036: --- Assignee: Matt Burgess > Add Cluster Summary metrics to Prometheus components/endpoint > - > > Key: NIFI-11036 > URL: https://issues.apache.org/jira/browse/NIFI-11036 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Minor > > It would be nice to include Cluster Summary metrics such as "connected node > count" and "total node count". This could be added to both the NiFi REST API > Prometheus endpoint as well as the ReportingTask and RecordSink.
[GitHub] [nifi] NissimShiman commented on a diff in pull request #6829: NIFI-10772 Clarify logs on shutdown where controller service and/or processor were unable to start
NissimShiman commented on code in PR #6829: URL: https://github.com/apache/nifi/pull/6829#discussion_r1066085426 ## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/StandardProcessorNode.java: ## @@ -1799,8 +1800,15 @@ private void initiateStart(final ScheduledExecutorService taskScheduler, final l return null; }; -// Trigger the task in a background thread. -final Future taskFuture = schedulingAgentCallback.scheduleTask(startupTask); +final Future taskFuture; +try { +// Trigger the task in a background thread. +taskFuture = schedulingAgentCallback.scheduleTask(startupTask); +} catch (RejectedExecutionException rejectedExecutionException) { +final ValidationState validationState = getValidationState(); +LOG.error("Unable to start {}. Last known validation state was {} : {}", this, validationState, validationState.getValidationErrors(), rejectedExecutionException); +return; Review Comment: Thanks very much to @mattyb149 for looking at this (as well as NIFI-10608 just last week)! Nice observations... I just tested this with throwing the exception instead of returning, and we end up with the stack trace in the logs an additional time (with the second stack trace starting with the additional line ["Uncaught Exception in Runnable Task"](https://github.com/apache/nifi/blob/rel/nifi-1.19.1/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core-api/src/main/java/org/apache/nifi/engine/FlowEngine.java#L112)), so if I am understanding this correctly, returning looks to be cleaner in this case. I also tried to see what happens when neither returning nor re-throwing in the catch block, and the code fails/crashes shortly afterwards. (Yeah,
that would have been pretty neat if we could somehow work out a way to get the processor to start.) The try block fails to initialize [taskFuture](https://github.com/apache/nifi/blob/dbd3a88ac55112812138b048221c1dc35c5ecdad/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/StandardProcessorNode.java#L1806), which is needed [a little later](https://github.com/apache/nifi/blob/dbd3a88ac55112812138b048221c1dc35c5ecdad/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-components/src/main/java/org/apache/nifi/controller/StandardProcessorNode.java#L1827). Normally, when a processor has no issues/exceptions, it sails through this code and starts right up, but if there is an exception we are stuck.
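The catch-and-return pattern discussed above can be illustrated outside NiFi with a plain `ExecutorService`. This is a minimal, hypothetical sketch: the `ScheduleSketch` class and `tryScheduleStartup` method are invented here, and `ExecutorService.submit` stands in for `schedulingAgentCallback.scheduleTask`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

public class ScheduleSketch {

    // Returns true when the task was scheduled; false when the executor
    // rejected it (for example, because it has already been shut down).
    public static boolean tryScheduleStartup(ExecutorService scheduler, Runnable startupTask) {
        final Future<?> taskFuture;
        try {
            taskFuture = scheduler.submit(startupTask);
        } catch (RejectedExecutionException e) {
            // Log and return instead of rethrowing: rethrowing would make the
            // engine log the same stack trace a second time, and falling
            // through would leave taskFuture uninitialized for later use.
            System.err.println("Unable to schedule startup task: " + e);
            return false;
        }
        return taskFuture != null;
    }

    public static void main(String[] args) {
        ExecutorService scheduler = Executors.newSingleThreadExecutor();
        scheduler.shutdown();
        // submit() on a shut-down executor throws RejectedExecutionException.
        System.out.println(tryScheduleStartup(scheduler, () -> {})); // prints "false"
    }
}
```

With a live executor the method returns true; the point is only that a rejected submission is reported exactly once and the start attempt abandoned cleanly.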
[jira] [Comment Edited] (NIFI-10582) XSLT element does not work in NiFi 1.15.3
[ https://issues.apache.org/jira/browse/NIFI-10582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656209#comment-17656209 ] Daniel Stieglitz edited comment on NIFI-10582 at 1/10/23 5:27 PM: -- [~gkonar] I believe I see what the issue here is, but I am not enough of an XSLT guru to solve the exact problem. I believe you are experiencing the issue seen in the following Stack Overflow post: [Why is xsl:value-of behaving completely different depending on the xsl:stylesheet version|https://stackoverflow.com/questions/73497698/why-is-xslvalue-of-behaving-completely-different-depending-on-the-xslstyleshee]. Please note that under the hood TransformXml uses Saxon HE 10.6, which conforms to the W3C Recommendations for XSLT 3.0, XPath 3.1, and XQuery 3.1, while xsltproc is an XSLT 1.0 processor, as stated [here|https://stackoverflow.com/questions/25061696/xsltproc-doesnt-recognize-xslt-2-0]. Hence you are seeing a difference in how xsl:value-of is interpreted on line 182 of your XSLT: {code:java} {code} Please note that the new line is being inserted for both files when there is no text for the node, but rather a sequence of empty spaces. In 14_R01.svg, lines 49-50: {code:java} {code} In 21_R01.svg, lines 43-44: {code:java} {code} Based on this [documentation|https://xsltdev.com/xslt/xsl-value-of/], XSLT 1.0 and 2.0 (and, I believe, 3.0) differ in how they handle a select expression that evaluates to a sequence containing more than one item.
Without a separator specified (and svgTest.xsl does not specify one), XSLT 1.0 considers only the first item, but XSLT 2.0 (and 3.0) uses the whole sequence of items, separated by the default separator, a space. When using xsltproc, which is an XSLT 1.0 processor, there is no new line, since only the first item (a blank space) is chosen; when using TransformXml, a "new line" is inserted, as it is really the sequence of spaces found in the node, separated by single spaces.
So in a sense it is part of that data line. It looks like you need some sort of trim function to get rid of the sequence of spaces when capturing the text. As for a possible backwards-compatible mode for XSLT 1.0, which seemed possible based on the first article I quoted, it appears there is none for XSLT 3.0, based on the following [support ticket|https://saxonica.plan.io/issues/4266]. Hence I believe what you have observed is not a bug but rather a consequence of using XSLT 3.0. Please let me know if you concur with this conclusion. > XSLT element does not work in NiFi 1.15.3 >
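For reference, the trim suggested above can be written directly in the stylesheet. This is only a hedged sketch (the `select="."` shown here stands in for whatever node svgTest.xsl actually reads on line 182): `normalize-space()` is the standard XSLT 1.0+ function that strips leading/trailing whitespace and collapses internal runs of spaces, and XSLT 2.0+ additionally offers an explicit `separator` attribute on xsl:value-of.

{code:xml}
<!-- XSLT 1.0 and later: drop the whitespace-only content entirely -->
<xsl:value-of select="normalize-space(.)"/>

<!-- XSLT 2.0 and later only: keep the multi-item sequence but join it
     with an empty separator instead of the default single space -->
<xsl:value-of select="." separator=""/>
{code}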
[GitHub] [nifi] krisztina-zsihovszki opened a new pull request, #6832: NIFI-10965 PutGoogleDrive
krisztina-zsihovszki opened a new pull request, #6832: URL: https://github.com/apache/nifi/pull/6832 # Summary [NIFI-10965](https://issues.apache.org/jira/browse/NIFI-10965) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-10965` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-10965` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [x] JDK 8 - [x] JDK 11 - [x] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [x] Documentation formatting appears as expected in rendered files
[jira] [Assigned] (NIFI-10836) Support Receiving RFC 3195 Syslog Messages
[ https://issues.apache.org/jira/browse/NIFI-10836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lehel Boér reassigned NIFI-10836: - Assignee: Lehel Boér (was: Nathan Gough) > Support Receiving RFC 3195 Syslog Messages > -- > > Key: NIFI-10836 > URL: https://issues.apache.org/jira/browse/NIFI-10836 > Project: Apache NiFi > Issue Type: Improvement > Components: MiNiFi >Reporter: CHANDAN KUMAR >Assignee: Lehel Boér >Priority: Major > > [RFC 3195|https://www.rfc-editor.org/rfc/rfc3195] defines a reliable delivery > format for syslog messages. The {{ListenTCP}} and {{ListenSyslog}} Processors > do not work with this format because messages span multiple lines and both > processors expect messages to be terminated by a single newline. A new > processor could be created to support handling RFC 3195 messages.
[jira] [Commented] (NIFI-8005) ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet with given name
[ https://issues.apache.org/jira/browse/NIFI-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656702#comment-17656702 ] David Handermann commented on NIFI-8005: [~dstiegli1] Yes, using this issue to cover those improvements sounds good. > ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet > with given name > - > > Key: NIFI-8005 > URL: https://issues.apache.org/jira/browse/NIFI-8005 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: Svyatoslav >Priority: Minor > Attachments: NIFI-8005.xlsx, image-2020-11-14-21-49-32-932.png, > image-2023-01-10-00-26-15-959.png > > > Input xlsx file contains sheets: sheet1, sheet2 > ConvertExcelToCSVProcessor configuration: > !image-2020-11-14-21-49-32-932.png! > Input file doesn't have sheet with name mysheet1. After being processed by > ConvertExcelToCSVProcessor it disappears: it is neither in any output or > input queues.
[jira] [Commented] (NIFI-8005) ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet with given name
[ https://issues.apache.org/jira/browse/NIFI-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656687#comment-17656687 ] Daniel Stieglitz commented on NIFI-8005: [~exceptionfactory] Can I use this ticket for the aforementioned improvements (documentation and a warning message if the sheet is not found), or should a new ticket be created for these? > ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet > with given name > - > > Key: NIFI-8005 > URL: https://issues.apache.org/jira/browse/NIFI-8005 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: Svyatoslav >Priority: Minor > Attachments: NIFI-8005.xlsx, image-2020-11-14-21-49-32-932.png, > image-2023-01-10-00-26-15-959.png > > > Input xlsx file contains sheets: sheet1, sheet2 > ConvertExcelToCSVProcessor configuration: > !image-2020-11-14-21-49-32-932.png! > Input file doesn't have sheet with name mysheet1. After being processed by > ConvertExcelToCSVProcessor it disappears: it is neither in any output or > input queues.
[jira] [Created] (NIFI-11036) Add Cluster Summary metrics to Prometheus components/endpoint
Matt Burgess created NIFI-11036: --- Summary: Add Cluster Summary metrics to Prometheus components/endpoint Key: NIFI-11036 URL: https://issues.apache.org/jira/browse/NIFI-11036 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Matt Burgess It would be nice to include Cluster Summary metrics such as "connected node count" and "total node count". This could be added to both the NiFi REST API Prometheus endpoint as well as the ReportingTask and RecordSink.
[jira] [Commented] (NIFI-8005) ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet with given name
[ https://issues.apache.org/jira/browse/NIFI-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656676#comment-17656676 ] David Handermann commented on NIFI-8005: [~dstiegli1] You are correct, so it would require some additional changes to track whether any of the sheets matched the requested configuration, and then log a warning in absence of finding any. > ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet > with given name > - > > Key: NIFI-8005 > URL: https://issues.apache.org/jira/browse/NIFI-8005 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: Svyatoslav >Priority: Minor > Attachments: NIFI-8005.xlsx, image-2020-11-14-21-49-32-932.png, > image-2023-01-10-00-26-15-959.png > > > Input xlsx file contains sheets: sheet1, sheet2 > ConvertExcelToCSVProcessor configuration: > !image-2020-11-14-21-49-32-932.png! > Input file doesn't have sheet with name mysheet1. After being processed by > ConvertExcelToCSVProcessor it disappears: it is neither in any output or > input queues.
[jira] [Commented] (NIFI-8005) ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet with given name
[ https://issues.apache.org/jira/browse/NIFI-8005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656674#comment-17656674 ] Daniel Stieglitz commented on NIFI-8005: [~exceptionfactory] I believe that debug statement is not for when a sheet is not found but rather when it cannot be parsed from the PropertyDescriptor. > ConvertExcelToCSVProcessor swallows flow file if it doesn't contain sheet > with given name > - > > Key: NIFI-8005 > URL: https://issues.apache.org/jira/browse/NIFI-8005 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.12.1 >Reporter: Svyatoslav >Priority: Minor > Attachments: NIFI-8005.xlsx, image-2020-11-14-21-49-32-932.png, > image-2023-01-10-00-26-15-959.png > > > Input xlsx file contains sheets: sheet1, sheet2 > ConvertExcelToCSVProcessor configuration: > !image-2020-11-14-21-49-32-932.png! > Input file doesn't have sheet with name mysheet1. After being processed by > ConvertExcelToCSVProcessor it disappears: it is neither in any output or > input queues.
[jira] [Updated] (MINIFICPP-1995) Add configuring path for flowfile_checkpoint directory
[ https://issues.apache.org/jira/browse/MINIFICPP-1995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi updated MINIFICPP-1995: - Fix Version/s: 0.14.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add configuring path for flowfile_checkpoint directory > -- > > Key: MINIFICPP-1995 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1995 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Affects Versions: 0.12.0 >Reporter: Kondakov Artem >Assignee: Gábor Gyimesi >Priority: Major > Fix For: 0.14.0 > > Time Spent: 2h > Remaining Estimate: 0h > > In the minifi.properties file, there is no way to set the directory path for > flowfile_checkpoint, similar to > * nifi.provenance.repository.directory.default > * nifi.flowfile.repository.directory.default > * nifi.state.management.provider.local.path
[jira] [Resolved] (MINIFICPP-2023) MacOS and docker github actions fail with environmental issues
[ https://issues.apache.org/jira/browse/MINIFICPP-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-2023. -- Resolution: Fixed > MacOS and docker github actions fail with environmental issues > -- > > Key: MINIFICPP-2023 > URL: https://issues.apache.org/jira/browse/MINIFICPP-2023 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Major > Fix For: 0.14.0 > > Time Spent: 50m > Remaining Estimate: 0h > > MacOS environment failure: > Error: The `brew link` step did not complete successfully > The formula built, but is not symlinked into /usr/local > Could not symlink bin/2to3-3.11 > Target /usr/local/bin/2to3-3.11 > already exists. You may want to remove it: > rm '/usr/local/bin/2to3-3.11' > To force the link and overwrite all conflicting files: > brew link --overwrite python@3.11 > To list all files that would be deleted: > brew link --overwrite --dry-run python@3.11 > > > Docker test environment failure: > Exception AttributeError: module 'lib' has no attribute > 'OpenSSL_add_all_algorithms' > Traceback (most recent call last): > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/bin/behave", > line 8, in > sys.exit(main()) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/__main__.py", > line 183, in main > return run_behave(config) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/__main__.py", > line 127, in run_behave > failed = runner.run() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 804, in run > return self.run_with_paths() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 809, in run_with_paths > 
self.load_step_definitions() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 796, in load_step_definitions > load_step_modules(step_paths) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner_util.py", > line 412, in load_step_modules > exec_file(os.path.join(path, name), step_module_globals) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner_util.py", > line 386, in exec_file > exec(code, globals_, locals_) > File "steps/steps.py", line 19, in > from minifi.core.SSL_cert_utils import make_ca, make_cert, dump_certificate, > dump_privatekey > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/docker/test/integration/minifi/core/SSL_cert_utils.py", > line 22, in > from OpenSSL import crypto > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/OpenSSL/__init__.py", > line 8, in > from OpenSSL import crypto, SSL > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/OpenSSL/crypto.py", > line 3279, in > _lib.OpenSSL_add_all_algorithms()
[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1488: MINIFICPP-2023 Skip brew update to avoid python link failure
fgerlits closed pull request #1488: MINIFICPP-2023 Skip brew update to avoid python link failure URL: https://github.com/apache/nifi-minifi-cpp/pull/1488
[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1460: MINIFICPP-1995 Add configuring path for flowfile_checkpoint directory
fgerlits closed pull request #1460: MINIFICPP-1995 Add configuring path for flowfile_checkpoint directory URL: https://github.com/apache/nifi-minifi-cpp/pull/1460
[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1476: MINIFICPP-2000 Fixing GetFile's inconsistent attributes
fgerlits closed pull request #1476: MINIFICPP-2000 Fixing GetFile's inconsistent attributes URL: https://github.com/apache/nifi-minifi-cpp/pull/1476
[GitHub] [nifi-minifi-cpp] fgerlits closed pull request #1434: MINIFICPP-1949 ConsumeWindowsEventLog precompiled regex
fgerlits closed pull request #1434: MINIFICPP-1949 ConsumeWindowsEventLog precompiled regex URL: https://github.com/apache/nifi-minifi-cpp/pull/1434
[jira] [Commented] (NIFI-10341) Implement AzureGraphUserGroupProvider for Nifi Registry
[ https://issues.apache.org/jira/browse/NIFI-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17656649#comment-17656649 ] Martin commented on NIFI-10341: --- Guess one could reuse some of the code from the NiFi implementation into registry? https://github.com/apache/nifi/tree/main/nifi-nar-bundles/nifi-azure-bundle/nifi-azure-graph-authorizer > Implement AzureGraphUserGroupProvider for Nifi Registry > --- > > Key: NIFI-10341 > URL: https://issues.apache.org/jira/browse/NIFI-10341 > Project: Apache NiFi > Issue Type: Task > Components: NiFi Registry >Reporter: Martin >Priority: Major > > For Apache NiFi there is an implementation to use AzureGraphUserGroupProvider > for Authorization. See here: > [https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#azuregraphusergroupprovider] > While it is possible to add Azure SSO to NiFi Registry using the OIDC > properties in nifi-registry.properties, it is not possible to add > AzureGraphUserGroupProvider for authorization. > > _Duplicate from old NiFi Registry Jira project_ > _https://issues.apache.org/jira/browse/NIFIREG-458_
[GitHub] [nifi] briansolo1985 commented on a diff in pull request #6733: NIFI-10895 Update properties command for MiNiFi C2
briansolo1985 commented on code in PR #6733: URL: https://github.com/apache/nifi/pull/6733#discussion_r1065810905 ## c2/c2-client-bundle/c2-client-service/src/main/java/org/apache/nifi/c2/client/service/C2ClientService.java: ## @@ -59,11 +120,30 @@ private void processResponse(C2HeartbeatResponse response) { } } -private void handleRequestedOperations(List requestedOperations) { -for (C2Operation requestedOperation : requestedOperations) { -operationService.handleOperation(requestedOperation) -.ifPresent(client::acknowledgeOperation); +private boolean requiresRestart(C2OperationHandler c2OperationHandler, C2OperationAck c2OperationAck) { +return c2OperationHandler.requiresRestart() && isOperationFullyApplied(c2OperationAck); +} + +private static boolean isOperationFullyApplied(C2OperationAck c2OperationAck) { +return !Optional.ofNullable(c2OperationAck) Review Comment: This is equivalent to ```suggestion return Optional.ofNullable(c2OperationAck) .map(C2OperationAck::getOperationState) .map(C2OperationState::getState) .filter(FULLY_APPLIED::equals) .isPresent(); ``` ## minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/test/java/org/apache/nifi/minifi/c2/FileBasedRequestedOperationDAOTest.java: ## @@ -0,0 +1,114 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.nifi.minifi.c2; + +import static org.apache.nifi.minifi.c2.FileBasedRequestedOperationDAO.REQUESTED_OPERATIONS_FILE_NAME; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.ArgumentMatchers.anyList; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.when; + +import com.fasterxml.jackson.databind.ObjectMapper; +import java.io.File; +import java.io.IOException; +import java.util.Collections; +import java.util.Optional; +import org.apache.nifi.c2.client.service.operation.OperationQueue; +import org.apache.nifi.c2.protocol.api.C2Operation; +import org.apache.nifi.c2.protocol.api.OperandType; +import org.apache.nifi.c2.protocol.api.OperationType; +import org.junit.jupiter.api.BeforeEach; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.junit.jupiter.api.io.TempDir; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +@ExtendWith(MockitoExtension.class) +class FileBasedRequestedOperationDAOTest { + +@Mock +private ObjectMapper objectMapper; + +@TempDir +File tmpDir; + +private FileBasedRequestedOperationDAO fileBasedRequestedOperationDAO; + +@BeforeEach +void setup() { +fileBasedRequestedOperationDAO = new FileBasedRequestedOperationDAO(tmpDir.getAbsolutePath(), objectMapper); +} + +@Test +void shouldSaveRequestedOperationsToFile() throws IOException { +OperationQueue operationQueue = getOperationQueue(); +fileBasedRequestedOperationDAO.save(operationQueue); + +verify(objectMapper).writeValue(any(File.class), eq(operationQueue)); +} + +@Test +void 
shouldThrowRuntimeExceptionWhenExceptionHappensDuringSave() throws IOException { +doThrow(new RuntimeException()).when(objectMapper).writeValue(any(File.class), anyList()); + +assertThrows(RuntimeException.class, () -> fileBasedRequestedOperationDAO.save(mock(OperationQueue.class))); +} + +@Test +void shouldGetReturnEmptyWhenFileDoesntExists() { +assertEquals(Optional.empty(), fileBasedRequestedOperationDAO.load()); +} + +@Test +void shouldGetReturnEmptyWhenExceptionHappens() throws IOException { +new File(tmpDir.getAbsolutePath() + "/" + REQUESTED_OPERATIONS_FILE_NAME).createNewFile(); + +doThrow(new RuntimeException()).when(objectMapper).readValue(any(Fi
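Editor's note on the `Optional` suggestion in the review comment above: assuming the original negated chain filtered on states *different* from `FULLY_APPLIED`, the suggested positive `filter(FULLY_APPLIED::equals)` form agrees with it only when a state is actually present; the two diverge when the ack or its state is `null`. A stand-alone sketch (the `State` enum below is a stand-in for `C2OperationState.OperationState`, not the real C2 API):

```java
import java.util.Optional;

public class OptionalEquivalenceDemo {

    enum State { FULLY_APPLIED, NOT_APPLIED }

    // Negated shape: "there is no state present that differs from FULLY_APPLIED"
    static boolean negatedForm(State state) {
        return !Optional.ofNullable(state)
                .filter(s -> !State.FULLY_APPLIED.equals(s))
                .isPresent();
    }

    // Positive shape from the review suggestion: "a state is present and it is FULLY_APPLIED"
    static boolean positiveForm(State state) {
        return Optional.ofNullable(state)
                .filter(State.FULLY_APPLIED::equals)
                .isPresent();
    }

    public static void main(String[] args) {
        // Both forms agree whenever a state is present...
        if (negatedForm(State.FULLY_APPLIED) != positiveForm(State.FULLY_APPLIED)) throw new AssertionError();
        if (negatedForm(State.NOT_APPLIED) != positiveForm(State.NOT_APPLIED)) throw new AssertionError();
        // ...but diverge on null: the negated form treats "no state at all" as fully applied
        if (!negatedForm(null)) throw new AssertionError();
        if (positiveForm(null)) throw new AssertionError();
        System.out.println("checked");
    }
}
```

So the rename to `isOperationFullyApplied` plus the positive filter also tightens the null case, which is arguably the more intuitive behavior for an ack that never arrived.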
[GitHub] [nifi] briansolo1985 commented on a diff in pull request #6733: NIFI-10895 Update properties command for MiNiFi C2
briansolo1985 commented on code in PR #6733: URL: https://github.com/apache/nifi/pull/6733#discussion_r1065809530 ## minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/main/java/org/apache/nifi/minifi/c2/C2NifiClientService.java: ## @@ -152,15 +185,94 @@ private C2ClientConfig generateClientConfig(NiFiProperties properties) { } public void start() { -scheduledExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +handleOngoingOperations(requestedOperationDAO.get()); +heartbeatExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +} + +private synchronized void handleOngoingOperations(Optional<OperationQueue> operationQueue) { +LOGGER.info("Handling ongoing operations: {}", operationQueue); +if (operationQueue.isPresent()) { +try { +waitForAcknowledgeFromBootstrap(); + c2ClientService.handleRequestedOperations(operationQueue.get().getRemainingOperations()); +} catch (Exception e) { +LOGGER.error("Failed to process c2 operations queue", e); +c2ClientService.enableHeartbeat(); +} +} else { +c2ClientService.enableHeartbeat(); +} +} + +private void waitForAcknowledgeFromBootstrap() { +LOGGER.info("Waiting for ACK signal from Bootstrap"); +int currentWaitTime = 0; +while(!ackReceived) { +int sleep = 1000; +try { +Thread.sleep(sleep); +} catch (InterruptedException e) { +LOGGER.warn("Thread interrupted while waiting for Acknowledge"); +} +currentWaitTime += sleep; +if (MAX_WAIT_FOR_BOOTSTRAP_ACK_MS <= currentWaitTime) { +LOGGER.warn("Max wait time ({}) exceeded for waiting ack from bootstrap, skipping", MAX_WAIT_FOR_BOOTSTRAP_ACK_MS); +break; +} +} +} + +private void registerOperation(C2Operation c2Operation) { +try { +ackReceived = false; +registerAcknowledgeTimeoutTask(); +String command = c2Operation.getOperation().name() + (c2Operation.getOperand() 
!= null ? "_" + c2Operation.getOperand().name() : ""); +bootstrapCommunicator.sendCommand(command, objectMapper.writeValueAsString(c2Operation)); +} catch (IOException e) { +LOGGER.error("Failed to send operation to bootstrap", e); +throw new UncheckedIOException(e); +} +} + +private void registerAcknowledgeTimeoutTask() { +bootstrapAcknowledgeExecutorService.schedule(() -> { +if (!ackReceived) { +LOGGER.info("Does not received acknowledge from bootstrap after {} seconds. Handling remaining operations.", MINIFI_RESTART_TIMEOUT_SECONDS); +handleOngoingOperations(requestedOperationDAO.get()); +} +}, MINIFI_RESTART_TIMEOUT_SECONDS, TimeUnit.SECONDS); +} + +private void acknowledgeHandler(String[] params) { +LOGGER.info("Received acknowledge message from bootstrap process"); +if (params.length < 1) { +LOGGER.error("Invalid arguments coming from bootstrap, skipping acknowledging latest operation"); +return; +} + +Optional<OperationQueue> optionalOperationQueue = requestedOperationDAO.get(); +ackReceived = true; +optionalOperationQueue.ifPresent(operationQueue -> { +C2Operation c2Operation = operationQueue.getCurrentOperation(); +C2OperationAck c2OperationAck = new C2OperationAck(); +c2OperationAck.setOperationId(c2Operation.getIdentifier()); +C2OperationState c2OperationState = new C2OperationState(); +OperationState state = OperationState.valueOf(params[0]); +c2OperationState.setState(state); +c2OperationAck.setOperationState(c2OperationState); +c2ClientService.sendAcknowledge(c2OperationAck); +if (state != OperationState.FULLY_APPLIED) { +handleOngoingOperations(optionalOperationQueue); Review Comment: Thanks -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
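Editor's note: the `waitForAcknowledgeFromBootstrap` loop quoted above polls an `ackReceived` flag with `Thread.sleep` and a manually accumulated wait time. Purely as an illustration (not the PR's implementation), the same bounded wait can be expressed with a `java.util.concurrent.CountDownLatch`; the `MAX_WAIT_FOR_BOOTSTRAP_ACK_MS` constant below is a stand-in value:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AckWaitSketch {

    // Stand-in for the PR's MAX_WAIT_FOR_BOOTSTRAP_ACK_MS constant
    static final long MAX_WAIT_FOR_BOOTSTRAP_ACK_MS = 5_000;

    private final CountDownLatch ackLatch = new CountDownLatch(1);

    // Invoked from the bootstrap-communicator thread when the ACK message arrives
    public void onAcknowledge() {
        ackLatch.countDown();
    }

    // Blocks until the ACK arrives or the bound elapses; true means the ACK was received in time
    public boolean waitForAcknowledgeFromBootstrap() {
        try {
            return ackLatch.await(MAX_WAIT_FOR_BOOTSTRAP_ACK_MS, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        AckWaitSketch sketch = new AckWaitSketch();
        sketch.onAcknowledge(); // ACK arrives before the wait starts
        if (!sketch.waitForAcknowledgeFromBootstrap()) throw new AssertionError();
        System.out.println("ack observed without polling");
    }
}
```

The latch version wakes immediately when the ACK lands instead of sleeping out the current one-second slice, and the timeout bookkeeping collapses into the `await` call.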
[GitHub] [nifi] briansolo1985 commented on a diff in pull request #6733: NIFI-10895 Update properties command for MiNiFi C2
briansolo1985 commented on code in PR #6733: URL: https://github.com/apache/nifi/pull/6733#discussion_r1065809117 ## minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/main/java/org/apache/nifi/minifi/c2/C2NifiClientService.java: ## @@ -152,15 +185,94 @@ private C2ClientConfig generateClientConfig(NiFiProperties properties) { } public void start() { -scheduledExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +handleOngoingOperations(requestedOperationDAO.get()); +heartbeatExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +} + +private synchronized void handleOngoingOperations(Optional<OperationQueue> operationQueue) { +LOGGER.info("Handling ongoing operations: {}", operationQueue); +if (operationQueue.isPresent()) { +try { +waitForAcknowledgeFromBootstrap(); + c2ClientService.handleRequestedOperations(operationQueue.get().getRemainingOperations()); +} catch (Exception e) { +LOGGER.error("Failed to process c2 operations queue", e); +c2ClientService.enableHeartbeat(); +} +} else { +c2ClientService.enableHeartbeat(); +} +} + +private void waitForAcknowledgeFromBootstrap() { +LOGGER.info("Waiting for ACK signal from Bootstrap"); +int currentWaitTime = 0; +while(!ackReceived) { +int sleep = 1000; +try { +Thread.sleep(sleep); +} catch (InterruptedException e) { +LOGGER.warn("Thread interrupted while waiting for Acknowledge"); +} +currentWaitTime += sleep; +if (MAX_WAIT_FOR_BOOTSTRAP_ACK_MS <= currentWaitTime) { +LOGGER.warn("Max wait time ({}) exceeded for waiting ack from bootstrap, skipping", MAX_WAIT_FOR_BOOTSTRAP_ACK_MS); +break; +} +} +} + +private void registerOperation(C2Operation c2Operation) { +try { +ackReceived = false; +registerAcknowledgeTimeoutTask(); +String command = c2Operation.getOperation().name() + (c2Operation.getOperand() 
!= null ? "_" + c2Operation.getOperand().name() : ""); +bootstrapCommunicator.sendCommand(command, objectMapper.writeValueAsString(c2Operation)); +} catch (IOException e) { +LOGGER.error("Failed to send operation to bootstrap", e); +throw new UncheckedIOException(e); +} +} + +private void registerAcknowledgeTimeoutTask() { +bootstrapAcknowledgeExecutorService.schedule(() -> { Review Comment: This approach solves the other direction and prevents getting stuck in an endless waiting loop. The other direction is: the ack wait loop times out and MiNiFi continues processing the remaining operations; meanwhile the bootstrap acks back and starts processing the same list of operations. Even the previously omitted synchronized keyword wouldn't prevent this, it would just delay the failure. This is highly unlikely to happen, but if it does it will result in very cryptic behavior. Maybe we should create another state variable `isAckTimedOut` besides `ackReceived` and set them accordingly. Then in the `acknowledgeHandler` method we would check its value, and if the operation has timed out we would simply log that the acknowledge arrived but was dropped because it had already timed out. Wdyt?
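Editor's note: the `isAckTimedOut` idea floated in the comment above can be sketched with two atomic flags. This is a hypothetical illustration of the proposal, not code from the PR, and a real implementation would still need a single lock or state machine to close the remaining window between reading one flag and writing the other; the sketch only names the states and the log-and-drop behavior for a late ACK:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class AckStateSketch {

    private final AtomicBoolean ackReceived = new AtomicBoolean(false);
    private final AtomicBoolean ackTimedOut = new AtomicBoolean(false);

    // Timeout task: claim the timeout only if no ACK has been accepted yet
    public boolean onTimeout() {
        return !ackReceived.get() && ackTimedOut.compareAndSet(false, true);
    }

    // Acknowledge handler: accept the ACK only while the wait has not timed out
    public String onAcknowledge() {
        if (ackTimedOut.get()) {
            // Log-and-drop: the remaining operations were already resumed by the timeout path
            return "DROPPED_LATE_ACK";
        }
        ackReceived.set(true);
        return "ACK_ACCEPTED";
    }

    public static void main(String[] args) {
        AckStateSketch onTime = new AckStateSketch();
        if (!"ACK_ACCEPTED".equals(onTime.onAcknowledge())) throw new AssertionError();
        if (onTime.onTimeout()) throw new AssertionError(); // ACK already in, timeout not claimed

        AckStateSketch late = new AckStateSketch();
        if (!late.onTimeout()) throw new AssertionError();  // timeout fires first
        if (!"DROPPED_LATE_ACK".equals(late.onAcknowledge())) throw new AssertionError();
        System.out.println("states covered");
    }
}
```

With this split, a late ACK is still observed (and can be logged) but is deliberately ignored, so MiNiFi and the bootstrap cannot both start processing the same operation queue.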
[GitHub] [nifi-minifi-cpp] lordgamez commented on pull request #1460: MINIFICPP-1995 Add configuring path for flowfile_checkpoint directory
lordgamez commented on PR #1460: URL: https://github.com/apache/nifi-minifi-cpp/pull/1460#issuecomment-1377203501 > Unfortunately, the code did not compile on Windows after [373dbdd](https://github.com/apache/nifi-minifi-cpp/commit/373dbdd98b16d1a45788c5bb10c926368af0c891), so I have reverted both [c07ee3e](https://github.com/apache/nifi-minifi-cpp/commit/c07ee3e10c54a19820f7b5ac8d2ee63768ed5759) and [373dbdd](https://github.com/apache/nifi-minifi-cpp/commit/373dbdd98b16d1a45788c5bb10c926368af0c891), and added a `.string()` to the log line: [378501d](https://github.com/apache/nifi-minifi-cpp/commit/378501d61d0d0a98c1688f28c7a35161af5e0d3a) Thanks for the update; you are right that it's better to keep it simple at this point.
[GitHub] [nifi-minifi-cpp] fgerlits commented on pull request #1460: MINIFICPP-1995 Add configuring path for flowfile_checkpoint directory
fgerlits commented on PR #1460: URL: https://github.com/apache/nifi-minifi-cpp/pull/1460#issuecomment-1377200086 Unfortunately, the code did not compile on Windows after 373dbdd98b16d1a45788c5bb10c926368af0c891, so I have reverted both c07ee3e10c54a19820f7b5ac8d2ee63768ed5759 and 373dbdd98b16d1a45788c5bb10c926368af0c891, and added a `.string()` to the log line: 378501d61d0d0a98c1688f28c7a35161af5e0d3a
[GitHub] [nifi] ferencerdei commented on a diff in pull request #6733: NIFI-10895 Update properties command for MiNiFi C2
ferencerdei commented on code in PR #6733: URL: https://github.com/apache/nifi/pull/6733#discussion_r1062251850 ## c2/c2-client-bundle/c2-client-service/src/test/java/org/apache/nifi/c2/client/service/operation/UpdatePropertiesOperationHandlerTest.java: ## @@ -0,0 +1,128 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.c2.client.service.operation; + +import static org.apache.nifi.c2.protocol.api.OperandType.PROPERTIES; +import static org.apache.nifi.c2.protocol.api.OperationType.UPDATE; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.mockito.Mockito.when; + +import java.util.Collections; +import java.util.HashMap; +import java.util.Map; +import java.util.function.Function; +import org.apache.nifi.c2.protocol.api.C2Operation; +import org.apache.nifi.c2.protocol.api.C2OperationAck; +import org.apache.nifi.c2.protocol.api.C2OperationState; +import org.apache.nifi.c2.protocol.api.C2OperationState.OperationState; +import org.junit.jupiter.api.Test; +import org.junit.jupiter.api.extension.ExtendWith; +import org.mockito.InjectMocks; +import org.mockito.Mock; +import org.mockito.junit.jupiter.MockitoExtension; + +@ExtendWith(MockitoExtension.class) +public class UpdatePropertiesOperationHandlerTest { + +protected static final String ID = "id"; Review Comment: no reason, changed them. ## c2/c2-client-bundle/c2-client-service/src/main/java/org/apache/nifi/c2/client/service/C2ClientService.java: ## @@ -59,11 +111,42 @@ private void processResponse(C2HeartbeatResponse response) { } } -private void handleRequestedOperations(List requestedOperations) { -for (C2Operation requestedOperation : requestedOperations) { -operationService.handleOperation(requestedOperation) -.ifPresent(client::acknowledgeOperation); +private boolean requiresRestart(C2OperationHandler c2OperationHandler, C2OperationAck c2OperationAck) { +return c2OperationHandler.requiresRestart() +&& !Optional.ofNullable(c2OperationAck) Review Comment: isEmpty is added only in java11, I'm not sure if we are ready to use it, but I moved it to a separate method. 
## minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/main/java/org/apache/nifi/minifi/c2/C2NiFiProperties.java: ## @@ -36,6 +36,8 @@ public class C2NiFiProperties { public static final String C2_CONNECTION_TIMEOUT = C2_PREFIX + "rest.connectionTimeout"; public static final String C2_READ_TIMEOUT = C2_PREFIX + "rest.readTimeout"; public static final String C2_CALL_TIMEOUT = C2_PREFIX + "rest.callTimeout"; +public static final String C2_MAX_IDLE_CONNECTIONS = C2_PREFIX + "rest.maxIdleConnections"; Review Comment: Merged them. ## minifi/minifi-nar-bundles/minifi-framework-bundle/minifi-framework/minifi-framework-core/src/main/java/org/apache/nifi/minifi/c2/C2NifiClientService.java: ## @@ -152,15 +185,94 @@ private C2ClientConfig generateClientConfig(NiFiProperties properties) { } public void start() { -scheduledExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +handleOngoingOperations(requestedOperationDAO.get()); +heartbeatExecutorService.scheduleAtFixedRate(() -> c2ClientService.sendHeartbeat(generateRuntimeInfo()), INITIAL_DELAY, heartbeatPeriod, TimeUnit.MILLISECONDS); +} + +private synchronized void handleOngoingOperations(Optional operationQueue) { +LOGGER.info("Handling ongoing operations: {}", operationQueue); +if (operationQueue.isPresent()) { +try { +waitForAcknowledgeFromBootstrap(); + c2ClientService.handleRequestedOperations(operationQueue.get().getRemainingOperations()); +} catch (Exception e) { +LOGGER.error("Failed to process c2 operations queue", e); +c2ClientService.enableHeartbeat(); +} +} else { +
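Editor's note on the `isEmpty` remark in the review replies above: `Optional.isEmpty()` only exists since Java 11, so on a Java 8 baseline the negation of `isPresent()` is the portable spelling, which is why the reviewer moved it into a separate method instead. A minimal sketch:

```java
import java.util.Optional;

public class OptionalIsEmptyCompat {

    // Java 8-portable replacement for Optional.isEmpty(), which was only added in Java 11
    public static boolean isEmptyCompat(Optional<?> optional) {
        return !optional.isPresent();
    }

    public static void main(String[] args) {
        if (!isEmptyCompat(Optional.empty())) throw new AssertionError();
        if (isEmptyCompat(Optional.of("FULLY_APPLIED"))) throw new AssertionError();
        System.out.println("compat check passed");
    }
}
```

Extracting the check into a named helper keeps the call sites readable now and makes the later migration to `isEmpty()` a one-line change.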
[GitHub] [nifi-minifi-cpp] lordgamez opened a new pull request, #1488: MINIFICPP-2023 Skip brew update to avoid python link failure
lordgamez opened a new pull request, #1488: URL: https://github.com/apache/nifi-minifi-cpp/pull/1488 The issue https://github.com/actions/setup-python/issues/577 reappeared with the previous workaround. It seems the best workaround at the moment is to skip the `brew update` until the issue is fixed.

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically main)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[jira] [Reopened] (MINIFICPP-2023) MacOS and docker github actions fail with environmental issues
[ https://issues.apache.org/jira/browse/MINIFICPP-2023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi reopened MINIFICPP-2023: -- Mac issue reappeared > MacOS and docker github actions fail with environmental issues > -- > > Key: MINIFICPP-2023 > URL: https://issues.apache.org/jira/browse/MINIFICPP-2023 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Major > Fix For: 0.14.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > MacOs environment failure: > Error: The `brew link` step did not complete successfully > The formula built, but is not symlinked into /usr/local > Could not symlink bin/2to3-3.11 > Target /usr/local/bin/2to3-3.11 > already exists. You may want to remove it: > rm '/usr/local/bin/2to3-3.11' > To force the link and overwrite all conflicting files: > brew link --overwrite python@3.11 > To list all files that would be deleted: > brew link --overwrite --dry-run python@3.11 > > > Docker test environment failure: > Exception AttributeError: module 'lib' has no attribute > 'OpenSSL_add_all_algorithms' > Traceback (most recent call last): > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/bin/behave", > line 8, in <module> > sys.exit(main()) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/__main__.py", > line 183, in main > return run_behave(config) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/__main__.py", > line 127, in run_behave > failed = runner.run() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 804, in run > return self.run_with_paths() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 809, in run_with_paths > 
self.load_step_definitions() > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner.py", > line 796, in load_step_definitions > load_step_modules(step_paths) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner_util.py", > line 412, in load_step_modules > exec_file(os.path.join(path, name), step_module_globals) > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/behave/runner_util.py", > line 386, in exec_file > exec(code, globals_, locals_) > File "steps/steps.py", line 19, in <module> > from minifi.core.SSL_cert_utils import make_ca, make_cert, dump_certificate, > dump_privatekey > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/docker/test/integration/minifi/core/SSL_cert_utils.py", > line 22, in <module> > from OpenSSL import crypto > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/OpenSSL/__init__.py", > line 8, in <module> > from OpenSSL import crypto, SSL > File > "/home/runner/work/nifi-minifi-cpp/nifi-minifi-cpp/build/test-env-py3/lib/python3.8/site-packages/OpenSSL/crypto.py", > line 3279, in <module> > _lib.OpenSSL_add_all_algorithms() -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1487: MINIFICPP-2025 Eliminate cmake CMP0135 warnings
martinzink commented on code in PR #1487: URL: https://github.com/apache/nifi-minifi-cpp/pull/1487#discussion_r1065465776 ## cmake/BundledLibXml2.cmake: ## @@ -38,6 +38,10 @@ function(use_bundled_libxml2 SOURCE_DIR BINARY_DIR) "-DCMAKE_INSTALL_PREFIX=${BINARY_DIR}/thirdparty/libxml2-install") endif() +if (CMAKE_VERSION VERSION_GREATER_EQUAL 3.24) +cmake_policy(SET CMP0135 OLD) # Restore the timestamps from the archive https://gitlab.kitware.com/cmake/cmake/-/issues/24003 +endif() Review Comment: Yeah that was my original intent, but the latest cmake on centos7 doesn't have this feature, so that would only work if I duplicated the whole ExternalProject_Add into an `if (CMAKE_VERSION VERSION_GREATER_EQUAL 3.24)` statement. I also tried to put "DOWNLOAD_EXTRACT_TIMESTAMP ON" or "" (based on the cmake version) into a variable and pass that into the ExternalProject_Add, but I wasn't able to make that work.