[jira] [Commented] (NIFI-10048) Apache Nifi Web Server keeps failing to start
[ https://issues.apache.org/jira/browse/NIFI-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558299#comment-17558299 ] Hadi commented on NIFI-10048: - [~moganarich] You are right, but I advise using the toolkit to fix the sensitive keys and recreate flow.xml.gz > Apache Nifi Web Server keeps failing to start > -- > > Key: NIFI-10048 > URL: https://issues.apache.org/jira/browse/NIFI-10048 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.16.1 >Reporter: Andreas Adamides >Priority: Critical > > The following exception keeps occurring on the Apache NiFi Cluster, this is > from the app logs: > {code:java} > 2022-05-24 15:13:03,669 INFO [main] o.a.n.c.s.VersionedFlowSynchronizer > Synching FlowController with proposed flow: Controller Already Synchronized = > false > 2022-05-24 15:13:04,260 WARN [main] org.apache.nifi.web.server.JettyServer > Failed to start web server... shutting down. > java.lang.NullPointerException: null > at > org.apache.nifi.registry.flow.diff.StandardFlowDifference.hashCode(StandardFlowDifference.java:92) > at java.util.HashMap.hash(HashMap.java:340) > at java.util.HashMap.put(HashMap.java:613) > at java.util.HashSet.add(HashSet.java:220) > at > org.apache.nifi.registry.flow.diff.StandardFlowComparator.addIfDifferent(StandardFlowComparator.java:562) > at > org.apache.nifi.registry.flow.diff.StandardFlowComparator.compare(StandardFlowComparator.java:447) > at > org.apache.nifi.registry.flow.diff.StandardFlowComparator.compare(StandardFlowComparator.java:92) > at > org.apache.nifi.registry.flow.diff.StandardFlowComparator.compare(StandardFlowComparator.java:77) > at > org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.compareFlows(VersionedFlowSynchronizer.java:383) > at > org.apache.nifi.controller.serialization.VersionedFlowSynchronizer.sync(VersionedFlowSynchronizer.java:165) > at > org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:43) > at > 
org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1524) > at > org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104) > at > org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:815) > at > org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:457) > at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1086) > at org.apache.nifi.NiFi.(NiFi.java:170) > at org.apache.nifi.NiFi.(NiFi.java:82) > at org.apache.nifi.NiFi.main(NiFi.java:330) > 2022-05-24 15:13:04,261 INFO [Thread-1] org.apache.nifi.NiFi Application > Server shutdown started{code} > There is not a Nifi Registry in the setup, I have tried to tear everything > down to freshly install Nifi Cluster, but it keeps failing to start. This has > been part of a helm chart/k8 setup. This issue occurs in 2 nodes of a 3-node > Nifi cluster. I manage to fix it when downgrading to 1 node overall, which > does not make a lot of sense. > > Can anyone suggest what I am doing wrong? > -- This message was sent by Atlassian Jira (v8.20.7#820007)
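The stack trace above (HashSet.add -> HashMap.hash -> hashCode -> NPE) is the classic symptom of a hash-based collection hashing an object whose hashCode dereferences a null field. A minimal sketch illustrating the failure mode, using a hypothetical stand-in class rather than the actual StandardFlowDifference:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for a flow-difference object: hashCode dereferences
// a field without a null check, so adding it to a HashSet throws an NPE.
class Difference {
    final String componentId; // may be null for some difference types

    Difference(String componentId) {
        this.componentId = componentId;
    }

    @Override
    public int hashCode() {
        // NPE here when componentId is null, mirroring the reported trace
        return componentId.hashCode();
    }
}

public class NullHashCodeDemo {
    public static void main(String[] args) {
        Set<Difference> differences = new HashSet<>();
        differences.add(new Difference("processor-1")); // fine

        boolean npeThrown = false;
        try {
            differences.add(new Difference(null)); // HashMap.hash -> hashCode -> NPE
        } catch (NullPointerException e) {
            npeThrown = true;
        }
        System.out.println(npeThrown); // prints "true"
    }
}
```

A null-safe hashCode (e.g. `java.util.Objects.hashCode(componentId)`) avoids the crash, which is the kind of fix such a report typically leads to.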
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #5905: NIFI-9817 Add a Validator for the PutCloudWatchMetric Processor's Unit Field
exceptionfactory commented on code in PR #5905: URL: https://github.com/apache/nifi/pull/5905#discussion_r905603207 ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/cloudwatch/PutCloudWatchMetric.java: ## @@ -70,6 +70,33 @@ public static final Set relationships = Collections.unmodifiableSet( new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE))); +private static final Set units = Collections.unmodifiableSet( +new HashSet<>(Arrays.asList( +"Seconds", "Microseconds", "Milliseconds", "Bytes", +"Kilobytes", "Megabytes", "Gigabytes", "Terabytes", +"Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", +"Percent", "Count", "Bytes/Second", "Kilobytes/Second", +"Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", +"Bits/Second", "Kilobits/Second", "Megabits/Second", +"Gigabits/Second", "Terabits/Second", "Count/Second", +"None", ""))); Review Comment: Thanks for the reference @patalwell! Leveraging the enum values sounds like the optimal way forward. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] patalwell commented on a diff in pull request #5905: NIFI-9817 Add a Validator for the PutCloudWatchMetric Processor's Unit Field
patalwell commented on code in PR #5905: URL: https://github.com/apache/nifi/pull/5905#discussion_r905562979 ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/cloudwatch/PutCloudWatchMetric.java: ## @@ -70,6 +70,33 @@ public static final Set relationships = Collections.unmodifiableSet( new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE))); +private static final Set units = Collections.unmodifiableSet( +new HashSet<>(Arrays.asList( +"Seconds", "Microseconds", "Milliseconds", "Bytes", +"Kilobytes", "Megabytes", "Gigabytes", "Terabytes", +"Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", +"Percent", "Count", "Bytes/Second", "Kilobytes/Second", +"Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", +"Bits/Second", "Kilobits/Second", "Megabits/Second", +"Gigabits/Second", "Terabits/Second", "Count/Second", +"None", ""))); Review Comment: It's a java enum -> https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/cloudwatch/model/StandardUnit.html we could potentially leverage the SDK and call the types from the software.amazon.awssdk.services.cloudwatch.model package to prevent breaking changes.
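Deriving the allowed-values set from an enum, rather than hard-coding the strings, can be sketched as follows. This uses a local stand-in enum with only a few values, since the real software.amazon.awssdk.services.cloudwatch.model.StandardUnit type would require the SDK dependency:

```java
import java.util.Collections;
import java.util.EnumSet;
import java.util.Set;
import java.util.stream.Collectors;

public class UnitValidatorSketch {

    // Hypothetical stand-in for the AWS SDK's StandardUnit enum;
    // the real enum carries the full CloudWatch unit list.
    enum StandardUnit {
        SECONDS("Seconds"),
        BYTES("Bytes"),
        PERCENT("Percent"),
        COUNT("Count"),
        NONE("None");

        private final String value;

        StandardUnit(String value) {
            this.value = value;
        }

        @Override
        public String toString() {
            return value;
        }
    }

    // Build the validation set from the enum so that new units added in the
    // SDK are picked up automatically instead of maintaining a parallel
    // hard-coded string list.
    static final Set<String> UNITS = Collections.unmodifiableSet(
            EnumSet.allOf(StandardUnit.class).stream()
                    .map(StandardUnit::toString)
                    .collect(Collectors.toSet()));

    public static void main(String[] args) {
        System.out.println(UNITS.contains("Percent"));  // prints "true"
        System.out.println(UNITS.contains("Furlongs")); // prints "false"
    }
}
```

The same shape applies when validating the processor's Unit property value against the set.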
[GitHub] [nifi] thenatog commented on pull request #6154: NIFI-10070 - Updated merging of status DTO for ControllerService and …
thenatog commented on PR #6154: URL: https://github.com/apache/nifi/pull/6154#issuecomment-1164976636 What I'm not 100% certain about is whether all the merging is handled correctly. I wasn't sure about the ControllerServiceStatusDTO merging, whether that needed to take into account active threads. It appears that the thread count is not tracked in controller service status, as this was found to be null in my cluster testing. I would like to know whether this should in fact be set, and where I should set it to track the active threads for the status. Right now I'm just making sure that if any node is disabling, the client entity returned will be disabling.
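The "any node disabling wins" rule described above can be sketched as a simple state merge. The state names here are hypothetical stand-ins; the actual merging lives in NiFi's cluster response merger classes:

```java
import java.util.List;

public class ControllerServiceStateMergeSketch {

    // Hypothetical per-node controller service run states.
    enum State { ENABLED, ENABLING, DISABLED, DISABLING }

    // If any node reports DISABLING, the merged client-facing state is
    // DISABLING; otherwise fall back to the first node's reported state.
    static State merge(List<State> nodeStates) {
        return nodeStates.contains(State.DISABLING)
                ? State.DISABLING
                : nodeStates.get(0);
    }

    public static void main(String[] args) {
        System.out.println(merge(List.of(State.ENABLED, State.DISABLING, State.ENABLED)));
        System.out.println(merge(List.of(State.DISABLED, State.DISABLED)));
    }
}
```

A fuller merge would also aggregate active thread counts across nodes, which is exactly the open question in the comment above.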
[GitHub] [nifi] thenatog opened a new pull request, #6154: NIFI-10070 - Updated merging of status DTO for ControllerService and …
thenatog opened a new pull request, #6154: URL: https://github.com/apache/nifi/pull/6154 …ReportingTask entities. # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [x] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [x] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Commented] (NIFI-9810) RocksDB does not work on ARM
[ https://issues.apache.org/jira/browse/NIFI-9810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558233#comment-17558233 ] Kevin Doran commented on NIFI-9810: --- After discussing with [~markap14], RocksDB will be moved to its own NAR and deprecated, with a plan to remove it in NiFi 2.0 > RocksDB does not work on ARM > > > Key: NIFI-9810 > URL: https://issues.apache.org/jira/browse/NIFI-9810 > Project: Apache NiFi > Issue Type: Sub-task >Reporter: Kevin Doran >Priority: Minor > > {noformat} > [INFO] -< org.apache.nifi:nifi-rocksdb-utils > >- > [INFO] Building nifi-rocksdb-utils 1.16.0-SNAPSHOT > [35/642] > [INFO] [ jar > ]- > [INFO] > [INFO] --- maven-enforcer-plugin:3.0.0:enforce (enforce-maven-version) @ > nifi-rocksdb-utils --- > [INFO] > [INFO] --- maven-enforcer-plugin:3.0.0:enforce (enforce-java-version) @ > nifi-rocksdb-utils --- > [INFO] > [INFO] --- maven-remote-resources-plugin:1.7.0:process > (process-resource-bundles) @ nifi-rocksdb-utils --- > [INFO] Preparing remote bundle org.apache:apache-jar-resource-bundle:1.4 > [INFO] Copying 3 resources from 1 bundle. > [INFO] > [INFO] --- maven-resources-plugin:3.2.0:resources (default-resources) @ > nifi-rocksdb-utils --- > [INFO] Using 'UTF-8' encoding to copy filtered resources. > [INFO] Using 'UTF-8' encoding to copy filtered properties files. > [INFO] skip non existing resourceDirectory > /Users/kdoran/dev/code/nifi/nifi-commons/nifi-rocksdb-utils/src/main/resources > [INFO] Copying 3 resources > [INFO] > [INFO] --- maven-compiler-plugin:3.9.0:compile (default-compile) @ > nifi-rocksdb-utils --- > [INFO] Nothing to compile - all classes are up to date > [INFO] > [INFO] --- maven-resources-plugin:3.2.0:testResources (default-testResources) > @ nifi-rocksdb-utils --- > [INFO] Using 'UTF-8' encoding to copy filtered resources. > [INFO] Using 'UTF-8' encoding to copy filtered properties files. 
> [INFO] Copying 1 resource > [INFO] Copying 3 resources > [INFO] > [INFO] --- maven-compiler-plugin:3.9.0:testCompile (default-testCompile) @ > nifi-rocksdb-utils --- > [INFO] Nothing to compile - all classes are up to date > [INFO] > [INFO] --- maven-compiler-plugin:3.9.0:testCompile (groovy-tests) @ > nifi-rocksdb-utils --- > [INFO] Changes detected - recompiling the module! > [INFO] Nothing to compile - all classes are up to date > [INFO] > [INFO] --- maven-surefire-plugin:3.0.0-M5:test (default-test) @ > nifi-rocksdb-utils --- > [INFO] > [INFO] --- > [INFO] T E S T S > [INFO] --- > [INFO] Running org.apache.nifi.rocksdb.TestRocksDBMetronome > [ERROR] Tests run: 10, Failures: 2, Errors: 7, Skipped: 0, Time elapsed: > 0.097 s <<< FAILURE! - in org.apache.nifi.rocksdb.TestRocksDBMetronome > [ERROR] org.apache.nifi.rocksdb.TestRocksDBMetronome.testColumnFamilies(Path) > Time elapsed: 0.058 s <<< ERROR! > java.lang.UnsatisfiedLinkError: > /private/var/folders/dj/1c85sd0d6dvcp1fltmwr5nl4gn/T/librocksdbjni1540031708884427750.jnilib: > > dlopen(/private/var/folders/dj/1c85sd0d6dvcp1fltmwr5nl4gn/T/librocksdbjni1540031708884427750.jnilib, > 0x0001): tried: > '/private/var/folders/dj/1c85sd0d6dvcp1fltmwr5nl4gn/T/librocksdbjni1540031708884427750.jnilib' > (mach-o file, but is an incompatible architecture (have 'x86_64', need > 'arm64e')), '/usr/lib/librocksdbjni1540031708884427750.jnilib' (no such file) > at > org.apache.nifi.rocksdb.TestRocksDBMetronome.testColumnFamilies(TestRocksDBMetronome.java:170) > [ERROR] org.apache.nifi.rocksdb.TestRocksDBMetronome.testWaitForSync(Path) > Time elapsed: 0.003 s <<< ERROR! > java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.RocksDB > at > org.apache.nifi.rocksdb.TestRocksDBMetronome.testWaitForSync(TestRocksDBMetronome.java:267) > [ERROR] > org.apache.nifi.rocksdb.TestRocksDBMetronome.testWaitForSyncWithValue(Path) > Time elapsed: 0.001 s <<< ERROR! 
> java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.RocksDB > at > org.apache.nifi.rocksdb.TestRocksDBMetronome.testWaitForSyncWithValue(TestRocksDBMetronome.java:299) > [ERROR] > org.apache.nifi.rocksdb.TestRocksDBMetronome.testCounterIncrement(Path) Time > elapsed: 0.001 s <<< ERROR! > java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.RocksDB > at > org.apache.nifi.rocksdb.TestRocksDBMetronome.testCounterIncrement(TestRocksDBMetronome.java:247) > [ERROR] org.apache.nifi.rocksdb.TestRocksDBMetronome.testPutGetDelete(Path) > Time elapsed: 0.001 s <<< ERROR! > java.lang.NoClassDefFoundError: Could not initialize class org.rocksdb.RocksDB > at >
[jira] [Resolved] (NIFI-10159) Move internal interfaces from c2-client-api
[ https://issues.apache.org/jira/browse/NIFI-10159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-10159. - Fix Version/s: 1.17.0 Resolution: Fixed > Move internal interfaces from c2-client-api > > > Key: NIFI-10159 > URL: https://issues.apache.org/jira/browse/NIFI-10159 > Project: Apache NiFi > Issue Type: Improvement > Components: C2, MiNiFi >Reporter: Csaba Bejan >Assignee: Csaba Bejan >Priority: Minor > Fix For: 1.17.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Move internal interface definitions from c2-client-api to a more specific / > internal module.
[jira] [Commented] (NIFI-10159) Move internal interfaces from c2-client-api
[ https://issues.apache.org/jira/browse/NIFI-10159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558231#comment-17558231 ] ASF subversion and git services commented on NIFI-10159: Commit 0e52ccf9e9822627640e0a71914c26ff2b30894b in nifi's branch refs/heads/main from Csaba Bejan [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0e52ccf9e9 ] NIFI-10159 Move internal interfaces from c2-client-api This closes #6152 Signed-off-by: David Handermann > Move internal interfaces from c2-client-api > > > Key: NIFI-10159 > URL: https://issues.apache.org/jira/browse/NIFI-10159 > Project: Apache NiFi > Issue Type: Improvement > Components: C2, MiNiFi >Reporter: Csaba Bejan >Assignee: Csaba Bejan >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Move internal interface definitions from c2-client-api to a more specific / > internal module.
[GitHub] [nifi] exceptionfactory closed pull request #6152: NIFI-10159 Move internal interfaces from c2-client-api
exceptionfactory closed pull request #6152: NIFI-10159 Move internal interfaces from c2-client-api URL: https://github.com/apache/nifi/pull/6152
[jira] [Commented] (NIFI-8968) Improve throughput performance for InvokeHTTP
[ https://issues.apache.org/jira/browse/NIFI-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558228#comment-17558228 ] David Handermann commented on NIFI-8968: Thanks for documenting these observations and providing example flows [~markbean]. As described in NIFI-10163, it appears that the framework was not properly tracking bytes read in StandardProcessSession.exportTo(), which explains why those statistics do not appear in the Status History. After making those adjustments, throughput still remains higher in {{PostHTTP}} in the default configuration. Disabling the {{Send as FlowFile}} property in {{PostHTTP}} showed performance similar to {{InvokeHTTP}}. The FlowFile Packaging enabled in {{PostHTTP}} causes the processor to batch multiple FlowFiles and stream them over a single HTTP request, instead of sending each FlowFile in a separate HTTP request. This behavior is specific to {{PostHTTP}} when paired with {{ListenHTTP}} since {{ListenHTTP}} is able to read multiple FlowFiles from the stream. {{InvokeHTTP}} is already a complex Processor, so attempting to implement a similar batch stream approach over a single HTTP request could be challenging. > Improve throughput performance for InvokeHTTP > - > > Key: NIFI-8968 > URL: https://issues.apache.org/jira/browse/NIFI-8968 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.14.0 >Reporter: Mark Bean >Priority: Major > Attachments: PostHTTP_vs_InvokeHTTP.json, PostHTTP_vs_InvokeHTTP.xml > > > InvokeHTTP is the preferred processor to use over the deprecated PostHTTP. > However, PostHTTP outperforms InvokeHTTP (at least in POST mode). A template > and a JSON file have been attached to this ticket for benchmarking the two > processors. Using this flow, PostHTTP was observed to have a throughput > approximately 5 times greater than InvokeHTTP. 
> In addition, it was noted that InvokeHTTP had approximately 5 times as many > tasks and 5 times the task duration for a given 5 minute stats window. And, > the statistics of Bytes Read and Bytes Transferred remain at zero for > InvokeHTTP; the accuracy of these statistics also needs to be addressed.
[jira] [Updated] (NIFI-10163) StandardProcessSession.exportTo() not tracking bytes read
[ https://issues.apache.org/jira/browse/NIFI-10163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10163: Status: Patch Available (was: Open) > StandardProcessSession.exportTo() not tracking bytes read > - > > Key: NIFI-10163 > URL: https://issues.apache.org/jira/browse/NIFI-10163 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.16.3 >Reporter: David Handermann >Assignee: David Handermann >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > The {{StandardProcessSession.exportTo(FlowFile, OutputStream)}} method does > not increment the bytes read or bytes written properties after processing > completes. Although the method uses a {{ByteCountingInputStream}}, the method > does not use the accumulated bytes read. > As a result of this issue, Processors that use this {{exportTo()}} method do > not show any information in the {{Bytes Read}} and {{Bytes Transferred}} > sections of the Processor Status History. > This impacts {{InvokeHTTP}} and {{HandleHttpResponse}} among others.
[GitHub] [nifi] exceptionfactory opened a new pull request, #6153: NIFI-10163 Correct StandardProcessSession.exportTo() byte counting
exceptionfactory opened a new pull request, #6153: URL: https://github.com/apache/nifi/pull/6153 # Summary [NIFI-10163](https://issues.apache.org/jira/browse/NIFI-10163) Corrects counting of bytes read and bytes written in `StandardProcessSession.exportTo(FlowFile, OutputStream)` to reflect the number of bytes processed through the `ByteCountingInputStream`, which wraps the source content stream. This approach follows a strategy similar to the `StandardProcessSession.exportTo(FlowFile, Path)` method, which increments bytes read and bytes written based on the number of bytes copied. This change corrects tracking of `Bytes Read` and `Bytes Transferred` in the `Status History` for Processors such as `InvokeHTTP` and `HandleHttpResponse`. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Created] (NIFI-10163) StandardProcessSession.exportTo() not tracking bytes read
David Handermann created NIFI-10163: --- Summary: StandardProcessSession.exportTo() not tracking bytes read Key: NIFI-10163 URL: https://issues.apache.org/jira/browse/NIFI-10163 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.16.3 Reporter: David Handermann Assignee: David Handermann The {{StandardProcessSession.exportTo(FlowFile, OutputStream)}} method does not increment the bytes read or bytes written properties after processing completes. Although the method uses a {{ByteCountingInputStream}}, the method does not use the accumulated bytes read. As a result of this issue, Processors that use this {{exportTo()}} method do not show any information in the {{Bytes Read}} and {{Bytes Transferred}} sections of the Processor Status History. This impacts {{InvokeHTTP}} and {{HandleHttpResponse}} among others.
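The fix described above amounts to wrapping the content stream in a counting stream and applying the accumulated count after the copy. A minimal sketch, using simplified stand-ins for NiFi's ByteCountingInputStream and the session's statistics counters:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ExportToCountingSketch {

    // Simplified stand-in for NiFi's ByteCountingInputStream: counts every
    // byte that passes through the wrapped stream.
    static class CountingInputStream extends FilterInputStream {
        long bytesRead;

        CountingInputStream(InputStream in) {
            super(in);
        }

        @Override
        public int read() throws IOException {
            int b = super.read();
            if (b >= 0) bytesRead++;
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            int n = super.read(buf, off, len);
            if (n > 0) bytesRead += n;
            return n;
        }
    }

    long bytesRead;
    long bytesWritten;

    // Sketch of exportTo(FlowFile, OutputStream): the reported bug was that
    // content was copied but the accumulated count was never applied to the
    // session statistics, leaving Bytes Read/Transferred at zero.
    void exportTo(InputStream content, OutputStream destination) throws IOException {
        CountingInputStream counting = new CountingInputStream(content);
        counting.transferTo(destination);
        bytesRead += counting.bytesRead;    // the missing increments
        bytesWritten += counting.bytesRead;
    }

    public static void main(String[] args) throws IOException {
        ExportToCountingSketch session = new ExportToCountingSketch();
        ByteArrayOutputStream destination = new ByteArrayOutputStream();
        session.exportTo(new ByteArrayInputStream("flowfile content".getBytes()), destination);
        System.out.println(session.bytesRead); // prints "16"
    }
}
```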
[GitHub] [nifi] pkelly-nifi commented on pull request #5874: NIFI-9803: Add support for listing versioned object tags
pkelly-nifi commented on PR #5874: URL: https://github.com/apache/nifi/pull/5874#issuecomment-1164816605 Thank you for the feedback, @exceptionfactory. This change does alter behavior, but I think it brings it more in line with expectations and how other S3 clients behave. When fetching a specific version's tags, other clients -- including the AWS CLI -- give the specific version's tags rather than the latest version's tags. As an example flow, if you do: GetFile -> PutS3Object -> TagS3Object (populating the Version ID property) ...you'll only ever get the latest version's tags for all versions of the object when ListS3 runs. You won't retrieve the tags you set for each specific version. I had been thinking of this more as a bug where the wrong tags are being returned, but I'd be happy to add a property to control which set of tags is returned if you think it is necessary for backwards compatibility, even if it is unexpected behavior. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-10161) Add Gzip Request Content-Encoding in InvokeHTTP and ListenHTTP
[ https://issues.apache.org/jira/browse/NIFI-10161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10161: Status: Patch Available (was: Open) > Add Gzip Request Content-Encoding in InvokeHTTP and ListenHTTP > -- > > Key: NIFI-10161 > URL: https://issues.apache.org/jira/browse/NIFI-10161 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Time Spent: 10m > Remaining Estimate: 0h > > [RFC 7231 Section > 3.1.2.2|https://datatracker.ietf.org/doc/html/rfc7231#section-3.1.2.2] > describes the {{Content-Encoding}} header as a standard method of indicating > compression applied to content information. HTTP servers use this header to > indicate response compression, and some servers can support receiving HTTP > requests compressed using Gzip. > The {{ListenHTTP}} Processor supports receiving Gzip-compressed requests > using a non-standard header named {{{}flowfile-gzipped{}}}, which the > deprecated {{PostHTTP}} Processor applies when enabling the {{Compression > Level}} property. > The {{ListenHTTP}} Processor should be updated to support handling Gzip > request compression using the standard {{Content-Encoding}} header, and the > {{InvokeHTTP}} Processor should be updated to support enabling Gzip request compression.
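The sender/receiver halves of the described behavior can be sketched with the standard library's gzip streams. This is an illustration of the encoding itself, not NiFi's OkHttp-based InvokeHTTP internals; the sender would pair the compressed bytes with the standard `Content-Encoding: gzip` header so the receiver knows to decompress:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRequestBodySketch {

    // Sender side: compress the request body. Per RFC 7231 Section 3.1.2.2,
    // the request would carry "Content-Encoding: gzip" rather than the
    // non-standard "flowfile-gzipped" header used by the deprecated PostHTTP.
    static byte[] gzip(byte[] body) throws Exception {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(body);
        }
        return out.toByteArray();
    }

    // Receiver side (e.g. a ListenHTTP-style endpoint): decompress the body
    // when the Content-Encoding header indicates gzip.
    static byte[] gunzip(byte[] compressed) throws Exception {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            return gz.readAllBytes();
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] body = "flowfile content".getBytes(StandardCharsets.UTF_8);
        byte[] wire = gzip(body);
        System.out.println(new String(gunzip(wire), StandardCharsets.UTF_8)); // prints "flowfile content"
    }
}
```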
[GitHub] [nifi] bejancsaba opened a new pull request, #6152: NIFI-10159 Move internal interfaces from c2-client-api
bejancsaba opened a new pull request, #6152: URL: https://github.com/apache/nifi/pull/6152 # Summary [NIFI-10159](https://issues.apache.org/jira/browse/NIFI-10159) Move internal interfaces from c2-client-api # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [x] Build completed using `mvn clean install -P contrib-check` - [x] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[GitHub] [nifi] NissimShiman commented on pull request #6035: NIFI-9440 Allow Controller Services to have configurable Bulletins
NissimShiman commented on PR #6035: URL: https://github.com/apache/nifi/pull/6035#issuecomment-1164753180 @mattyb149 I was wondering if you could take a look at this again. I have a (hopefully less complex) example at the end of the last comment where the ElasticSearchClientServiceImpl service can be set up to generate ERROR bulletins. I just noticed that bulletins can be delayed on the main NiFi graph, but if a refresh is done they should be seen immediately from the Bulletins icon (right below the Global menu). They can also be seen via: Global Menu -> Bulletin Board
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
szaszm commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r905313631 ## METRICS.md: ## @@ -0,0 +1,155 @@ + + +# Apache NiFi - MiNiFi - C++ Metrics Readme. + + +This readme defines the metrics published by Apache NiFi. All options defined are located in minifi.properties. + +## Table of Contents + +- [Description](#description) +- [Configuration](#configuration) +- [Metrics](#metrics) + +## Description + +Apache NiFi MiNiFi C++ can communicate metrics about the agent's status, that can be a system level or component level metric. +These metrics are exposed through the agent implemented metric publishers that can be configured in the minifi.properties. +Aside from the publisher exposed metrics, metrics are also sent through C2 protocol of which there is more information in the +[C2 documentation](C2.md#metrics). + +## Configuration + +To configure the a metrics publisher first we have to set which publisher class should be used: + + # in minifi.properties + + nifi.metrics.publisher.class=PrometheusMetricsPublisher + +Currently PrometheusMetricsPublisher is the only available publisher in MiNiFi C++ which publishes metrics to a Prometheus server. +To use the publisher a port should also be configured where the metrics will be available to be scraped through: + + # in minifi.properties + + nifi.metrics.publisher.PrometheusMetricsPublisher.port=9936 + +The last option defines which metric classes should be exposed through the metrics publisher in configured with a comma separated value: + + # in minifi.properties + + nifi.metrics.publisher.metrics=QueueMetrics,RepositoryMetrics,GetFileMetrics,DeviceInfoNode,FlowInformation + +## Metrics + +The following section defines the currently available metrics to be published by the MiNiFi C++ agent. + +NOTE: In Prometheus all metrics are extended with a `minifi_` prefix to mark the domain of the metric. 
For example the `connection_name` metric is published as `minifi_connection_name` in Prometheus. + +### QueueMetrics + +QueueMetrics is a system level metric that reports queue metrics for every connection in the flow. + +| Metric name | Labels | Description| +|--||| +| queue_data_size | metric_class, connection_uuid, connection_name | Max queue size to apply back pressure | +| queue_data_size_max | metric_class, connection_uuid, connection_name | Max queue data size to apply back pressure | +| queue_size | metric_class, connection_uuid, connection_name | Current queue size | +| queue_size_max | metric_class, connection_uuid, connection_name | Current queue data size| Review Comment: I realize it would be hard to have metric sink-specific names without some type metadata for metrics, and I wouldn't want to touch the C2 protocol. Let's leave it as is for now, and think about changing the prometheus metrics names later, after some metrics-related refactoring. Could you check the descriptions though? They are mixed up.
[jira] [Commented] (NIFI-5402) Reduce artifact size by only building .zip archive
[ https://issues.apache.org/jira/browse/NIFI-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558179#comment-17558179 ] ASF subversion and git services commented on NIFI-5402: --- Commit 62eb565daf8079a0aa5d0eee4cf4540239371598 in nifi's branch refs/heads/main from Nathan Gough [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=62eb565daf ] NIFI-5402 - Added more assembly options for different modules. Assemblies should build a zip by default, or a tar.gz with the -Ptargz profile This closes #5694 Signed-off-by: Paul Grey > Reduce artifact size by only building .zip archive > -- > > Key: NIFI-5402 > URL: https://issues.apache.org/jira/browse/NIFI-5402 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Affects Versions: 1.7.0 >Reporter: Andy LoPresto >Assignee: Nathan Gough >Priority: Major > Labels: archive, build, format, maven, tar, zip > Time Spent: 1h 40m > Remaining Estimate: 0h > > The maven build bundles two identical builds (one in .tar.gz format and one > in .zip format). Based on community discussion, there is no longer a need for > separate .tar.gz, and removing this would increase the speed of the build and > greatly decrease the hosting requirements for the binaries. > That said, building a .tar.gz archive should be available through an > (inactive) profile if a user wants to enable it. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[GitHub] [nifi] greyp9 closed pull request #5694: NIFI-5402 - Disable the tar.gz build artifact by default. Build will …
greyp9 closed pull request #5694: NIFI-5402 - Disable the tar.gz build artifact by default. Build will … URL: https://github.com/apache/nifi/pull/5694
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
szaszm commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r905309695 ## libminifi/include/core/state/nodes/RepositoryMetrics.h: ## @@ -90,15 +87,18 @@ class RepositoryMetrics : public ResponseNode { return serialized; } + std::vector calculateMetrics() override { +std::vector metrics; +for (const auto& [_, repo] : repositories_) { + metrics.push_back({"is_running", (repo->isRunning() ? 1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"is_full", (repo->isFull() ? 1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"repository_size", static_cast(repo->getRepoSize()), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); +} +return metrics; + } Review Comment: Now each metric class needs to have two different member functions (serialize and calculateMetrics) that perform essentially the same task: collect metrics and format them in a specific way. I would prefer to separate the "collect metrics" part from the formatting, and have 1 "collect metrics" function here, and one formatting function in each of C2Client and somewhere in the prometheus extension. I didn't think about the details yet, only have this high level idea. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
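The separation suggested in the review above, a single "collect metrics" function per response node with formatting left to each consumer (the C2 client, the Prometheus extension), could be sketched roughly as follows. All type and function names here (`MetricRecord`, `MetricSource`, `FakeRepositoryMetrics`, `formatAsKeyValue`) are hypothetical illustrations, not MiNiFi C++ interfaces.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Format-neutral record of one collected metric.
struct MetricRecord {
  std::string name;
  double value;
  std::map<std::string, std::string> labels;
};

// Each response node implements only collection; no serialization lives here.
class MetricSource {
 public:
  virtual ~MetricSource() = default;
  virtual std::vector<MetricRecord> collect() const = 0;
};

// Stand-in for a repository metrics node with fixed sample values.
class FakeRepositoryMetrics : public MetricSource {
 public:
  std::vector<MetricRecord> collect() const override {
    return {{"is_running", 1.0, {{"repository_name", "flowfile"}}},
            {"repository_size", 42.0, {{"repository_name", "flowfile"}}}};
  }
};

// One possible consumer-side formatter; Prometheus would have its own.
std::string formatAsKeyValue(const MetricSource& source) {
  std::string out;
  for (const auto& record : source.collect()) {
    out += record.name + "=" + std::to_string(record.value) + ";";
  }
  return out;
}
```

With this shape, adding a new sink means writing one new formatter rather than a new member function on every metric class.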
[GitHub] [nifi] NissimShiman opened a new pull request, #6151: NIFI-10154 ReplaceText AdminYielding on long line
NissimShiman opened a new pull request, #6151: URL: https://github.com/apache/nifi/pull/6151 # Summary [NIFI-10154](https://issues.apache.org/jira/browse/NIFI-10154) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
szaszm commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r905305999 ## libminifi/include/properties/Configuration.h: ## @@ -156,12 +156,17 @@ class Configuration : public Properties { static constexpr const char *nifi_asset_directory = "nifi.asset.directory"; + // Metrics publisher options + static constexpr const char *nifi_metrics_publisher_class = "nifi.metrics.publisher.class"; + static constexpr const char *nifi_metrics_publisher_prometheus_metrics_publisher_port = "nifi.metrics.publisher.PrometheusMetricsPublisher.port"; Review Comment: Fair point. Let's leave it as is, and maybe think about it later, if it starts to become a problem.
[GitHub] [nifi] greyp9 commented on pull request #5694: NIFI-5402 - Disable the tar.gz build artifact by default. Build will …
greyp9 commented on PR #5694: URL: https://github.com/apache/nifi/pull/5694#issuecomment-1164697206 ``` nifi % mvn clean install -DskipTests -Pdir-only nifi % find . -name "*bin.zip" ./nifi-assembly/target/nifi-1.17.0-SNAPSHOT-bin.zip ./minifi/minifi-c2/minifi-c2-assembly/target/minifi-c2-1.17.0-SNAPSHOT-bin.zip ./minifi/minifi-toolkit/minifi-toolkit-assembly/target/minifi-toolkit-1.17.0-SNAPSHOT-bin.zip ./minifi/minifi-assembly/target/minifi-1.17.0-SNAPSHOT-bin.zip ./nifi-registry/nifi-registry-toolkit/nifi-registry-toolkit-assembly/target/nifi-registry-toolkit-1.17.0-SNAPSHOT-bin.zip ./nifi-registry/nifi-registry-extensions/nifi-registry-aws/nifi-registry-aws-assembly/target/nifi-registry-aws-assembly-1.17.0-SNAPSHOT-bin.zip ./nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-assembly/target/nifi-registry-ranger-assembly-1.17.0-SNAPSHOT-bin.zip ./nifi-stateless/nifi-stateless-assembly/target/nifi-stateless-1.17.0-SNAPSHOT-bin.zip nifi % nifi % find . -name "*bin.tar.gz" nifi % nifi % mvn clean install -DskipTests -P targz nifi % find . -name "*bin.zip" nifi % nifi % find . 
-name "*bin.tar.gz" ./nifi-assembly/target/nifi-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.17.0-SNAPSHOT-bin.tar.gz ./minifi/minifi-c2/minifi-c2-assembly/target/minifi-c2-1.17.0-SNAPSHOT-bin.tar.gz ./minifi/minifi-toolkit/minifi-toolkit-assembly/target/minifi-toolkit-1.17.0-SNAPSHOT-bin.tar.gz ./minifi/minifi-assembly/target/minifi-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-registry/nifi-registry-toolkit/nifi-registry-toolkit-assembly/target/nifi-registry-toolkit-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-registry/nifi-registry-extensions/nifi-registry-aws/nifi-registry-aws-assembly/target/nifi-registry-aws-assembly-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-registry/nifi-registry-extensions/nifi-registry-ranger/nifi-registry-ranger-assembly/target/nifi-registry-ranger-assembly-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-registry/nifi-registry-assembly/target/nifi-registry-1.17.0-SNAPSHOT-bin.tar.gz ./nifi-stateless/nifi-stateless-assembly/target/nifi-stateless-1.17.0-SNAPSHOT-bin.tar.gz nifi % ``` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] exceptionfactory opened a new pull request, #6150: NIFI-10161 Add Gzip Content-Encoding to InvokeHTTP and ListenHTTP
exceptionfactory opened a new pull request, #6150: URL: https://github.com/apache/nifi/pull/6150 # Summary [NIFI-10161](https://issues.apache.org/jira/browse/NIFI-10161) Adds support for optional HTTP request content compression using Gzip with the standard `Content-Encoding` header described in [RFC 7231 Section 3.1.2.2](https://datatracker.ietf.org/doc/html/rfc7231#section-3.1.2.2). The implementation adds a new property named `Content-Encoding` to `InvokeHTTP` with a default value of `DISABLED` and an optional value of `GZIP` to enable request compression. The `ListenHTTP` Processor currently supports reading Gzip compressed requests when requests have the non-standard `flowfile-gzipped` header, and this implementation extends that behavior to check for the presence of a `Content-Encoding` header with a value of `gzip`. Both `InvokeHTTP` and `ListenHTTP` changes include new unit tests to validate Gzip compression handling. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [X] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [X] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [X] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [X] Pull Request based on current revision of the `main` branch - [X] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. 
### Build - [X] Build completed using `mvn clean install -P contrib-check` - [X] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Attachment: (was: longLineContainingNulls.txt) > ReplaceText processor AdminYields on flowfile containing long line > -- > > Key: NIFI-10154 > URL: https://issues.apache.org/jira/browse/NIFI-10154 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.15.3 >Reporter: Nissim Shiman >Assignee: Nissim Shiman >Priority: Major > Attachments: apacheLicenseOnOneLine.txt > > > When ReplaceText processor's Evaluation Mode property is set to Line-by-Line > and > the line's size is greater than Maximum Buffer Size > flowfile will not go to failure relationship but will remain in input queue > and give: > {code:java} > Failed to process session due to java.nio.BufferOverflowException; > Processor Administratively Yielded for 1 sec: java.nio.BufferOverflowException > {code} > Using attached file (apache license all on one line, a little over 11KB) > with Maximum Buffer Size set to 5KB will produce this situation. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Attachment: apacheLicenseOnOneLine.txt
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Attachment: (was: apacheLicenseOnOneLine.txt)
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Attachment: apacheLicenseOnOneLine.txt
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Description: When ReplaceText processor's Evaluation Mode property is set to Line-by-Line and the line's size is greater than Maximum Buffer Size flowfile will not go to failure relationship but will remain in input queue and give: {code:java} Failed to process session due to java.nio.BufferOverflowException; Processor Administratively Yielded for 1 sec: java.nio.BufferOverflowException {code} Using attached file (apache license all on one line, a little over 11KB) with Maximum Buffer Size set to 5KB will produce this situation. was: When ReplaceText processor's Evaluation Mode property is set to Line-by-Line and the line's size is greater than Maximum Buffer Size and the line has many null characters flowfile will not go to failure relationship but will remain in input queue and give: {code:java} Failed to process session due to java.nio.BufferOverflowException; Processor Administratively Yielded for 1 sec: java.nio.BufferOverflowException {code} Using attached file with the settings above will reproduce this issue
[jira] [Updated] (NIFI-10154) ReplaceText processor AdminYields on flowfile containing long line
[ https://issues.apache.org/jira/browse/NIFI-10154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman updated NIFI-10154: - Summary: ReplaceText processor AdminYields on flowfile containing long line (was: ReplaceText processor AdminYields on flowfile containing long line of nulls)
[jira] [Created] (NIFI-10162) Improve InvokeHTTP Property Configuration
David Handermann created NIFI-10162: --- Summary: Improve InvokeHTTP Property Configuration Key: NIFI-10162 URL: https://issues.apache.org/jira/browse/NIFI-10162 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: David Handermann Assignee: David Handermann The {{InvokeHTTP}} Processor includes a number of required and optional properties that support a variety of use cases. The introduction of framework support for dependent properties provides the opportunity to streamline the number of properties visible in the default configuration. Among others, properties related to proxy configuration and authentication can have dependencies applied to indicate optional status. Adjusting property ordering to place required properties first would also make the configuration easier to follow. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[jira] [Created] (NIFI-10161) Add Gzip Request Content-Encoding in InvokeHTTP and ListenHTTP
David Handermann created NIFI-10161: --- Summary: Add Gzip Request Content-Encoding in InvokeHTTP and ListenHTTP Key: NIFI-10161 URL: https://issues.apache.org/jira/browse/NIFI-10161 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: David Handermann Assignee: David Handermann [RFC 7231 Section 3.1.2.2|https://datatracker.ietf.org/doc/html/rfc7231#section-3.1.2.2] describes the {{Content-Encoding}} header as a standard method of indicating compression applied to content information. HTTP servers use this header to indicate response compression, and some servers can support receiving HTTP requests compressed using Gzip. The {{ListenHTTP}} Processor supports receiving Gzip-compressed requests using a non-standard header named {{{}flowfile-gzipped{}}}, which the deprecated {{PostHTTP}} Processor applies when enabling the {{Compression Level}} property. The {{ListenHTTP}} Processor should be updated to support handling Gzip request compression using the standard {{Content-Encoding}} header, and the {{InvokeHTTP}} should be updated to support enabling Gzip. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[GitHub] [nifi] exceptionfactory commented on pull request #5654: NIFI-9558: ConnectWebSocket leaks connections and duplicates FlowFile
exceptionfactory commented on PR #5654: URL: https://github.com/apache/nifi/pull/5654#issuecomment-1164569570 Thanks for the note @Lehel44! If the current PR may introduce a new bug, perhaps it would be better to close it for now and open a new PR after further investigation?
[GitHub] [nifi] Lehel44 commented on pull request #5654: NIFI-9558: ConnectWebSocket leaks connections and duplicates FlowFile
Lehel44 commented on PR #5654: URL: https://github.com/apache/nifi/pull/5654#issuecomment-1164541069 Hi @exceptionfactory, This PR fixes the duplication issue, but it might introduce a new bug: when a new flowfile is passed to the processor while data is being sent through an existing websocket connection, it might terminate the connection before all of the data is processed. We tried to reproduce it and have been able to do so only once. The PR has low priority due to the lack of usage of this feature. @turcsanyip The last thing I can do is to debug the connection while sending a new flowfile to the processor, to check thoroughly again whether the connection gets terminated, but as I said I wasn't able to reproduce it lately.
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
szaszm commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r905075513 ## libminifi/src/c2/C2Client.cpp: ## @@ -206,35 +130,13 @@ void C2Client::loadC2ResponseConfiguration(const std::string ) { } std::shared_ptr new_node = std::make_shared(name); if (configuration_->get(classOption, class_definitions)) { -std::vector classes = utils::StringUtils::split(class_definitions, ","); -for (const std::string& clazz : classes) { - // instantiate the object - std::shared_ptr ptr = core::ClassLoader::getDefaultClassLoader().instantiate(clazz, clazz); - if (nullptr == ptr) { -const bool found_metric = [&] { - std::lock_guard guard{metrics_mutex_}; - auto metric = component_metrics_.find(clazz); - if (metric != component_metrics_.end()) { -ptr = metric->second; -return true; - } - return false; -}(); -if (!found_metric) { - logger_->log_error("No metric defined for %s", clazz); - continue; -} - } - auto node = std::dynamic_pointer_cast(ptr); - std::static_pointer_cast(new_node)->add_node(node); -} - +loadNodeClasses(class_definitions, new_node); } else { std::string optionName = option + "." + name; -auto node = loadC2ResponseConfiguration(optionName, new_node); +loadC2ResponseConfiguration(optionName, new_node); } - std::lock_guard guard{metrics_mutex_}; + // We don't need to lock here we do it in the initializeResponseNodes Review Comment: Consider adding a comma. I'm not good at grammar, but I think it helps readability by separating the parts of the sentence. ```suggestion // We don't need to lock here, we do it in the initializeResponseNodes ``` ## libminifi/include/core/state/ConnectionStore.h: ## @@ -0,0 +1,60 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include +#include +#include + +#include "Connection.h" +#include "utils/gsl.h" + +namespace org::apache::nifi::minifi::state { + +class ConnectionStore { + public: + void updateConnection(minifi::Connection* connection) { +if (nullptr != connection) { + connections_[connection->getUUIDStr()] = connection; +} + } + + std::vector calculateConnectionMetrics(const std::string& metric_class) { +std::vector metrics; + +for (const auto& [_, connection] : connections_) { + metrics.push_back({"queue_data_size", static_cast(connection->getQueueDataSize()), +{{"connection_uuid", connection->getUUIDStr()}, {"connection_name", connection->getName()}, {"metric_class", metric_class}}}); + metrics.push_back({"queue_data_size_max", static_cast(connection->getMaxQueueDataSize()), +{{"connection_uuid", connection->getUUIDStr()}, {"connection_name", connection->getName()}, {"metric_class", metric_class}}}); + metrics.push_back({"queue_size", static_cast(connection->getQueueSize()), +{{"connection_uuid", connection->getUUIDStr()}, {"connection_name", connection->getName()}, {"metric_class", metric_class}}}); + metrics.push_back({"queue_size_max", static_cast(connection->getMaxQueueSize()), +{{"connection_uuid", connection->getUUIDStr()}, {"connection_name", connection->getName()}, {"metric_class", metric_class}}}); +} + +return metrics; + } + + virtual ~ConnectionStore() = default; + + 
protected: + std::map connections_; Review Comment: Consider using `std::unordered_map`. Untested pseudocode to help with `std::hash` specialization for `Identifier`: ``` namespace std { template<> struct hash { size_t operator()(const Identifier& id) const noexcept { constexpr int slices = sizeof(Identifier) / sizeof(size_t); const auto combine = [](size_t& seed, size_t new_hash) { // from the boost hash_combine docs seed ^= new_hash + 0x9e3779b9 + (seed << 6) + (seed >> 2); }; const auto get_slice = [](const
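The `std::hash` pseudocode quoted above is cut off in the archive. A completed, self-contained variant of the same slice-and-combine idea might look like the following; note that `Identifier` here is a 16-byte stand-in type, not the real `minifi::utils::Identifier`, and the mixing constant comes from the boost `hash_combine` documentation the comment references.

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstring>
#include <functional>

// Stand-in for a 16-byte UUID-like identifier.
struct Identifier {
  std::array<unsigned char, 16> bytes{};
};

inline bool operator==(const Identifier& a, const Identifier& b) {
  return a.bytes == b.bytes;
}

// Slice the identifier into size_t-sized chunks, hash each chunk, and fold
// the chunk hashes together with the boost::hash_combine mixing step.
inline std::size_t hashIdentifier(const Identifier& id) {
  constexpr std::size_t slices = sizeof(id.bytes) / sizeof(std::size_t);
  std::size_t seed = 0;
  for (std::size_t i = 0; i < slices; ++i) {
    std::size_t slice = 0;
    std::memcpy(&slice, id.bytes.data() + i * sizeof(std::size_t), sizeof(std::size_t));
    // Mixing step from the boost hash_combine docs.
    seed ^= std::hash<std::size_t>{}(slice) + 0x9e3779b9 + (seed << 6) + (seed >> 2);
  }
  return seed;
}

// With this specialization, ConnectionStore could key an std::unordered_map
// on the connection Identifier instead of using std::map.
namespace std {
template <>
struct hash<Identifier> {
  size_t operator()(const Identifier& id) const noexcept {
    return hashIdentifier(id);
  }
};
}  // namespace std
```

The `memcpy` into a local `std::size_t` avoids alignment and strict-aliasing problems that a direct `reinterpret_cast` of the byte array would risk.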
[GitHub] [nifi-minifi-cpp] lordgamez commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
lordgamez commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r905120725 ## libminifi/include/core/state/nodes/RepositoryMetrics.h: ## @@ -90,15 +87,18 @@ class RepositoryMetrics : public ResponseNode { return serialized; } + std::vector calculateMetrics() override { +std::vector metrics; +for (const auto& [_, repo] : repositories_) { + metrics.push_back({"is_running", (repo->isRunning() ? 1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"is_full", (repo->isFull() ? 1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"repository_size", static_cast(repo->getRepoSize()), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); +} +return metrics; + } Review Comment: Could you elaborate a bit more, please? What do you think is currently in too Prometheus-specific a format here that should be changed and transformed later? Also, what do you mean by unifying `serialize`? The serialized nodes are not used by Prometheus, only in the C2 protocol. ## libminifi/include/properties/Configuration.h: ## @@ -156,12 +156,17 @@ class Configuration : public Properties { static constexpr const char *nifi_asset_directory = "nifi.asset.directory"; + // Metrics publisher options + static constexpr const char *nifi_metrics_publisher_class = "nifi.metrics.publisher.class"; + static constexpr const char *nifi_metrics_publisher_prometheus_metrics_publisher_port = "nifi.metrics.publisher.PrometheusMetricsPublisher.port"; Review Comment: The problem is that all possible configuration options sent through C2 are retrieved from here, so if we want to advertise the Prometheus options as well, they should be added here. ## METRICS.md: ## @@ -0,0 +1,155 @@

# Apache NiFi - MiNiFi - C++ Metrics Readme

This readme defines the metrics published by Apache NiFi MiNiFi C++. All options defined are located in minifi.properties.
## Table of Contents

- [Description](#description)
- [Configuration](#configuration)
- [Metrics](#metrics)

## Description

Apache NiFi MiNiFi C++ can communicate metrics about the agent's status, which can be system level or component level metrics. These metrics are exposed through metric publishers implemented in the agent, which can be configured in minifi.properties. Aside from the publisher-exposed metrics, metrics are also sent through the C2 protocol; more information is available in the [C2 documentation](C2.md#metrics).

## Configuration

To configure a metrics publisher, first we have to set which publisher class should be used:

    # in minifi.properties
    nifi.metrics.publisher.class=PrometheusMetricsPublisher

Currently PrometheusMetricsPublisher is the only available publisher in MiNiFi C++; it publishes metrics to a Prometheus server. To use the publisher, a port should also be configured on which the metrics will be available to be scraped:

    # in minifi.properties
    nifi.metrics.publisher.PrometheusMetricsPublisher.port=9936

The last option defines which metric classes should be exposed through the metrics publisher, configured as a comma-separated value:

    # in minifi.properties
    nifi.metrics.publisher.metrics=QueueMetrics,RepositoryMetrics,GetFileMetrics,DeviceInfoNode,FlowInformation

## Metrics

The following section defines the currently available metrics to be published by the MiNiFi C++ agent.

NOTE: In Prometheus all metrics are extended with a `minifi_` prefix to mark the domain of the metric. For example, the `queue_size` metric is published as `minifi_queue_size` in Prometheus.

### QueueMetrics

QueueMetrics is a system level metric that reports queue metrics for every connection in the flow.
+ +| Metric name | Labels | Description| +|--||| +| queue_data_size | metric_class, connection_uuid, connection_name | Max queue size to apply back pressure | +| queue_data_size_max | metric_class, connection_uuid, connection_name | Max queue data size to apply back pressure | +| queue_size | metric_class, connection_uuid, connection_name | Current queue size | +| queue_size_max | metric_class, connection_uuid, connection_name | Current queue data size| Review Comment: What would you suggest changing in the names? The 'minifi' prefix is applied in the Prometheus extension; I intended these names to be used in other metric collectors as well, so I only added a note above that in Prometheus these are extended with the `minifi` prefix. Do
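The `nifi.metrics.publisher.metrics` option quoted above is a single comma-separated value that the agent has to split into individual metric class names before it can look them up. A minimal sketch of that parsing step, written in Java for brevity (the class and method names are illustrative only, not the actual MiNiFi C++ implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative helper: splits a minifi.properties-style comma-separated
// metrics list into trimmed, non-empty class names.
class MetricsConfigParser {
    static List<String> parseMetricClasses(String propertyValue) {
        List<String> result = new ArrayList<>();
        if (propertyValue == null) {
            return result;
        }
        for (String part : propertyValue.split(",")) {
            String trimmed = part.trim();
            if (!trimmed.isEmpty()) {
                result.add(trimmed);
            }
        }
        return result;
    }
}
```

Trimming and dropping empty entries makes the option tolerant of stray whitespace or double commas in the property value.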
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1340: MINIFICPP-1829 Export metrics for use with Prometheus
szaszm commented on code in PR #1340: URL: https://github.com/apache/nifi-minifi-cpp/pull/1340#discussion_r901852584 ## libminifi/include/properties/Configuration.h: ## @@ -156,12 +156,17 @@ class Configuration : public Properties { static constexpr const char *nifi_asset_directory = "nifi.asset.directory"; + // Metrics publisher options + static constexpr const char *nifi_metrics_publisher_class = "nifi.metrics.publisher.class"; + static constexpr const char *nifi_metrics_publisher_prometheus_metrics_publisher_port = "nifi.metrics.publisher.PrometheusMetricsPublisher.port"; Review Comment: Any way to move this config property definition to the prometheus extension? ## METRICS.md: ## @@ -0,0 +1,155 @@ + + +# Apache NiFi - MiNiFi - C++ Metrics Readme. + + +This readme defines the metrics published by Apache NiFi MiNiFi C++. All options defined are located in minifi.properties. + +## Table of Contents + +- [Description](#description) +- [Configuration](#configuration) +- [Metrics](#metrics) + +## Description + +Apache NiFi MiNiFi C++ can communicate metrics about the agent's status; these can be system-level or component-level metrics. +These metrics are exposed through metric publishers implemented in the agent, which can be configured in minifi.properties. +Aside from the publisher-exposed metrics, metrics are also sent through the C2 protocol; more information is available in the +[C2 documentation](C2.md#metrics). + +## Configuration + +To configure a metrics publisher, first we have to set which publisher class should be used: + + # in minifi.properties + + nifi.metrics.publisher.class=PrometheusMetricsPublisher + +Currently PrometheusMetricsPublisher is the only available publisher in MiNiFi C++; it publishes metrics to a Prometheus server.
+To use the publisher, a port should also be configured, on which the metrics will be available to be scraped: + + # in minifi.properties + + nifi.metrics.publisher.PrometheusMetricsPublisher.port=9936 + +The last option defines which metric classes should be exposed through the metrics publisher, configured as a comma-separated value: + + # in minifi.properties + + nifi.metrics.publisher.metrics=QueueMetrics,RepositoryMetrics,GetFileMetrics,DeviceInfoNode,FlowInformation + +## Metrics + +The following section defines the metrics currently available to be published by the MiNiFi C++ agent. + +NOTE: In Prometheus all metrics are extended with a `minifi_` prefix to mark the domain of the metric. For example, the `connection_name` metric is published as `minifi_connection_name` in Prometheus. + +### QueueMetrics + +QueueMetrics is a system-level metric that reports queue metrics for every connection in the flow. + +| Metric name | Labels | Description| +|--||| +| queue_data_size | metric_class, connection_uuid, connection_name | Max queue size to apply back pressure | +| queue_data_size_max | metric_class, connection_uuid, connection_name | Max queue data size to apply back pressure | +| queue_size | metric_class, connection_uuid, connection_name | Current queue size | +| queue_size_max | metric_class, connection_uuid, connection_name | Current queue data size| Review Comment: I've found this guide to metric naming: https://prometheus.io/docs/practices/naming/ By the way, I think the descriptions are not in the correct order here. ## libminifi/include/core/state/nodes/RepositoryMetrics.h: ## @@ -90,15 -87,18 @@ class RepositoryMetrics : public ResponseNode { return serialized; } + std::vector<PublishedMetric> calculateMetrics() override { +std::vector<PublishedMetric> metrics; +for (const auto& [_, repo] : repositories_) { + metrics.push_back({"is_running", (repo->isRunning() ? 
1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"is_full", (repo->isFull() ? 1.0 : 0.0), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); + metrics.push_back({"repository_size", static_cast<double>(repo->getRepoSize()), {{"metric_class", getName()}, {"repository_name", repo->getName()}}}); +} +return metrics; + } Review Comment: I feel like the logic here and in `serialize` would be better if they were unified for all metric nodes, with a common mapping step for the Prometheus-style transformation, which could happen in Prometheus extension code. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please
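The unification discussed above — every metric node emitting a neutral name/value/labels triple, with the Prometheus-specific renaming (the documented `minifi_` prefix) applied only in the Prometheus extension — can be sketched as follows. This is an illustration of the idea in Java, not the actual MiNiFi C++ types; only the `minifi_` prefix rule and the label names come from the discussion:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Neutral metric value as a response node could produce it: name, value, labels.
class PublishedMetric {
    final String name;
    final double value;
    final Map<String, String> labels;

    PublishedMetric(String name, double value, Map<String, String> labels) {
        this.name = name;
        this.value = value;
        this.labels = labels;
    }
}

// Prometheus-specific mapping step, kept out of the core metric nodes:
// it only renames (adding the documented `minifi_` prefix) and renders
// labels in exposition-format style.
class PrometheusMapper {
    static String toPrometheusName(String metricName) {
        return "minifi_" + metricName;
    }

    static String render(PublishedMetric metric) {
        StringBuilder sb = new StringBuilder(toPrometheusName(metric.name));
        sb.append('{');
        boolean first = true;
        for (Map.Entry<String, String> label : metric.labels.entrySet()) {
            if (!first) {
                sb.append(',');
            }
            sb.append(label.getKey()).append("=\"").append(label.getValue()).append('"');
            first = false;
        }
        sb.append("} ").append(metric.value);
        return sb.toString();
    }
}
```

With this split, other collectors can consume the neutral `PublishedMetric` directly, and only the Prometheus extension ever sees the prefixed names.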
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6028: NIFI-9992 Improve configuration of InfluxDB processors, fix IT
exceptionfactory commented on code in PR #6028: URL: https://github.com/apache/nifi/pull/6028#discussion_r905051766 ## nifi-nar-bundles/nifi-influxdb-bundle/nifi-influxdb-processors/src/main/java/org/apache/nifi/processors/influxdb/AbstractInfluxDBProcessor.java: ## @@ -56,8 +56,8 @@ public abstract class AbstractInfluxDBProcessor extends AbstractProcessor { .build(); public static final PropertyDescriptor INFLUX_DB_CONNECTION_TIMEOUT = new PropertyDescriptor.Builder() -.name("InfluxDB Max Connection Time Out (seconds)") -.displayName("InfluxDB Max Connection Time Out (seconds)") +.name("InfluxDB Max Connection Time Out") Review Comment: The property `name` cannot be changed as it would result in breaking existing configurations during an upgrade. Changing the `displayName` is acceptable.
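The upgrade-compatibility rule described here — `name` is the key persisted in existing flow configurations, while `displayName` is only what the UI shows — can be illustrated with a self-contained sketch. These classes are hypothetical stand-ins, not the real NiFi `PropertyDescriptor` or flow-persistence code:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for a property descriptor: configurations are
// persisted against `name`, so changing it breaks lookups on upgrade,
// while `displayName` can change freely.
class SimpleDescriptor {
    final String name;        // stable persistence key
    final String displayName; // UI label, safe to change

    SimpleDescriptor(String name, String displayName) {
        this.name = name;
        this.displayName = displayName;
    }
}

// Stand-in for a persisted flow configuration: values are keyed by the
// descriptor's `name`, never by its `displayName`.
class FlowConfig {
    private final Map<String, String> persisted = new HashMap<>();

    void store(SimpleDescriptor d, String value) {
        persisted.put(d.name, value);
    }

    String lookup(SimpleDescriptor d) {
        return persisted.get(d.name);
    }
}
```

Changing only `displayName` keeps previously stored values reachable; changing `name` makes them unreachable, which is exactly the upgrade break the review comment warns about.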
[GitHub] [nifi] exceptionfactory commented on pull request #5654: NIFI-9558: ConnectWebSocket leaks connections and duplicates FlowFile
exceptionfactory commented on PR #5654: URL: https://github.com/apache/nifi/pull/5654#issuecomment-1164434172 Could you take a look at these changes @turcsanyip?
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #5905: NiFi-9817 Add a Validator for the PutCloudWatchMetric Processor's Unit Field
exceptionfactory commented on code in PR #5905: URL: https://github.com/apache/nifi/pull/5905#discussion_r905038747 ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/cloudwatch/PutCloudWatchMetric.java: ## @@ -70,6 +70,33 @@ public static final Set<Relationship> relationships = Collections.unmodifiableSet( new HashSet<>(Arrays.asList(REL_SUCCESS, REL_FAILURE))); +private static final Set<String> units = Collections.unmodifiableSet( +new HashSet<>(Arrays.asList( +"Seconds", "Microseconds", "Milliseconds", "Bytes", +"Kilobytes", "Megabytes", "Gigabytes", "Terabytes", +"Bits", "Kilobits", "Megabits", "Gigabits", "Terabits", +"Percent", "Count", "Bytes/Second", "Kilobytes/Second", +"Megabytes/Second", "Gigabytes/Second", "Terabytes/Second", +"Bits/Second", "Kilobits/Second", "Megabits/Second", +"Gigabits/Second", "Terabits/Second", "Count/Second", +"None", ""))); Review Comment: Following up on this discussion, it seems better to check for an empty string separately from the list of allowed types. Taking another look at this, do the valid values exist anywhere in the AWS SDK? It seems like this list could become difficult to maintain.
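The separation the review suggests — treat the empty string as "no unit" rather than burying it in the allowed-values set — can be sketched in a self-contained way. The subset of unit names below is for illustration; in the processor the full set could plausibly be derived from the AWS SDK's CloudWatch `StandardUnit` enum (an assumption to verify against the SDK version in use) instead of being hand-maintained:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

class UnitValidator {
    // Hand-maintained subset for illustration; in the real processor this
    // could be built from the SDK's StandardUnit values instead.
    private static final Set<String> ALLOWED = new HashSet<>(Arrays.asList(
            "Seconds", "Bytes", "Percent", "Count", "Bytes/Second", "None"));

    static boolean isValid(String input) {
        // Empty means "no unit": handled separately from the allowed list,
        // as the review comment recommends.
        if (input == null || input.isEmpty()) {
            return true;
        }
        return ALLOWED.contains(input);
    }
}
```

Keeping the empty-string case out of the set makes the intent explicit and keeps the allowed list aligned with whatever the SDK actually enumerates.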
[jira] [Updated] (NIFI-9981) Add Avro UUID support to the record API
[ https://issues.apache.org/jira/browse/NIFI-9981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-9981: --- Fix Version/s: 1.17.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add Avro UUID support to the record API > --- > > Key: NIFI-9981 > URL: https://issues.apache.org/jira/browse/NIFI-9981 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Fix For: 1.17.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > Update the record api and associated readers and writers to support the new > UUID logical type. -- This message was sent by Atlassian Jira (v8.20.7#820007)
[GitHub] [nifi] exceptionfactory closed pull request #6013: NIFI-9981 Added support for Avro UUID types
exceptionfactory closed pull request #6013: NIFI-9981 Added support for Avro UUID types URL: https://github.com/apache/nifi/pull/6013
[jira] [Commented] (NIFI-9981) Add Avro UUID support to the record API
[ https://issues.apache.org/jira/browse/NIFI-9981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558096#comment-17558096 ] ASF subversion and git services commented on NIFI-9981: --- Commit a3e8048b2d9c42d975965d482e33c28885f198e2 in nifi's branch refs/heads/main from Mike Thomsen [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=a3e8048b2d ] NIFI-9981 Added support for Avro UUID types This closes #6013 Signed-off-by: David Handermann > Add Avro UUID support to the record API > --- > > Key: NIFI-9981 > URL: https://issues.apache.org/jira/browse/NIFI-9981 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Time Spent: 1h 40m > Remaining Estimate: 0h > > Update the record api and associated readers and writers to support the new > UUID logical type.
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6149: NIFI-9908 C2 refactor and test coverage
exceptionfactory commented on code in PR #6149: URL: https://github.com/apache/nifi/pull/6149#discussion_r904997394 ## c2/c2-client-bundle/c2-client-http/src/main/java/org/apache/nifi/c2/client/http/C2ServerException.java: ## @@ -0,0 +1,26 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.c2.client.http; + +import java.io.IOException; + Review Comment: It would be helpful to add a class-level comment describing the basic purpose of this exception class. ## c2/c2-client-bundle/c2-client-http/pom.xml: ## @@ -47,5 +47,32 @@ limitations under the License. <dependency> <groupId>com.squareup.okhttp3</groupId> <artifactId>logging-interceptor</artifactId> </dependency> + +<dependency> +<groupId>org.junit.jupiter</groupId> +<artifactId>junit-jupiter-api</artifactId> +<scope>test</scope> +</dependency> +<dependency> +<groupId>org.junit.jupiter</groupId> +<artifactId>junit-jupiter-engine</artifactId> +<scope>test</scope> +</dependency> +<dependency> +<groupId>org.mockito</groupId> +<artifactId>mockito-core</artifactId> +<scope>test</scope> +</dependency> +<dependency> +<groupId>org.mockito</groupId> +<artifactId>mockito-junit-jupiter</artifactId> +<scope>test</scope> +</dependency> +<dependency> +<groupId>com.github.tomakehurst</groupId> +<artifactId>wiremock-jre8</artifactId> +<version>2.33.2</version> +<scope>test</scope> +</dependency> + Review Comment: Instead of introducing a new dependency on WireMock, and the Java 8 dependency, I recommend using OkHttp [MockWebServer](https://github.com/square/okhttp/tree/master/mockwebserver), which is already used for testing `InvokeHTTP` and `HttpNotificationService`, among other components.
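The stubbing pattern behind both WireMock and MockWebServer — start a local HTTP endpoint, serve a canned response, and point the client under test at it — can be sketched with the JDK's built-in `com.sun.net.httpserver.HttpServer`, which needs no extra dependency. The path and response body below are illustrative, not the real C2 endpoints:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class StubServerDemo {
    // Starts a stub server on an ephemeral port, performs one GET against
    // it, and returns the body the stub served.
    static String fetch() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/c2/heartbeat", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/c2/heartbeat");
            try (InputStream in = url.openStream()) {
                return new String(in.readAllBytes(), StandardCharsets.UTF_8);
            }
        } finally {
            server.stop(0);
        }
    }
}
```

MockWebServer and WireMock add conveniences on top of this pattern (request recording, enqueue-style stubbing, failure injection), which is why the review debates which of them to depend on rather than whether to stub at all.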
## c2/c2-client-bundle/c2-client-http/src/main/java/org/apache/nifi/c2/client/http/C2HttpClient.java: ## @@ -91,6 +92,45 @@ public Optional<C2HeartbeatResponse> publishHeartbeat(C2Heartbeat heartbeat) { return serializer.serialize(heartbeat).flatMap(this::sendHeartbeat); } +@Override +public Optional<byte[]> retrieveUpdateContent(String flowUpdateUrl) { Review Comment: Although not required, it is helpful to use the `final` keyword on method arguments. ```suggestion public Optional<byte[]> retrieveUpdateContent(final String flowUpdateUrl) { ``` ## c2/c2-client-bundle/c2-client-http/src/test/java/org/apache/nifi/c2/client/http/C2HttpClientTest.java: ## @@ -0,0 +1,158 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi.c2.client.http; + +import static com.github.tomakehurst.wiremock.client.WireMock.aResponse; +import static com.github.tomakehurst.wiremock.client.WireMock.containing; +import static com.github.tomakehurst.wiremock.client.WireMock.matching; +import static com.github.tomakehurst.wiremock.client.WireMock.postRequestedFor; +import static com.github.tomakehurst.wiremock.client.WireMock.stubFor; +import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo; +import static com.github.tomakehurst.wiremock.client.WireMock.verify; +import static org.apache.nifi.c2.client.http.C2HttpClient.HTTP_STATUS_BAD_REQUEST; +import static org.junit.jupiter.api.Assertions.assertArrayEquals; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertFalse; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.junit.jupiter.api.Assertions.assertTrue; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.BDDMockito.given; + +import com.github.tomakehurst.wiremock.client.WireMock; +import
[jira] [Commented] (NIFI-10120) Refactor CassandraSessionProvider to use the latest Cassandra driver
[ https://issues.apache.org/jira/browse/NIFI-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17558056#comment-17558056 ] Steven Matison commented on NIFI-10120: --- Created initial NIFI-10120 branch of my nifi fork: https://github.com/steven-matison/nifi/commit/77fb768c20c488fdefd3c801bd1a6b701320fd15 > Refactor CassandraSessionProvider to use the latest Cassandra driver > > > Key: NIFI-10120 > URL: https://issues.apache.org/jira/browse/NIFI-10120 > Project: Apache NiFi > Issue Type: Sub-task >Reporter: Mike Thomsen >Assignee: Steven Matison >Priority: Major >
[jira] [Updated] (MINIFICPP-1851) Add a processor to gather cluster-level pod metrics in Kubernetes, from inside the cluster
[ https://issues.apache.org/jira/browse/MINIFICPP-1851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits updated MINIFICPP-1851: -- Fix Version/s: 0.13.0 > Add a processor to gather cluster-level pod metrics in Kubernetes, from > inside the cluster > -- > > Key: MINIFICPP-1851 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1851 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Marton Szasz >Assignee: Ferenc Gerlits >Priority: Major > Fix For: 0.13.0 > > > One may want to run minifi in a kubernetes cluster and use it to gather > various metrics about the pods running there. A good first convenience > processor could be one that gathers data from > /apis/metrics.k8s.io/v1beta1/pods. It would be like a wrapper for InvokeHTTP, > or just an HTTPClient, to get the necessary data with automatic API access > from inside the cluster. > There is no similar processor in NiFi. My first idea for a name is > CollectKubernetesPodMetrics. > Alternatively, add two separate entities: KubernetesPodMetrics, to the > pattern of our internal metrics that are exported over C2, and CollectMetrics > that takes an arbitrary metrics class and collects the metrics on every > trigger. This would be easier to extend in the future with more metrics. > Is it feasible to allow querying different metrics, or to add filtering > capabilities?
[jira] [Updated] (MINIFICPP-1858) Minor tweaks to the clang-tidy CI job
[ https://issues.apache.org/jira/browse/MINIFICPP-1858?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Gerlits updated MINIFICPP-1858: -- Fix Version/s: 0.13.0 > Minor tweaks to the clang-tidy CI job > - > > Key: MINIFICPP-1858 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1858 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Ferenc Gerlits >Priority: Minor > Fix For: 0.13.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > Exclude extensions which are not built in this CI run, and increase the > timeout from 2 to 3 hours, as clang-tidy takes a long time. A better fix > would be to parallelize the clang-tidy checks, but until then, this will > prevent timeouts.
[jira] [Resolved] (MINIFICPP-1858) Minor tweaks to the clang-tidy CI job
[jira] [Assigned] (NIFI-10160) Address c2 libraries present in NiFi root lib directory
[ https://issues.apache.org/jira/browse/NIFI-10160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferenc Erdei reassigned NIFI-10160: --- Assignee: Ferenc Erdei > Address c2 libraries present in NiFi root lib directory > --- > > Key: NIFI-10160 > URL: https://issues.apache.org/jira/browse/NIFI-10160 > Project: Apache NiFi > Issue Type: Improvement > Components: C2, MiNiFi >Reporter: Csaba Bejan >Assignee: Ferenc Erdei >Priority: Minor > > Some c2 libraries are now present in the root lib directory which shouldn't > happen as they should be tied to nifi-framework-nar. Potentially this is just > a leftover coming via the nifi-nar-utils. This should be cleaned up.
[jira] [Created] (NIFI-10160) Address c2 libraries present in NiFi root lib directory
Csaba Bejan created NIFI-10160: -- Summary: Address c2 libraries present in NiFi root lib directory Key: NIFI-10160 URL: https://issues.apache.org/jira/browse/NIFI-10160 Project: Apache NiFi Issue Type: Improvement Components: C2, MiNiFi Reporter: Csaba Bejan Some c2 libraries are now present in the root lib directory which shouldn't happen as they should be tied to nifi-framework-nar. Potentially this is just a leftover coming via the nifi-nar-utils. This should be cleaned up.
[jira] [Created] (NIFI-10159) Move internal interfaces from c2-client-api
Csaba Bejan created NIFI-10159: -- Summary: Move internal interfaces from c2-client-api Key: NIFI-10159 URL: https://issues.apache.org/jira/browse/NIFI-10159 Project: Apache NiFi Issue Type: Improvement Components: C2, MiNiFi Reporter: Csaba Bejan Assignee: Csaba Bejan Move internal interface definitions from c2-client-api to a more specific / internal module.
[GitHub] [nifi] bejancsaba opened a new pull request, #6149: NIFI-9908 C2 refactor and test coverage
bejancsaba opened a new pull request, #6149: URL: https://github.com/apache/nifi/pull/6149 # Summary [NIFI-9908](https://issues.apache.org/jira/browse/NIFI-9908) After the C2 change was merged, a few opportunities for improvement were identified, along with proper test coverage added for the essential classes in the C2 module. If needed, there is room for extending the coverage, but for now this should cover the critical parts. # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [x] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [x] Build completed using `mvn clean install -P contrib-check` - [x] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Commented] (NIFI-10158) ListFTP required field can not use Variable Registry.
[ https://issues.apache.org/jira/browse/NIFI-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17557920#comment-17557920 ] humpfhumpf commented on NIFI-10158: --- The exception means that the value cannot be parsed into a Java Long. Could you verify that the value of your variable does not contain an extra space character? > ListFTP required field can not use Variable Registry. > -- > > Key: NIFI-10158 > URL: https://issues.apache.org/jira/browse/NIFI-10158 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.16.1, 1.16.2, 1.16.3 >Reporter: Hadi >Priority: Minor > Attachments: image-2022-06-23-15-04-46-410.png, > image-2022-06-23-15-05-04-912.png > > > !image-2022-06-23-15-04-46-410.png! > !image-2022-06-23-15-05-04-912.png! > 1.16.X port field can't use variable registry, but 1.15.X can. > > >
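For reference, `Long.parseLong` rejects any surrounding whitespace, so a single stray space in the variable value is enough to reproduce the parse failure mentioned above. A small illustration (the helper class is hypothetical, not NiFi code):

```java
class PortParsing {
    // Long.parseLong accepts only an optional sign followed by digits;
    // whitespace anywhere in the input throws NumberFormatException.
    // Trimming first makes the parse tolerant of stray spaces.
    static long parsePort(String raw) {
        return Long.parseLong(raw.trim());
    }
}
```

Checking the variable value for leading or trailing spaces, as the comment suggests, is the first thing to rule out before treating this as a property-validation bug.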
[jira] [Created] (NIFI-10158) ListFTP required field can not use Variable Registry.
Hadi created NIFI-10158: --- Summary: ListFTP required field can not use Variable Registry. Key: NIFI-10158 URL: https://issues.apache.org/jira/browse/NIFI-10158 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.16.3, 1.16.2, 1.16.1 Reporter: Hadi Attachments: image-2022-06-23-15-04-46-410.png, image-2022-06-23-15-05-04-912.png !image-2022-06-23-15-04-46-410.png! !image-2022-06-23-15-05-04-912.png! 1.16.X port field can't use variable registry, but 1.15.X can.