[GitHub] [nifi] emiliosetiadarma opened a new pull request, #6644: NIFI-10789: set flowfile attributes upon failure/error when fetching …
emiliosetiadarma opened a new pull request, #6644: URL: https://github.com/apache/nifi/pull/6644

…object in Azure Data Lake Storage

# Summary

[NIFI-10789](https://issues.apache.org/jira/browse/NIFI-10789) - Write `azure.datalake.storage.*` flowfile attributes when encountering failures to fetch files from Azure Data Lake Storage

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking
- [x] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking
- [x] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [x] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting
- [x] Pull Request based on current revision of the `main` branch
- [x] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build
- [x] Build completed using `mvn clean install -P contrib-check`
  - [x] JDK 8
  - [x] JDK 11
  - [x] JDK 17

### Licensing
- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation
- [ ] Documentation formatting appears as expected in rendered files

--
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] dependabot[bot] opened a new pull request, #6643: Bump socket.io-parser from 3.3.2 to 3.3.3 in /nifi-registry/nifi-registry-core/nifi-registry-web-ui/src/main
dependabot[bot] opened a new pull request, #6643: URL: https://github.com/apache/nifi/pull/6643

Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 3.3.2 to 3.3.3.

Changelog sourced from [socket.io-parser's changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md):

[3.3.3](https://github.com/Automattic/socket.io-parser/compare/3.3.2...3.3.3) (2022-11-09)

Bug Fixes
- check the format of the index of each attachment ([fb21e42](https://github.com/Automattic/socket.io-parser/commit/fb21e422fc193b34347395a33e0f625bebc09983))

[3.4.2](https://github.com/socketio/socket.io-parser/compare/3.4.1...3.4.2) (2022-11-09)

Bug Fixes
- check the format of the index of each attachment ([04d23ce](https://github.com/socketio/socket.io-parser/commit/04d23cecafe1b859fb03e0cbf6ba3b74dff56d14))

[4.2.1](https://github.com/socketio/socket.io-parser/compare/4.2.0...4.2.1) (2022-06-27)

Bug Fixes
- check the format of the index of each attachment ([b5d0cb7](https://github.com/socketio/socket.io-parser/commit/b5d0cb7dc56a0601a09b056beaeeb0e43b160050))

[4.0.5](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5) (2022-06-27)

Bug Fixes
- check the format of the index of each attachment ([b559f05](https://github.com/socketio/socket.io-parser/commit/b559f050ee02bd90bd853b9823f8de7fa94a80d4))

[4.2.0](https://github.com/socketio/socket.io-parser/compare/4.1.2...4.2.0) (2022-04-17)

Features
- allow the usage of custom replacer and reviver ([#112](https://github-redirect.dependabot.com/socketio/socket.io-parser/issues/112)) ([b08bc1a](https://github.com/socketio/socket.io-parser/commit/b08bc1a93e8e3194b776c8a0bdedee1e29333680))

[4.1.2](https://github.com/socketio/socket.io-parser/compare/4.1.1...4.1.2) (2022-02-17)

Bug Fixes ...

(truncated)

Commits
- [cd11e38](https://github.com/socketio/socket.io-parser/commit/cd11e38e1a3e2146617bc586f86512605607b212) chore(release): 3.3.3
- [fb21e42](https://github.com/socketio/socket.io-parser/commit/fb21e422fc193b34347395a33e0f625bebc09983) fix: check the format of the index of each attachment

See full diff in [compare view](https://github.com/socketio/socket.io-parser/compare/3.3.2...3.3.3).

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=socket.io-parser&package-manager=npm_and_yarn&previous-version=3.3.2&new-version=3.3.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/apache/nifi/network/alerts).
[jira] [Updated] (MINIFICPP-1978) MergeContent should flush bins even when they don't exactly reach the max size.
[ https://issues.apache.org/jira/browse/MINIFICPP-1978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marton Szasz updated MINIFICPP-1978:
Status: Patch Available (was: Open)

https://github.com/apache/nifi-minifi-cpp/pull/1449

> MergeContent should flush bins even when they don't exactly reach the max size.
>
> Key: MINIFICPP-1978
> URL: https://issues.apache.org/jira/browse/MINIFICPP-1978
> Project: Apache NiFi MiNiFi C++
> Issue Type: Bug
> Reporter: Marton Szasz
> Assignee: Adam Debreceni
> Priority: Major
> Fix For: 0.13.0
> Time Spent: 50m
> Remaining Estimate: 0h
>
> It looks like MergeContent will fill up bins until they reach the max group size, and reject further flow files when that would push the bin over the max size. If the last flow file doesn't exactly reach the max group size, the bin is not flushed, but it also cannot accept new flow files. We should come up with a way to flush "almost full" bins in a timely manner.

--
This message was sent by Atlassian Jira (v8.20.10#820010)
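The "almost full" situation described in the issue can be sketched as a small bin model: a bin that rejects an entry which would push it over the maximum is, by definition, stuck, so it can be flagged flushable; an age-based check covers bins that simply stop receiving input. This is an illustrative sketch only; the names (`Bin`, `offer`, `shouldFlush`) are hypothetical and not the MiNiFi C++ MergeContent internals.

```java
import java.util.ArrayList;
import java.util.List;

public class Bin {
    private final long maxSize;
    private final long maxAgeMillis;
    private final long createdAtMillis;
    private final List<Long> entrySizes = new ArrayList<>();
    private long currentSize = 0;
    private boolean sawOversizeOffer = false; // a rejected entry means the bin can never fill exactly

    public Bin(long maxSize, long maxAgeMillis, long nowMillis) {
        this.maxSize = maxSize;
        this.maxAgeMillis = maxAgeMillis;
        this.createdAtMillis = nowMillis;
    }

    /** Accepts the entry unless it would push the bin over maxSize. */
    public boolean offer(long entrySize) {
        if (currentSize + entrySize > maxSize) {
            sawOversizeOffer = true; // bin is effectively full: mark it flushable
            return false;
        }
        entrySizes.add(entrySize);
        currentSize += entrySize;
        return true;
    }

    /** Flush when exactly full, when stuck below the max, or when too old. */
    public boolean shouldFlush(long nowMillis) {
        return currentSize == maxSize
                || sawOversizeOffer
                || (nowMillis - createdAtMillis) >= maxAgeMillis;
    }
}
```

The key design point matching the issue: rejection of an entry is itself a flush trigger, so a bin at 60% of max that can no longer grow is emitted instead of lingering forever.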
[GitHub] [nifi] exceptionfactory commented on pull request #6506: NIFI-10243: allow ControlRate to throttle on combination of data rate or flowfile rate
exceptionfactory commented on PR #6506: URL: https://github.com/apache/nifi/pull/6506#issuecomment-1309629082

Thanks for making the updates and noting the detail about the minimum validation setting @markobean, the changes look good. The new test method pushes the total test time to over 12 seconds for standard execution, which will place it among the slower unit tests, contributing to an overall slow build process. I have an idea for updating the unit test: extend ControlRate in the test class and implement an alternative current time method. With that background, I'm not opposed to moving forward with this PR, and I can evaluate an improvement in a subsequent Jira issue. I will defer to @thenatog for final review at this point, thanks!
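The "alternative current time method" idea mentioned above, i.e. reading time through an overridable hook so a test subclass can advance a fake clock instead of sleeping, can be sketched as follows. The `RateLimiter` class and its method names are hypothetical illustrations, not the actual ControlRate API.

```java
public class RateLimiter {
    private final long windowMillis;
    private final long maxPerWindow;
    private long windowStart = Long.MIN_VALUE; // unset until the first acquire
    private long countInWindow;

    public RateLimiter(long windowMillis, long maxPerWindow) {
        this.windowMillis = windowMillis;
        this.maxPerWindow = maxPerWindow;
    }

    /** Override in tests to supply a deterministic clock. */
    protected long currentTimeMillis() {
        return System.currentTimeMillis();
    }

    public boolean tryAcquire() {
        final long now = currentTimeMillis();
        if (windowStart == Long.MIN_VALUE || now - windowStart >= windowMillis) {
            windowStart = now; // roll over to a fresh window
            countInWindow = 0;
        }
        if (countInWindow < maxPerWindow) {
            countInWindow++;
            return true;
        }
        return false;
    }
}
```

A test then jumps the fake clock forward instead of calling `Thread.sleep`, removing wall-clock time from the test entirely.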
[GitHub] [nifi] MikeThomsen opened a new pull request, #6642: NIFI-10562 Moved MongoDB to using testcontainers for integration test…
MikeThomsen opened a new pull request, #6642: URL: https://github.com/apache/nifi/pull/6642

… support.

# Summary

[NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking
- [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking
- [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [ ] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting
- [ ] Pull Request based on current revision of the `main` branch
- [ ] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build
- [ ] Build completed using `mvn clean install -P contrib-check`
  - [ ] JDK 8
  - [ ] JDK 11
  - [ ] JDK 17

### Licensing
- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation
- [ ] Documentation formatting appears as expected in rendered files
[GitHub] [nifi] markobean commented on pull request #6506: NIFI-10243: allow ControlRate to throttle on combination of data rate or flowfile rate
markobean commented on PR #6506: URL: https://github.com/apache/nifi/pull/6506#issuecomment-1309455563

@exceptionfactory The time period cannot be reduced to 500 ms. The validator requires a minimum value of 1 sec. This is because the exact rate becomes less accurate as the time period shrinks, especially in the sub-second range (and worse still on a busy system). In order to change the unit tests to a smaller value, the validator would have to change, and I do not think that is an advisable approach: allowing configurations that could mislead users about accuracy for the sake of shortening unit tests. In order to test properly, the sleep time is a necessary evil.
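The accuracy argument above can be made concrete with a back-of-the-envelope bound: if the window boundary can be off by some jitter (timer granularity, a busy scheduler), the worst-case relative error of a rate measured over a window is roughly jitter divided by window length, so halving the window doubles the error. The helper below is an illustrative assumption, not part of ControlRate.

```java
public class RateWindowError {
    /**
     * Worst-case relative error of a rate measured over windowMillis when the
     * window boundary can be off by jitterMillis (simple jitter/window bound).
     */
    public static double worstCaseRelativeError(long windowMillis, long jitterMillis) {
        return (double) jitterMillis / (double) windowMillis;
    }
}
```

With 10 ms of jitter, a 1-second window keeps the bound at 1%, while a 100 ms window inflates it to 10%, which is the intuition behind the validator's 1-second minimum.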
[GitHub] [nifi] markobean commented on a diff in pull request #6506: NIFI-10243: allow ControlRate to throttle on combination of data rate or flowfile rate
markobean commented on code in PR #6506: URL: https://github.com/apache/nifi/pull/6506#discussion_r1018464607

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java:

```diff
@@ -408,34 +498,59 @@ public FlowFileFilterResult filter(FlowFile flowFile) {
             groupName = DEFAULT_GROUP_ATTRIBUTE;
         }
-        Throttle throttle = throttleMap.get(groupName);
-        if (throttle == null) {
-            throttle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+        Throttle dataThrottle = dataThrottleMap.get(groupName);
+        Throttle countThrottle = countThrottleMap.get(groupName);
-            final long newRate;
-            if (DataUnit.DATA_SIZE_PATTERN.matcher(maximumRateStr).matches()) {
-                newRate = DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue();
-            } else {
-                newRate = Long.parseLong(maximumRateStr);
+        boolean dataThrottlingActive = false;
+        if (dataThrottleRequired()) {
+            if (dataThrottle == null) {
+                dataThrottle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+                dataThrottle.setMaxRate(DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue());
+                dataThrottleMap.put(groupName, dataThrottle);
             }
-            throttle.setMaxRate(newRate);
-            throttleMap.put(groupName, throttle);
+            dataThrottle.lock();
+            try {
+                if (dataThrottle.tryAdd(getDataSizeAccrual(flowFile))) {
+                    flowFilesInBatch += 1;
+                    if (flowFilesInBatch >= flowFilesPerBatch) {
+                        flowFilesInBatch = 0;
+                        return FlowFileFilterResult.ACCEPT_AND_TERMINATE;
+                    } else {
+                        // only accept flowfile if additional count throttle does not need to run
+                        if (!countThrottleRequired()) {
+                            return FlowFileFilterResult.ACCEPT_AND_CONTINUE;
+                        }
+                    }
+                } else {
+                    dataThrottlingActive = true;
+                }
+            } finally {
+                dataThrottle.unlock();
+            }
+        }
-        throttle.lock();
-        try {
-            if (throttle.tryAdd(accrual)) {
-                flowFilesInBatch += 1;
-                if (flowFilesInBatch >= flowFilesPerBatch) {
-                    flowFilesInBatch = 0;
-                    return FlowFileFilterResult.ACCEPT_AND_TERMINATE;
-                } else {
-                    return FlowFileFilterResult.ACCEPT_AND_CONTINUE;
+        // continue processing count throttle only if required and if data throttle is not already limiting flowfiles
+        if (countThrottleRequired() && !dataThrottlingActive) {
+            if (countThrottle == null) {
+                countThrottle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+                countThrottle.setMaxRate(Long.parseLong(maximumCountRateStr));
+                countThrottleMap.put(groupName, countThrottle);
+            }
+            countThrottle.lock();
+            try {
+                if (countThrottle.tryAdd(getCountAccrual(flowFile))) {
+                    flowFilesInBatch += 1;
```

Review Comment:
   Done.
[GitHub] [nifi] markobean commented on a diff in pull request #6506: NIFI-10243: allow ControlRate to throttle on combination of data rate or flowfile rate
markobean commented on code in PR #6506: URL: https://github.com/apache/nifi/pull/6506#discussion_r1018464360

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java:

```diff
@@ -268,48 +336,67 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
         final ComponentLog logger = getLogger();
         for (FlowFile flowFile : flowFiles) {
             // call this to capture potential error
-            final long accrualAmount = getFlowFileAccrual(flowFile);
-            if (accrualAmount < 0) {
-                logger.error("Routing {} to 'failure' due to missing or invalid attribute", new Object[]{flowFile});
+            if (!isAccrualPossible(flowFile)) {
+                logger.error("Routing {} to 'failure' due to missing or invalid attribute", flowFile);
                 session.transfer(flowFile, REL_FAILURE);
             } else {
-                logger.info("transferring {} to 'success'", new Object[]{flowFile});
+                logger.info("transferring {} to 'success'", flowFile);
                 session.transfer(flowFile, REL_SUCCESS);
             }
         }
     }

+    /*
+     * Determine if the accrual amount is valid for the type of throttle being applied. For example, if throttling based on
+     * flowfile attribute, the specified attribute must be present and must be a long integer.
+     */
+    private boolean isAccrualPossible(FlowFile flowFile) {
+        if (rateControlCriteria.equals(ATTRIBUTE_RATE)) {
+            final String attributeValue = flowFile.getAttribute(rateControlAttribute);
+            return attributeValue != null && POSITIVE_LONG_PATTERN.matcher(attributeValue).matches();
+        }
+        return true;
+    }

     /*
      * Determine the amount this FlowFile will incur against the maximum allowed rate.
-     * If the value returned is negative then the flowfile given is missing the required attribute
-     * or the attribute has an invalid value for accrual.
+     * This is applicable to data size accrual only
      */
-    private long getFlowFileAccrual(FlowFile flowFile) {
-        long rateValue;
-        switch (rateControlCriteria) {
-            case DATA_RATE:
-                rateValue = flowFile.getSize();
-                break;
-            case FLOWFILE_RATE:
-                rateValue = 1;
-                break;
-            case ATTRIBUTE_RATE:
-                final String attributeValue = flowFile.getAttribute(rateControlAttribute);
-                if (attributeValue == null) {
-                    return -1L;
-                }
+    private long getDataSizeAccrual(FlowFile flowFile) {
+        return flowFile.getSize();
+    }

-                if (!POSITIVE_LONG_PATTERN.matcher(attributeValue).matches()) {
-                    return -1L;
-                }
-                rateValue = Long.parseLong(attributeValue);
-                break;
-            default:
-                throw new AssertionError(" property set to illegal value of " + rateControlCriteria);
+    /*
+     * Determine the amount this FlowFile will incur against the maximum allowed rate.
+     * This is applicable to counting accruals, flowfiles or attributes
+     */
+    private long getCountAccrual(FlowFile flowFile) {
+        long rateValue = -1L;
```

Review Comment:
   Done.

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java:

```diff
@@ -408,34 +498,59 @@ public FlowFileFilterResult filter(FlowFile flowFile) {
             groupName = DEFAULT_GROUP_ATTRIBUTE;
         }
-        Throttle throttle = throttleMap.get(groupName);
-        if (throttle == null) {
-            throttle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+        Throttle dataThrottle = dataThrottleMap.get(groupName);
+        Throttle countThrottle = countThrottleMap.get(groupName);
-            final long newRate;
-            if (DataUnit.DATA_SIZE_PATTERN.matcher(maximumRateStr).matches()) {
-                newRate = DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue();
-            } else {
-                newRate = Long.parseLong(maximumRateStr);
+        boolean dataThrottlingActive = false;
+        if (dataThrottleRequired()) {
+            if (dataThrottle == null) {
+                dataThrottle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+                dataThrottle.setMaxRate(DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue());
+                dataThrottleMap.put(groupName, dataThrottle);
             }
-            throttle.setMaxRate(newRate);
-            throttleMap.put(groupName, throttle);
+            dataThrottle.lock();
+            try {
+                if (
```

(truncated)
[GitHub] [nifi-fds] dependabot[bot] opened a new pull request, #69: Bump socket.io-parser from 4.0.4 to 4.0.5
dependabot[bot] opened a new pull request, #69: URL: https://github.com/apache/nifi-fds/pull/69

Bumps [socket.io-parser](https://github.com/socketio/socket.io-parser) from 4.0.4 to 4.0.5.

Release notes sourced from [socket.io-parser's releases](https://github.com/socketio/socket.io-parser/releases):

4.0.5

Bug Fixes
- check the format of the index of each attachment ([b559f05](https://github.com/socketio/socket.io-parser/commit/b559f050ee02bd90bd853b9823f8de7fa94a80d4))

Links
- Diff: [https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5)

Changelog sourced from [socket.io-parser's changelog](https://github.com/socketio/socket.io-parser/blob/main/CHANGELOG.md):

[4.0.5](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5) (2022-06-27)

Bug Fixes
- check the format of the index of each attachment ([b559f05](https://github.com/socketio/socket.io-parser/commit/b559f050ee02bd90bd853b9823f8de7fa94a80d4))

[4.2.0](https://github.com/socketio/socket.io-parser/compare/4.1.2...4.2.0) (2022-04-17)

Features
- allow the usage of custom replacer and reviver ([#112](https://github-redirect.dependabot.com/socketio/socket.io-parser/issues/112)) ([b08bc1a](https://github.com/socketio/socket.io-parser/commit/b08bc1a93e8e3194b776c8a0bdedee1e29333680))

[4.1.2](https://github.com/socketio/socket.io-parser/compare/4.1.1...4.1.2) (2022-02-17)

Bug Fixes
- allow objects with a null prototype in binary packets ([#114](https://github-redirect.dependabot.com/socketio/socket.io-parser/issues/114)) ([7f6b262](https://github.com/socketio/socket.io-parser/commit/7f6b262ac83bdf43c53a7eb02417e56e0cf491c8))

[4.1.1](https://github.com/socketio/socket.io-parser/compare/4.1.0...4.1.1) (2021-10-14)

[4.1.0](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.1.0) (2021-10-11)

Features
- provide an ESM build with and without debug ([388c616](https://github.com/socketio/socket.io-parser/commit/388c616a9221e4341945f8487e729e93a81d2da5))

Commits
- [f3329eb](https://github.com/socketio/socket.io-parser/commit/f3329eb5a46b215a3fdf91b6008c56cf177a4124) chore(release): 4.0.5
- [b559f05](https://github.com/socketio/socket.io-parser/commit/b559f050ee02bd90bd853b9823f8de7fa94a80d4) fix: check the format of the index of each attachment

See full diff in [compare view](https://github.com/socketio/socket.io-parser/compare/4.0.4...4.0.5).

[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=socket.io-parser&package-manager=npm_and_yarn&previous-version=4.0.4&new-version=4.0.5)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the Security Alerts page.
[GitHub] [nifi] markobean commented on pull request #6638: NIFI-10703 - Updated VersionedDataflow to support MaxEventDrivenThrea…
markobean commented on PR #6638: URL: https://github.com/apache/nifi/pull/6638#issuecomment-1309432249

I found the issue. I don't think the max event driven thread count was being read from the flow on startup. See https://github.com/apache/nifi/blob/0643f336e8266043c4ec01e1c07b8ef5bb38b02a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/serialization/VersionedFlowSynchronizer.java#L372

The following needs to be added:
```
controller.setMaxEventDrivenThreadCount(versionedFlow.getMaxEventDrivenThreadCount());
```
[jira] [Commented] (NIFI-10130) AzureGraphUserGroupProvider fails with nested groups
[ https://issues.apache.org/jira/browse/NIFI-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631309#comment-17631309 ]

ASF subversion and git services commented on NIFI-10130:

Commit 0643f336e8266043c4ec01e1c07b8ef5bb38b02a in nifi's branch refs/heads/main from Seokwon Yang
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0643f336e8 ]

NIFI-10130 AzureGraphUserGroupProvider handles group with transitive members

This closes #6135

Signed-off-by: David Handermann

> AzureGraphUserGroupProvider fails with nested groups
>
> Key: NIFI-10130
> URL: https://issues.apache.org/jira/browse/NIFI-10130
> Project: Apache NiFi
> Issue Type: Bug
> Components: Security
> Affects Versions: 1.16.3
> Environment: Azure AD
> Reporter: Daniel Scheiner
> Assignee: Seokwon Yang
> Priority: Major
> Original Estimate: 48h
> Time Spent: 0.5h
> Remaining Estimate: 47.5h
>
> Using the AzureGraphUserGroupProvider fails if one of the groups in the AzureAD has another group as "members".
> Error is:
> Caused by: java.lang.NullPointerException: null
> at org.apache.nifi.authorization.azure.AzureGraphUserGroupProvider.getUsersFrom(AzureGraphUserGroupProvider.java:383)
> The function "getUsersFrom" needs to check if a "user" is actually another group and get its users from there...
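The transitive-member expansion described in this issue can be sketched as a recursion: when enumerating a group's members, an entry may itself be a group, so descend into it instead of assuming every member is a user. `GroupExpander` and `Member` are hypothetical stand-ins, not the Microsoft Graph SDK types used by AzureGraphUserGroupProvider.

```java
import java.util.ArrayList;
import java.util.List;

public class GroupExpander {
    public static class Member {
        final String id;
        final boolean isGroup;
        final List<Member> children; // populated only for groups

        public Member(String id) {                       // a user
            this.id = id;
            this.isGroup = false;
            this.children = List.of();
        }

        public Member(String id, List<Member> children) { // a nested group
            this.id = id;
            this.isGroup = true;
            this.children = children;
        }
    }

    /** Collects user ids, descending into nested groups instead of failing on them. */
    public static List<String> collectUsers(Member group) {
        final List<String> users = new ArrayList<>();
        for (Member member : group.children) {
            if (member.isGroup) {
                users.addAll(collectUsers(member)); // transitive members
            } else {
                users.add(member.id);
            }
        }
        return users;
    }
}
```

The NullPointerException in the report is exactly what happens when the "treat every member as a user" assumption meets a group entry; the `isGroup` branch is the fix in miniature.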
[jira] [Resolved] (NIFI-10130) AzureGraphUserGroupProvider fails with nested groups
[ https://issues.apache.org/jira/browse/NIFI-10130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann resolved NIFI-10130.
Fix Version/s: 1.19.0
Resolution: Fixed

> AzureGraphUserGroupProvider fails with nested groups
>
> Key: NIFI-10130
> URL: https://issues.apache.org/jira/browse/NIFI-10130
> Project: Apache NiFi
> Issue Type: Bug
> Components: Security
> Affects Versions: 1.16.3
> Environment: Azure AD
> Reporter: Daniel Scheiner
> Assignee: Seokwon Yang
> Priority: Major
> Fix For: 1.19.0
> Original Estimate: 48h
> Time Spent: 0.5h
> Remaining Estimate: 47.5h
>
> Using the AzureGraphUserGroupProvider fails if one of the groups in the AzureAD has another group as "members".
> Error is:
> Caused by: java.lang.NullPointerException: null
> at org.apache.nifi.authorization.azure.AzureGraphUserGroupProvider.getUsersFrom(AzureGraphUserGroupProvider.java:383)
> The function "getUsersFrom" needs to check if a "user" is actually another group and get its users from there...
[GitHub] [nifi] exceptionfactory closed pull request #6135: NIFI-10130 AzureGraphUserGroupProvider handles nested group
exceptionfactory closed pull request #6135: NIFI-10130 AzureGraphUserGroupProvider handles nested group
URL: https://github.com/apache/nifi/pull/6135
[jira] [Updated] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
[ https://issues.apache.org/jira/browse/NIFI-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-10790:
Priority: Minor (was: Major)

> Update Snowflake JDBC driver to 3.13.24
>
> Key: NIFI-10790
> URL: https://issues.apache.org/jira/browse/NIFI-10790
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Pierre Villard
> Assignee: Pierre Villard
> Priority: Minor
> Fix For: 1.19.0
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Update Snowflake JDBC driver to 3.13.24
[jira] [Updated] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
[ https://issues.apache.org/jira/browse/NIFI-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-10790:
Fix Version/s: 1.19.0
Resolution: Fixed
Status: Resolved (was: Patch Available)

> Update Snowflake JDBC driver to 3.13.24
>
> Key: NIFI-10790
> URL: https://issues.apache.org/jira/browse/NIFI-10790
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Pierre Villard
> Assignee: Pierre Villard
> Priority: Major
> Fix For: 1.19.0
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Update Snowflake JDBC driver to 3.13.24
[jira] [Updated] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
[ https://issues.apache.org/jira/browse/NIFI-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann updated NIFI-10790:
Labels: dependency-upgrade (was: )

> Update Snowflake JDBC driver to 3.13.24
>
> Key: NIFI-10790
> URL: https://issues.apache.org/jira/browse/NIFI-10790
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Pierre Villard
> Assignee: Pierre Villard
> Priority: Minor
> Labels: dependency-upgrade
> Fix For: 1.19.0
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Update Snowflake JDBC driver to 3.13.24
[jira] [Commented] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
[ https://issues.apache.org/jira/browse/NIFI-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631301#comment-17631301 ]

ASF subversion and git services commented on NIFI-10790:

Commit 425dd6a848fd81b8585751c18afddacfb63755fb in nifi's branch refs/heads/main from Pierre Villard
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=425dd6a848 ]

NIFI-10790 Updated Snowflake JDBC driver from 3.13.21 to 3.13.24

This closes #6641

Signed-off-by: David Handermann

> Update Snowflake JDBC driver to 3.13.24
>
> Key: NIFI-10790
> URL: https://issues.apache.org/jira/browse/NIFI-10790
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Extensions
> Reporter: Pierre Villard
> Assignee: Pierre Villard
> Priority: Major
> Time Spent: 20m
> Remaining Estimate: 0h
>
> Update Snowflake JDBC driver to 3.13.24
[GitHub] [nifi] exceptionfactory closed pull request #6641: NIFI-10790 - Update Snowflake JDBC driver to 3.13.24
exceptionfactory closed pull request #6641: NIFI-10790 - Update Snowflake JDBC driver to 3.13.24
URL: https://github.com/apache/nifi/pull/6641
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6506: NIFI-10243: allow ControlRate to throttle on combination of data rate or flowfile rate
exceptionfactory commented on code in PR #6506: URL: https://github.com/apache/nifi/pull/6506#discussion_r1018429196

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java: ##

```diff
@@ -268,48 +336,67 @@ public void onTrigger(final ProcessContext context, final ProcessSession session
         final ComponentLog logger = getLogger();
         for (FlowFile flowFile : flowFiles) {
             // call this to capture potential error
-            final long accrualAmount = getFlowFileAccrual(flowFile);
-            if (accrualAmount < 0) {
-                logger.error("Routing {} to 'failure' due to missing or invalid attribute", new Object[]{flowFile});
+            if (!isAccrualPossible(flowFile)) {
+                logger.error("Routing {} to 'failure' due to missing or invalid attribute", flowFile);
                 session.transfer(flowFile, REL_FAILURE);
             } else {
-                logger.info("transferring {} to 'success'", new Object[]{flowFile});
+                logger.info("transferring {} to 'success'", flowFile);
                 session.transfer(flowFile, REL_SUCCESS);
             }
         }
     }

+    /*
+     * Determine if the accrual amount is valid for the type of throttle being applied. For example, if throttling based on
+     * flowfile attribute, the specified attribute must be present and must be a long integer.
+     */
+    private boolean isAccrualPossible(FlowFile flowFile) {
+        if (rateControlCriteria.equals(ATTRIBUTE_RATE)) {
+            final String attributeValue = flowFile.getAttribute(rateControlAttribute);
+            return attributeValue != null && POSITIVE_LONG_PATTERN.matcher(attributeValue).matches();
+        }
+        return true;
+    }
+
     /*
      * Determine the amount this FlowFile will incur against the maximum allowed rate.
-     * If the value returned is negative then the flowfile given is missing the required attribute
-     * or the attribute has an invalid value for accrual.
+     * This is applicable to data size accrual only
      */
-    private long getFlowFileAccrual(FlowFile flowFile) {
-        long rateValue;
-        switch (rateControlCriteria) {
-            case DATA_RATE:
-                rateValue = flowFile.getSize();
-                break;
-            case FLOWFILE_RATE:
-                rateValue = 1;
-                break;
-            case ATTRIBUTE_RATE:
-                final String attributeValue = flowFile.getAttribute(rateControlAttribute);
-                if (attributeValue == null) {
-                    return -1L;
-                }
+    private long getDataSizeAccrual(FlowFile flowFile) {
+        return flowFile.getSize();
+    }

-                if (!POSITIVE_LONG_PATTERN.matcher(attributeValue).matches()) {
-                    return -1L;
-                }
-                rateValue = Long.parseLong(attributeValue);
-                break;
-            default:
-                throw new AssertionError(" property set to illegal value of " + rateControlCriteria);
+    /*
+     * Determine the amount this FlowFile will incur against the maximum allowed rate.
+     * This is applicable to counting accruals, flowfiles or attributes
+     */
+    private long getCountAccrual(FlowFile flowFile) {
+        long rateValue = -1L;
```

Review Comment: It would be helpful to define a `private static final` value for the default value of `-1` and reuse that in multiple places.

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ControlRate.java: ##

```diff
@@ -408,34 +498,59 @@ public FlowFileFilterResult filter(FlowFile flowFile) {
             groupName = DEFAULT_GROUP_ATTRIBUTE;
         }

-        Throttle throttle = throttleMap.get(groupName);
-        if (throttle == null) {
-            throttle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+        Throttle dataThrottle = dataThrottleMap.get(groupName);
+        Throttle countThrottle = countThrottleMap.get(groupName);

-            final long newRate;
-            if (DataUnit.DATA_SIZE_PATTERN.matcher(maximumRateStr).matches()) {
-                newRate = DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue();
-            } else {
-                newRate = Long.parseLong(maximumRateStr);
+        boolean dataThrottlingActive = false;
+        if (dataThrottleRequired()) {
+            if (dataThrottle == null) {
+                dataThrottle = new Throttle(timePeriodSeconds, TimeUnit.SECONDS, getLogger());
+                dataThrottle.setMaxRate(DataUnit.parseDataSize(maximumRateStr, DataUnit.B).longValue());
+                dataThrottleMap.put(groupName, dataThrottle);
             }
-            throttle.setMaxRate(newRate);
```
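The reviewer's suggestion in this thread is to replace the bare `-1L` sentinel with a named `private static final` constant. A minimal standalone sketch of how the accrual helpers could look with that change (the `AccrualSketch` class, its parameters, and the `UNDEFINED_ACCRUAL` name are illustrative assumptions, not the actual ControlRate implementation, which depends on processor state omitted here):

```java
import java.util.Map;
import java.util.regex.Pattern;

// Hypothetical sketch of the accrual helpers discussed in the review.
// Attributes are passed as a Map to stand in for FlowFile attributes.
class AccrualSketch {
    private static final Pattern POSITIVE_LONG_PATTERN = Pattern.compile("\\d+");
    // Named sentinel, per the review comment, instead of a bare -1L.
    private static final long UNDEFINED_ACCRUAL = -1L;

    static boolean isAccrualPossible(Map<String, String> attributes, String rateControlAttribute, boolean attributeRate) {
        if (attributeRate) {
            // Attribute-based throttling requires a present, positive-long attribute value.
            final String value = attributes.get(rateControlAttribute);
            return value != null && POSITIVE_LONG_PATTERN.matcher(value).matches();
        }
        return true;
    }

    static long getCountAccrual(Map<String, String> attributes, String rateControlAttribute, boolean attributeRate) {
        if (!attributeRate) {
            return 1L; // one flowfile counts as one unit when counting flowfiles
        }
        final String value = attributes.get(rateControlAttribute);
        if (value == null || !POSITIVE_LONG_PATTERN.matcher(value).matches()) {
            return UNDEFINED_ACCRUAL; // named constant reused anywhere -1 was returned
        }
        return Long.parseLong(value);
    }
}
```

Callers can then compare against the constant (or rely on `isAccrualPossible`) rather than the magic number.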
[GitHub] [nifi] exceptionfactory closed pull request #6497: [NIFI-10428] Added support for an Avro directory based registry.
exceptionfactory closed pull request #6497: [NIFI-10428] Added support for an Avro directory based registry. URL: https://github.com/apache/nifi/pull/6497
[GitHub] [nifi] exceptionfactory commented on pull request #6497: [NIFI-10428] Added support for an Avro directory based registry.
exceptionfactory commented on PR #6497: URL: https://github.com/apache/nifi/pull/6497#issuecomment-1309388054

Closing this pull request for now, pending further discussion on a way forward in [NIFI-10428](https://issues.apache.org/jira/browse/NIFI-10428).
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6589: NIFI-10710 implement processor for AWS Polly, Textract, Translate, Tr…
exceptionfactory commented on code in PR #6589: URL: https://github.com/apache/nifi/pull/6589#discussion_r1018397754 ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/ml/AwsMLJobStatusGetter.java: ## @@ -0,0 +1,134 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.processors.aws.ml; + +import com.amazonaws.AmazonWebServiceClient; +import com.amazonaws.ClientConfiguration; +import com.amazonaws.ResponseMetadata; +import com.amazonaws.auth.AWSCredentials; +import com.amazonaws.http.SdkHttpMetadata; +import com.fasterxml.jackson.databind.MapperFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.json.JsonMapper; +import com.fasterxml.jackson.databind.module.SimpleModule; +import java.io.BufferedWriter; +import java.io.OutputStreamWriter; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessorInitializationContext; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processors.aws.AbstractAWSCredentialsProviderProcessor; + +abstract public class AwsMLJobStatusGetter Review Comment: Recommend spelling out the name: ```suggestion public abstract class AwsMachineLearningJobStatusGetter ``` ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/ml/AwsMLJobStatusGetter.java: ## @@ -0,0 +1,134 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.nifi.processors.aws.ml; + +import com.amazonaws.AmazonWebServiceClient; +import com.amazonaws.ClientConfiguration; +import com.amazonaws.ResponseMetadata; +import com.amazonaws.auth.AWSCredentials; +import com.amazonaws.http.SdkHttpMetadata; +import com.fasterxml.jackson.databind.MapperFeature; +import com.fasterxml.jackson.databind.ObjectMapper; +import com.fasterxml.jackson.databind.json.JsonMapper; +import com.fasterxml.jackson.databind.module.SimpleModule; +import java.io.BufferedWriter; +import java.io.OutputStreamWriter; +import java.nio.charset.StandardCharsets; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; +import org.apache.nifi.components.PropertyDescriptor; +import org.apache.nifi.flowfile.FlowFile; +import org.apache.nifi.processor.ProcessContext; +import org.apache.nifi.processor.ProcessSession; +import org.apache.nifi.processor.ProcessorInitializationContext; +import org.apache.nifi.processor.Relationship; +import org.apache.nifi.processors.aws.AbstractAWSCredentialsProviderProcessor; + +abstract public class AwsMLJobStatusGetter +extends AbstractAWSCredentialsProviderProcessor { Review Comment: The general convention for type variables is a single letter, so recommend changing `SERVICE` to `T`: ```suggestion extends AbstractAWSCredentialsProviderProcessor { ``` ## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/ml/AwsMLJobStatusGetter.java: ##
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6589: NIFI-10710 implement processor for AWS Polly, Textract, Translate, Tr…
exceptionfactory commented on code in PR #6589: URL: https://github.com/apache/nifi/pull/6589#discussion_r1018396631

## nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/pom.xml: ##

```diff
@@ -117,6 +117,26 @@
             <version>1.19.0-SNAPSHOT</version>
             <scope>provided</scope>
         </dependency>
+        <dependency>
+            <groupId>com.amazonaws</groupId>
+            <artifactId>aws-java-sdk-translate</artifactId>
+            <version>1.12.328</version>
+        </dependency>
+        <dependency>
+            <groupId>com.amazonaws</groupId>
+            <artifactId>aws-java-sdk-polly</artifactId>
+            <version>1.12.328</version>
+        </dependency>
+        <dependency>
+            <groupId>com.amazonaws</groupId>
+            <artifactId>aws-java-sdk-transcribe</artifactId>
+            <version>1.12.328</version>
+        </dependency>
+        <dependency>
+            <groupId>com.amazonaws</groupId>
+            <artifactId>aws-java-sdk-textract</artifactId>
+            <version>1.12.328</version>
+        </dependency>
```

Review Comment: On further review, moving forward with the current implementation based on SDK version 1 works, and subsequent refactoring for SDK version 2 can be handled once a new AWS Credentials Service is ready.
[GitHub] [nifi] markobean commented on pull request #6638: NIFI-10703 - Updated VersionedDataflow to support MaxEventDrivenThrea…
markobean commented on PR #6638: URL: https://github.com/apache/nifi/pull/6638#issuecomment-1309351043

Performed a full build with Java 11 (basic build without contrib-check profile). Installed and started NiFi. The flow.json.gz file included a property for event driven threads with default value of 1:
`"maxEventDrivenThreadCount": 1,`

Changed the value to 5. Confirmed new value in UI and in flow.json.gz and flow.xml.gz.
```
"maxEventDrivenThreadCount": 5,
<maxEventDrivenThreadCount>5</maxEventDrivenThreadCount>
```

Restarted NiFi. The Maximum event driven thread count returned to "1" in the UI. Confirmed in flow.json.gz and flow.xml.gz.
```
"maxEventDrivenThreadCount": 1,
<maxEventDrivenThreadCount>1</maxEventDrivenThreadCount>
```

This PR does not fix the issue. @thenatog Did you perform the above test? Did you get different results? I am about to be away for the long holiday weekend. If it is not resolved by next week, I'll take a deeper look to determine what is happening.
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6611: NIFI-10722 - Add handling of TBCD-STRING in nifi-asn1-services
exceptionfactory commented on code in PR #6611: URL: https://github.com/apache/nifi/pull/6611#discussion_r1018390500 ## nifi-nar-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/java/org/apache/nifi/jasn1/convert/converters/TbcdStringConverter.java: ## @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
```diff
+ */
+package org.apache.nifi.jasn1.convert.converters;
+
+import com.beanit.asn1bean.ber.types.BerOctetString;
+import com.beanit.asn1bean.ber.types.BerType;
+import org.apache.nifi.jasn1.convert.JASN1Converter;
+import org.apache.nifi.jasn1.convert.JASN1TypeAndValueConverter;
+import org.apache.nifi.serialization.record.DataType;
+import org.apache.nifi.serialization.record.RecordFieldType;
+
+public class TbcdStringConverter implements JASN1TypeAndValueConverter {
+
+    private static final String TBCD_STRING_TYPE = "TBCDSTRING";
+    private static final char[] TBCD_SYMBOLS = "0123456789*#abc".toCharArray();
+
+    @Override
+    public boolean supportsType(Class berType) {
+        boolean supportsType = BerOctetString.class.isAssignableFrom(berType) && isTbcdString(berType);
+
+        return supportsType;
+    }
+
+    @Override
+    public DataType convertType(Class berType, JASN1Converter converter) {
+        DataType dataType = RecordFieldType.STRING.getDataType();
+
+        return dataType;
+    }
+
+    @Override
+    public boolean supportsValue(BerType value, DataType dataType) {
+        boolean supportsValue = value instanceof BerOctetString && isTbcdString(value.getClass());
+
+        return supportsValue;
+    }
+
+    @Override
+    public Object convertValue(BerType value, DataType dataType, JASN1Converter converter) {
+        final BerOctetString berValue = ((BerOctetString) value);
+
+        byte[] bytes = berValue.value;
+
+        int size = (bytes == null ? 0 : bytes.length);
+        StringBuilder resultBuilder = new StringBuilder(2 * size);
+
+        for (int octetIndex = 0; octetIndex < size; ++octetIndex) {
+            int octet = bytes[octetIndex];
+
+            int digit2 = (octet >> 4) & 0xF;
+            int digit1 = octet & 0xF;
+
+            if (digit1 == 15) {
```

Review Comment: It would be helpful to define a `private static final` value for `15`, perhaps named something `MAXIMUM_DECIMAL_CODE`?
## nifi-nar-bundles/nifi-asn1-bundle/nifi-asn1-services/src/main/java/org/apache/nifi/jasn1/convert/converters/TbcdStringConverter.java: ## @@ -0,0 +1,110 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.jasn1.convert.converters; + +import com.beanit.asn1bean.ber.types.BerOctetString; +import com.beanit.asn1bean.ber.types.BerType; +import org.apache.nifi.jasn1.convert.JASN1Converter; +import org.apache.nifi.jasn1.convert.JASN1TypeAndValueConverter; +import org.apache.nifi.serialization.record.DataType; +import org.apache.nifi.serialization.record.RecordFieldType; + +public class TbcdStringConverter implements JASN1TypeAndValueConverter { + +private static final String TBCD_STRING_TYPE = "TBCDSTRING"; +private static final char[] TBCD_SYMBOLS = "0123456789*#abc".toCharArray(); + +@Override +public boolean supportsType(Class berType) { +boolean supportsType = BerOctetString.class.isAssignableFrom(berType) && isTbcdString(berType); + +return supportsType; +} + +@Override +public DataType convertType(Class berType, JASN1Converter converte
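The decoding logic under review unpacks two TBCD digits per octet: the low nibble is the first digit, the high nibble the second, and the value `15` (`0xF`) acts as filler. A standalone sketch of that decoding, with the named constant the reviewer suggests (the `TbcdSketch` class and `decodeTbcd` helper are illustrative names, not the actual converter, which also handles NiFi record types):

```java
// Hypothetical standalone sketch of TBCD-STRING decoding, mirroring the
// convertValue loop shown in the diff. Each octet packs two digits:
// low nibble first, high nibble second; 0xF marks filler/termination.
class TbcdSketch {
    private static final char[] TBCD_SYMBOLS = "0123456789*#abc".toCharArray();
    private static final int FILLER = 15; // named constant, per the review comment

    static String decodeTbcd(byte[] bytes) {
        final int size = (bytes == null) ? 0 : bytes.length;
        final StringBuilder result = new StringBuilder(2 * size);
        for (int i = 0; i < size; i++) {
            final int octet = bytes[i] & 0xFF;
            final int digit1 = octet & 0xF;        // low nibble: first digit
            final int digit2 = (octet >> 4) & 0xF; // high nibble: second digit
            if (digit1 == FILLER) {
                break; // filler in the first position terminates the string
            }
            result.append(TBCD_SYMBOLS[digit1]);
            if (digit2 != FILLER) {
                result.append(TBCD_SYMBOLS[digit2]);
            }
        }
        return result.toString();
    }
}
```

For example, the digit string "1234" is packed as the octets `0x21 0x43`, and an odd-length string such as "123" pads the final high nibble with `0xF`.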
[GitHub] [nifi] exceptionfactory commented on pull request #6596: NIFI-10717 fix inconsistent tests
exceptionfactory commented on PR #6596: URL: https://github.com/apache/nifi/pull/6596#issuecomment-1309340689

Thanks for the reply @ZhewenFu. The localized sorting makes sense, but it doesn't seem like the best solution given the behavior it is attempting to improve. If it is possible to put together a solution that makes the necessary changes using LinkedHashSet, that would be helpful to evaluate, even if it impacts more classes.
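The `LinkedHashSet` suggestion targets iteration-order determinism: `HashSet` makes no ordering guarantee (its order depends on hash codes and capacity), while `LinkedHashSet` always iterates in insertion order, which makes test assertions stable. A minimal illustration (the `OrderDemo` class and its sample values are hypothetical):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Demonstrates why LinkedHashSet yields deterministic behavior in tests:
// iteration over a LinkedHashSet follows insertion order exactly, whereas
// a plain HashSet may iterate in a different, JVM-dependent order.
class OrderDemo {
    static List<String> insertionOrderOf(Set<String> set) {
        set.add("beats");
        set.add("lumberjack");
        set.add("syslog");
        // List.copyOf preserves the set's iteration order.
        return List.copyOf(set);
    }
}
```

With a `LinkedHashSet`, the returned list is always `["beats", "lumberjack", "syslog"]`; with a `HashSet`, no particular order can be asserted.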
[jira] [Commented] (NIFI-10783) TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches uses non-deterministic HashMap
[ https://issues.apache.org/jira/browse/NIFI-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631270#comment-17631270 ]

ASF subversion and git services commented on NIFI-10783:
--------------------------------------------------------

Commit ad4e0b05853895ca6f3404ebd8cb27f3960d29f4 in nifi's branch refs/heads/main from sopan98
[ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ad4e0b0585 ]

NIFI-10783 Switched to LinkedHashMap for CompareFuzzyHash

This closes #6639

Signed-off-by: David Handermann

> TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches uses
> non-deterministic HashMap
> -------------------------------------------------------------------
>
>                 Key: NIFI-10783
>                 URL: https://issues.apache.org/jira/browse/NIFI-10783
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>         Environment: Apache Maven 3.6.0;
>                      openjdk version "1.8.0_342";
>                      OpenJDK Runtime Environment (build 1.8.0_342-8u342-b07-0ubuntu1~20.04-b07);
>                      OpenJDK 64-Bit Server VM (build 25.342-b07, mixed mode);
>            Reporter: Sopan Phaltankar
>            Assignee: Sopan Phaltankar
>            Priority: Trivial
>             Fix For: 1.19.0
>
>         Attachments: org.apache.nifi.processors.cybersecurity.TestCompareFuzzyHash.txt
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> {code:java}
> org.apache.nifi.processors.cybersecurity.TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches{code}
> This is a flaky test: it can pass mvn test, but it fails when run with the tool
> [NonDex|https://github.com/TestingResearchIllinois/NonDex]. NonDex
> is a tool that introduces non-determinism in certain Java collections.
> The test output shows:
> {code:java}
> [ERROR] Failures:
> [ERROR] TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches:232
> Expected attribute fuzzyhash.value.0.match to be
> nifi-nar-bundles/nifi-lumberjack-bundle/nifi-lumberjack-processors/pom.xml
> but instead it was
> nifi-nar-bundles/nifi-beats-bundle/nifi-beats-processors/pom.xml ==>
> expected:
> but was:
> {code}
> *Steps to reproduce the failure:*
> # First, build the module:
> {noformat}
> mvn install -pl nifi-nar-bundles/nifi-cybersecurity-bundle/nifi-cybersecurity-processors -DskipTests -Drat.skip -am{noformat}
> # Then run the test using [NonDex|https://github.com/TestingResearchIllinois/NonDex]:
> {noformat}
> mvn -pl nifi-nar-bundles/nifi-cybersecurity-bundle/nifi-cybersecurity-processors nondex:nondex -Dtest=org.apache.nifi.processors.cybersecurity.TestCompareFuzzyHash#testTLSHCompareFuzzyHashMultipleMatches{noformat}
> The result will be saved under the module folder in .nondex
> Another test, TestCompareFuzzyHash.testSsdeepCompareFuzzyHashMultipleMatches,
> depended on the HashMap which was changed as a part of the fix for this JIRA.
> The simple fix for this was to make the order of checking the items the same
> as the insertion order.
[jira] [Resolved] (NIFI-10783) TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches uses non-deterministic HashMap
[ https://issues.apache.org/jira/browse/NIFI-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Handermann resolved NIFI-10783.
-------------------------------------
    Resolution: Fixed

> TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches uses
> non-deterministic HashMap
[GitHub] [nifi] exceptionfactory closed pull request #6639: NIFI-10783 Fix Flaky Test TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches
exceptionfactory closed pull request #6639: NIFI-10783 Fix Flaky Test TestCompareFuzzyHash.testTLSHCompareFuzzyHashMultipleMatches URL: https://github.com/apache/nifi/pull/6639
[jira] [Created] (NIFI-10791) Add AWS V2 SDK implementation to AWSCredentialsProviderControllerService
Joe Gresock created NIFI-10791:
-------------------------------

             Summary: Add AWS V2 SDK implementation to AWSCredentialsProviderControllerService
                 Key: NIFI-10791
                 URL: https://issues.apache.org/jira/browse/NIFI-10791
             Project: Apache NiFi
          Issue Type: Improvement
            Reporter: Joe Gresock
            Assignee: Joe Gresock

In anticipation of upgrading various AWS processors to use the v2 SDK, it would be good to add support for retrieving a v2 AWS credentials provider to the existing AWSCredentialsProviderControllerService.
[jira] [Updated] (NIFI-10702) GetSMBFile fails with Different server found for same hostname
[ https://issues.apache.org/jira/browse/NIFI-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Peter Turcsanyi updated NIFI-10702:
-----------------------------------
    Fix Version/s: 1.19.0
       Resolution: Fixed
           Status: Resolved  (was: Patch Available)

> GetSMBFile fails with Different server found for same hostname
> --------------------------------------------------------------
>
>                 Key: NIFI-10702
>                 URL: https://issues.apache.org/jira/browse/NIFI-10702
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core UI
>    Affects Versions: 1.18.0
>         Environment: RedHat 6 using openjdk 11 2018-09-25
>                      OpenJDK Runtime Environment 18.9 (build 11+28)
>                      OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode)
>            Reporter: Aaron Schultz
>            Assignee: Kulik Gábor
>            Priority: Minor
>             Fix For: 1.19.0
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> After almost exactly 1 week of running 1.18.0, previously configured
> GetSMBFile processes are reporting that they cannot retrieve files due to
> "Different server found for same hostname" error. This happened on more than
> 1 of my GetSMBFile connectors. There does not appear to be a way to force it
> to "forget" the server; restarting the processor did not fix it. A full NiFi
> restart does resolve the issue.
>
> Dump information:
> {quote}2022-10-26 08:46:51,188 INFO [Timer-Driven Process Thread-55] c.hierynomus.smbj.connection.Connection Closed connection to servername.domain
> 2022-10-26 08:46:51,188 INFO [Packet Reader for servername.domain] c.h.s.t.tcp.direct.DirectTcpPacketReader Thread[Packet Reader for servername.domain,5,main] stopped.
> 2022-10-26 08:46:51,188 ERROR [Timer-Driven Process Thread-55] o.apache.nifi.processors.smb.GetSmbFile GetSmbFile[id=d212416c-9cb0-1f8e-cf35-c4cdfe011e42] Could not establish smb connection because of error com.hierynomus.protocol.transport.TransportException: Different server found for same hostname 'servername.domain', disconnecting...
> com.hierynomus.protocol.transport.TransportException: Different server found for same hostname 'servername.domain', disconnecting...
>     at com.hierynomus.smbj.connection.SMBProtocolNegotiator.initializeOrValidateServerDetails(SMBProtocolNegotiator.java:232)
>     at com.hierynomus.smbj.connection.SMBProtocolNegotiator.negotiateDialect(SMBProtocolNegotiator.java:83)
>     at com.hierynomus.smbj.connection.Connection.connect(Connection.java:141)
>     at com.hierynomus.smbj.SMBClient.getEstablishedOrConnect(SMBClient.java:96)
>     at com.hierynomus.smbj.SMBClient.connect(SMBClient.java:71)
>     at org.apache.nifi.processors.smb.GetSmbFile.onTrigger(GetSmbFile.java:390)
>     at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
>     at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1354)
>     at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246)
>     at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102)
>     at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
>     at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>     at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>     at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> {quote}
>
> A possible answer was found here:
> [https://github.com/hierynomus/smbj/issues/672]
[jira] [Commented] (NIFI-10702) GetSMBFile fails with Different server found for same hostname
[ https://issues.apache.org/jira/browse/NIFI-10702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631262#comment-17631262 ] ASF subversion and git services commented on NIFI-10702: Commit 1bd4169558c6825bf9f804bf9a48fc91c3d3da4a in nifi's branch refs/heads/main from Gabor Kulik [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1bd4169558 ] NIFI-10702 Clear server list on connection error in SMB processors This closes #6620. Signed-off-by: Peter Turcsanyi > GetSMBFile fails with Different server found for same hostname > -- > > Key: NIFI-10702 > URL: https://issues.apache.org/jira/browse/NIFI-10702 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.18.0 > Environment: RedHat 6 using openjdk 11 2018-09-25 > OpenJDK Runtime Environment 18.9 (build 11+28) > OpenJDK 64-Bit Server VM 18.9 (build 11+28, mixed mode) >Reporter: Aaron Schultz >Assignee: Kulik Gábor >Priority: Minor > Time Spent: 0.5h > Remaining Estimate: 0h > > After almost exactly 1 week of running 1.18.0, previously configured > GetSMBFile processes are reporting that they cannot retrieve files due to > "Different server found for same hostname" error. This happened on more than > 1 of my GetSMBFile connectors. There does not appear to be a way to force it > to "forget" the server; restarting the processor did not fix it. A full NiFi > restart does resolve the issue. > > Dump information: > {quote}2022-10-26 08:46:51,188 INFO [Timer-Driven Process Thread-55] > c.hierynomus.smbj.connection.Connection Closed connection to servername.domain > 2022-10-26 08:46:51,188 INFO [Packet Reader for servername.domain] > c.h.s.t.tcp.direct.DirectTcpPacketReader Thread[Packet Reader for > servername.domain,5,main] stopped. 
> 2022-10-26 08:46:51,188 ERROR [Timer-Driven Process Thread-55] > o.apache.nifi.processors.smb.GetSmbFile > GetSmbFile[id=d212416c-9cb0-1f8e-cf35-c4cdfe011e42] Could not establish smb > connection because of error co > m.hierynomus.protocol.transport.TransportException: Different server found > for same hostname 'servername.domain', disconnecting... > com.hierynomus.protocol.transport.TransportException: Different server found > for same hostname 'servername.domain', disconnecting... > at > com.hierynomus.smbj.connection.SMBProtocolNegotiator.initializeOrValidateServerDetails(SMBProtocolNegotiator.java:232) > at > com.hierynomus.smbj.connection.SMBProtocolNegotiator.negotiateDialect(SMBProtocolNegotiator.java:83) > at > com.hierynomus.smbj.connection.Connection.connect(Connection.java:141) > at > com.hierynomus.smbj.SMBClient.getEstablishedOrConnect(SMBClient.java:96) > at com.hierynomus.smbj.SMBClient.connect(SMBClient.java:71) > at > org.apache.nifi.processors.smb.GetSmbFile.onTrigger(GetSmbFile.java:390) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1354) > at > org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:246) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:102) > at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110) > at > java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) > at > java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) > at > java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305) > at > java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) > at > java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) > at 
java.base/java.lang.Thread.run(Thread.java:834) > {quote} > > A possible answer was found here: > [https://github.com/hierynomus/smbj/issues/672] -- This message was sent by Atlassian Jira (v8.20.10#820010)
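The linked smbj issue and the NIFI-10702 fix ("Clear server list on connection error") boil down to a reset-on-failure pattern: when connecting throws, discard the client whose cached server details may be stale and build a fresh one, so the next trigger is not stuck with "Different server found for same hostname" until a full restart. A self-contained sketch of that pattern — the factory/client interfaces below are stand-ins, not the smbj or NiFi APIs:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

// Stand-in analogue of the NIFI-10702 fix: replace the client (and with it the
// cached server list) whenever a connection attempt fails.
public class ReconnectingSmbClientHolder {
    interface SmbClientFactory { SmbClient create(); }
    interface SmbClient {
        void connect(String hostname) throws IOException;
        void close();
    }

    private final SmbClientFactory factory;
    private SmbClient client;

    public ReconnectingSmbClientHolder(SmbClientFactory factory) {
        this.factory = factory;
        this.client = factory.create();
    }

    public void connect(String hostname) {
        try {
            client.connect(hostname);
        } catch (IOException e) {
            // Drop the cached server details by replacing the client entirely,
            // then surface the original error to the caller.
            client.close();
            client = factory.create();
            throw new UncheckedIOException(e);
        }
    }
}
```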
[GitHub] [nifi] asfgit closed pull request #6620: NIFI-10702 Clear server list on connection error in SMB processors
asfgit closed pull request #6620: NIFI-10702 Clear server list on connection error in SMB processors URL: https://github.com/apache/nifi/pull/6620 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-10788) Component synchronizer not setting proposed fields when adding CS
[ https://issues.apache.org/jira/browse/NIFI-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631259#comment-17631259 ] ASF subversion and git services commented on NIFI-10788: Commit 9c21e26e63ecb52ef6c49b4c173efe5b6318a821 in nifi's branch refs/heads/main from Bryan Bende [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=9c21e26e63 ] NIFI-10788 Ensure proposed service config is applied when component synchronizer adds a new service (#6640) This closes #6640 > Component synchronizer not setting proposed fields when adding CS > - > > Key: NIFI-10788 > URL: https://issues.apache.org/jira/browse/NIFI-10788 > Project: Apache NiFi > Issue Type: Bug >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > When the component synchronizer adds a controller service, it never calls > updateControllerService to set the remaining config from the proposed > component.
[jira] [Resolved] (NIFI-10788) Component synchronizer not setting proposed fields when adding CS
[ https://issues.apache.org/jira/browse/NIFI-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-10788. Fix Version/s: 1.19.0 Resolution: Fixed > Component synchronizer not setting proposed fields when adding CS > - > > Key: NIFI-10788 > URL: https://issues.apache.org/jira/browse/NIFI-10788 > Project: Apache NiFi > Issue Type: Bug >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Major > Fix For: 1.19.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > When the component synchronizer adds a controller service, it never calls > updateControllerService to set the remaining config from the proposed > component.
[GitHub] [nifi] mcgilman merged pull request #6640: NIFI-10788 Ensure proposed service config is applied when component s…
mcgilman merged PR #6640: URL: https://github.com/apache/nifi/pull/6640
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6273: NIFI-9953 - The config encryption tool is too complicated to use and can be simplified
exceptionfactory commented on code in PR #6273: URL: https://github.com/apache/nifi/pull/6273#discussion_r1018358816 ## nifi-toolkit/nifi-property-encryptor-tool/src/main/java/org/apache/nifi/util/console/utils/SchemeCandidates.java: ## @@ -0,0 +1,27 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.util.console.utils; + +import org.apache.nifi.properties.scheme.StandardProtectionSchemeResolver; + +import java.util.ArrayList; + +public class SchemeCandidates extends ArrayList { +SchemeCandidates() { +super(new StandardProtectionSchemeResolver().getSupportedProtectionSchemes()); Review Comment: Instead of returning the internal enum names, we should return the values of `ProtectionScheme.getPath()`. Those are the values used in the configuration files, and this is a good opportunity to switch the public arguments. 
## nifi-registry/nifi-registry-core/nifi-registry-properties-loader/pom.xml: ## @@ -54,5 +54,17 @@ org.apache.nifi nifi-property-protection-loader + +org.apache.nifi +nifi-properties-loader +1.18.0-SNAPSHOT Review Comment: ```suggestion 1.19.0-SNAPSHOT ``` ## nifi-toolkit/nifi-property-encryptor-tool/src/main/java/org/apache/nifi/PropertyEncryptorCommand.java: ## @@ -0,0 +1,281 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.nifi; + +import org.apache.nifi.encrypt.PropertyEncryptionMethod; +import org.apache.nifi.encrypt.PropertyEncryptor; +import org.apache.nifi.encrypt.PropertyEncryptorBuilder; +import org.apache.nifi.flow.encryptor.StandardFlowEncryptor; +import org.apache.nifi.properties.AbstractBootstrapPropertiesLoader; +import org.apache.nifi.properties.ApplicationProperties; +import org.apache.nifi.properties.BootstrapProperties; +import org.apache.nifi.properties.MutableApplicationProperties; +import org.apache.nifi.properties.MutableBootstrapProperties; +import org.apache.nifi.properties.NiFiPropertiesLoader; +import org.apache.nifi.properties.PropertiesLoader; +import org.apache.nifi.properties.ProtectedPropertyContext; +import org.apache.nifi.properties.SensitivePropertyProvider; +import org.apache.nifi.properties.SensitivePropertyProviderFactory; +import org.apache.nifi.properties.StandardSensitivePropertyProviderFactory; +import org.apache.nifi.properties.scheme.ProtectionScheme; +import org.apache.nifi.registry.properties.NiFiRegistryPropertiesLoader; +import org.apache.nifi.registry.properties.util.NiFiRegistryBootstrapPropertiesLoader; +import org.apache.nifi.security.util.KeyDerivationFunction; +import org.apache.nifi.security.util.crypto.SecureHasherFactory; +import org.apache.nifi.serde.StandardPropertiesWriter; +import org.apache.nifi.util.NiFiBootstrapPropertiesLoader; +import org.apache.nifi.util.NiFiProperties; +import org.apache.nifi.util.file.ConfigurationFileResolver; +import org.apache.nifi.util.file.ConfigurationFileUtils; +import org.apache.nifi.util.file.NiFiConfigurationFileResolver; +import org.apache.nifi.util.file.NiFiFlowDefinitionFileResolver; +import org.apache.nifi.util.file.NiFiRegistryConfigurationFileResolver; +import org.apache.nifi.util.properties.NiFiRegistrySensitivePropertyResolver; +import org.apache.nifi.util.properties.NiFiSensitivePropertyResolver; +import 
org.apache.nifi.util.properties.SensitivePropertyResolver
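The review's suggestion — build the CLI completion candidates from `ProtectionScheme.getPath()` (the identifiers actually used in configuration files) rather than internal enum names — can be sketched with a self-contained analogue. The enum and its path values below are illustrative assumptions, not NiFi's real protection schemes:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.stream.Collectors;

// picocli-style candidate list: an Iterable<String> class instantiated
// reflectively and iterated for completion. Populated from each scheme's
// configuration-file path instead of its enum name.
public class SchemePathCandidates extends ArrayList<String> {
    enum Scheme {
        AWS_KMS("aws/kms"),                    // made-up stand-in values
        HASHICORP_VAULT_KV("hashicorp/vault/kv");

        private final String path;
        Scheme(String path) { this.path = path; }
        String getPath() { return path; }
    }

    public SchemePathCandidates() {
        super(Arrays.stream(Scheme.values())
                .map(Scheme::getPath)
                .collect(Collectors.toList()));
    }
}
```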
[jira] [Updated] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
[ https://issues.apache.org/jira/browse/NIFI-10790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-10790: -- Status: Patch Available (was: Open) > Update Snowflake JDBC driver to 3.13.24 > --- > > Key: NIFI-10790 > URL: https://issues.apache.org/jira/browse/NIFI-10790 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Update Snowflake JDBC driver to 3.13.24
[GitHub] [nifi] pvillard31 opened a new pull request, #6641: NIFI-10790 - Update Snowflake JDBC driver to 3.13.24
pvillard31 opened a new pull request, #6641: URL: https://github.com/apache/nifi/pull/6641 # Summary [NIFI-10790](https://issues.apache.org/jira/browse/NIFI-10790) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[jira] [Created] (NIFI-10790) Update Snowflake JDBC driver to 3.13.24
Pierre Villard created NIFI-10790: - Summary: Update Snowflake JDBC driver to 3.13.24 Key: NIFI-10790 URL: https://issues.apache.org/jira/browse/NIFI-10790 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Pierre Villard Assignee: Pierre Villard Update Snowflake JDBC driver to 3.13.24
[jira] [Updated] (NIFI-10713) Add Deprecation Logging to OpenPGP in EncryptContent
[ https://issues.apache.org/jira/browse/NIFI-10713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10713: Fix Version/s: 1.19.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Add Deprecation Logging to OpenPGP in EncryptContent > > > Key: NIFI-10713 > URL: https://issues.apache.org/jira/browse/NIFI-10713 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Fix For: 1.19.0 > > Time Spent: 40m > Remaining Estimate: 0h > > The OpenPGP encryption and decryption features of EncryptContent should be > marked as deprecated in favor of the EncryptContentPGP and DecryptContentPGP > Processors. Configuring OpenPGP properties in the EncryptContent Processor > should produce deprecation logs indicating that the features will be removed > from the Processor in future major releases.
[jira] [Resolved] (NIFI-10782) Upgrade Apache Ivy to 2.5.1
[ https://issues.apache.org/jira/browse/NIFI-10782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-10782. - Fix Version/s: 1.19.0 Resolution: Fixed > Upgrade Apache Ivy to 2.5.1 > --- > > Key: NIFI-10782 > URL: https://issues.apache.org/jira/browse/NIFI-10782 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: David Handermann >Assignee: David Handermann >Priority: Minor > Labels: dependency-upgrade > Fix For: 1.19.0 > > Time Spent: 40m > Remaining Estimate: 0h > > Dependencies on Apache Ivy should be upgraded from 2.5.0 to > [2.5.1|https://ant.apache.org/ivy/history/2.5.1/release-notes.html] in > scripting bundles.
[jira] [Created] (NIFI-10789) Write error details in FlowFile attributes for FetchAzureDatalakeStorage
Emilio Setiadarma created NIFI-10789: Summary: Write error details in FlowFile attributes for FetchAzureDatalakeStorage Key: NIFI-10789 URL: https://issues.apache.org/jira/browse/NIFI-10789 Project: Apache NiFi Issue Type: Improvement Reporter: Emilio Setiadarma Assignee: Emilio Setiadarma Processors such as `PutS3Object` or `FetchHDFS` write failure reasons as a flowfile attribute. Currently `FetchAzureDatalakeStorage` only records failures in the logs, and sets no attributes indicating the failure or its reason. This issue adds flowfile attributes that store the error and the details of the failure for `FetchAzureDatalakeStorage`.
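A minimal sketch of what NIFI-10789 proposes. The attribute names below are assumptions modeled on the `azure.datalake.storage.*` prefix in the PR summary, not the final names chosen in the PR; in a processor, the resulting map would be applied with `session.putAllAttributes(flowFile, attrs)` before routing to the failure relationship:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds failure attributes from a caught exception. Attribute names are
// illustrative assumptions following the azure.datalake.storage.* prefix.
public class FetchFailureAttributes {
    public static Map<String, String> errorAttributes(Exception e, int statusCode) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("azure.datalake.storage.statusCode", Integer.toString(statusCode));
        attrs.put("azure.datalake.storage.errorCode", e.getClass().getSimpleName());
        attrs.put("azure.datalake.storage.errorMessage", String.valueOf(e.getMessage()));
        return attrs;
    }
}
```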
[jira] [Updated] (NIFI-10256) CSVRecordReader using RFC 4180 CSV format trimming starting and ending double quotes
[ https://issues.apache.org/jira/browse/NIFI-10256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-10256: Fix Version/s: 1.18.0 > CSVRecordReader using RFC 4180 CSV format trimming starting and ending double > quotes > > > Key: NIFI-10256 > URL: https://issues.apache.org/jira/browse/NIFI-10256 > Project: Apache NiFi > Issue Type: Bug >Reporter: Timea Barna >Assignee: Timea Barna >Priority: Major > Fix For: 1.18.0 > > Time Spent: 1h > Remaining Estimate: 0h > > Given an input CSV file: > scenario,name > Honors escape beginning," ""John ""PA""RKINSON""" > problematic,"""John ""PA""RKINSON""" > honors escape end,"""John ""PA""RKINSON" > Based on the RFC 4180 spec: > https://datatracker.ietf.org/doc/html/rfc4180 > " If double-quotes are used to enclose fields, then a double-quote > appearing inside a field must be escaped by preceding it with > another double quote. For example: > "aaa","b""bb","ccc" > " > The output should be like this: > [ > { "scenario" : "expected_with_space", "name" : " \"John \"PA\"RKINSON\"" } > , > { "scenario" : "problematic", "name" : "\"John \"PA\"RKINSON\"" } > , > { "scenario" : "expected_remove_end_quote", "name" : "\"John \"PA\"RKINSON" } > ] > However the output is like this: > [ > { "scenario" : "expected_with_space", "name" : " \"John \"PA\"RKINSON\"" } > , > { "scenario" : "problematic", "name" : "John \"PA\"RKINSON" } > , > { "scenario" : "expected_remove_end_quote", "name" : "\"John \"PA\"RKINSON" } > ] > Notice the "problematic" field which initially is """John ""PA""RKINSON""" > and based on the RFC spec it should have returned this value "\"John > \"PA\"RKINSON\"" but instead it returns "John \"PA\"RKINSON" missing the > starting and ending double quotes. > Notice that the other 2 fields expected_remove_end_quote and > expected_with_space do work as expected given the RFC spec.
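The RFC 4180 unescaping rule at issue can be demonstrated in a few lines of plain Java. This illustrates the spec's rule, not NiFi's CSVRecordReader implementation: the enclosing quotes delimit the field, each embedded `""` collapses to one `"`, and the starting/ending quotes of the *content* (the ones the bug was trimming) must survive.

```java
// RFC 4180 unescaping for a single quoted field.
public class Rfc4180Field {
    public static String unquote(String rawField) {
        if (rawField.length() >= 2 && rawField.startsWith("\"") && rawField.endsWith("\"")) {
            // strip the enclosing delimiters, then collapse each "" to "
            String inner = rawField.substring(1, rawField.length() - 1);
            return inner.replace("\"\"", "\"");
        }
        return rawField; // unquoted field, taken verbatim
    }

    public static void main(String[] args) {
        // the "problematic" row from the report: """John ""PA""RKINSON"""
        String raw = "\"\"\"John \"\"PA\"\"RKINSON\"\"\"";
        System.out.println(unquote(raw)); // "John "PA"RKINSON"
    }
}
```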
[jira] [Resolved] (NIFI-9540) Failed to start web server on 1.15.x when using OIDC
[ https://issues.apache.org/jira/browse/NIFI-9540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-9540. Resolution: Cannot Reproduce The {{nifi.security.user.knox.url}} property also needs to be blank, or this error can occur. Feel free to follow up if this issue persists in current versions. > Failed to start web server on 1.15.x when using OIDC > > > Key: NIFI-9540 > URL: https://issues.apache.org/jira/browse/NIFI-9540 > Project: Apache NiFi > Issue Type: Bug >Reporter: Pedro Naresi >Priority: Major > > When trying to set up OIDC auth with Azure AD on the 1.15.x versions I keep > getting: > > _2022-01-05 22:23:38,242 *ERROR* [NiFi logging handler] > org.apache.nifi.StdErr Failed to start web server: Error creating bean with > name 'oidcService' defined in > org.apache.nifi.web.security.configuration.OidcAuthenticationSecurityConfiguration: > Bean instantiation via factory method failed; nested exception is > org.springframework.beans.BeanInstantiationException: Failed to instantiate > [org.apache.nifi.web.security.oidc.OidcService]: Factory method 'oidcService' > threw exception; nested exception is java.lang.RuntimeException: OpenId > Connect support cannot be enabled if the Login Identity Provider or Apache > Knox SSO is configured._ > > My _nifi.properties_ setup is: > nifi.security.user.authorizer=managed-authorizer > nifi.security.user.login.identity.provider= > nifi.login.identity.provider.configuration.file=./conf/login-identity-providers.xml > > I have already tried every single combination of authorizer and identity and > still had no success.
[jira] [Commented] (NIFI-9406) Registry logout with OIDC redirects to HTTP DELETE
[ https://issues.apache.org/jira/browse/NIFI-9406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631195#comment-17631195 ] Emilio Setiadarma commented on NIFI-9406: - Should be resolved after resolving (NIFI-10177) https://github.com/apache/nifi/pull/6637 > Registry logout with OIDC redirects to HTTP DELETE > -- > > Key: NIFI-9406 > URL: https://issues.apache.org/jira/browse/NIFI-9406 > Project: Apache NiFi > Issue Type: Bug > Components: NiFi Registry >Affects Versions: 0.8.0, 1.13.2 >Reporter: humpfhumpf >Priority: Major > > When NiFi Registry is configured with OIDC authentication, the logout link > redirects to the logout URL of the Identity Provider (Keycloak in my case) > with HTTP *DELETE* method, instead of {*}GET{*}. > NiFi Registry shows an error box : "Please contact your System Administrator." > NiFi does not have this bug. > +Network calls :+ > * DELETE > [https://revproxy/nifi-registry/logout|https://proxy-nifi-registry-zds1.admin.artemis/nifi-registry/logout] > ** HTTP 302 ("redirect") > * OPTIONS > [https://revproxy/auth/realms/myrealm/protocol/openid-connect/logout?post_logout_redirect_uri=https://revproxy/nifi-registry-api/../nifi-registry] > * > ** HTTP 204 (No Content) > * DELETE > [https://revproxy/auth/realms/myrealm/protocol/openid-connect/logout?post_logout_redirect_uri=https://revproxy/nifi-registry-api/../nifi-registry] > ** HTTP 405 (Method not allowed)
[jira] [Commented] (NIFI-8836) Logout causes NullPointerException and continues to display resources anonymous should not see
[ https://issues.apache.org/jira/browse/NIFI-8836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631194#comment-17631194 ] Emilio Setiadarma commented on NIFI-8836: - Should be resolved after resolving (NIFI-10177) https://github.com/apache/nifi/pull/6637 > Logout causes NullPointerException and continues to display resources > anonymous should not see > --- > > Key: NIFI-8836 > URL: https://issues.apache.org/jira/browse/NIFI-8836 > Project: Apache NiFi > Issue Type: Bug > Components: NiFi Registry >Reporter: Chris Sampson >Assignee: Nathan Gough >Priority: Major > > After configuring OIDC login through NiFi Registry UI (which I note appears > to need an explicit click of the {{Login}} button in the UI rather than > automatically logging the user in like NiFi UI), I see the following > behaviour: > * {{Login}} via OIDC (link in UI) > * Display list of buckets (to which {{anonymous}} users do not have access) > * {{Logout}} (link in UI) > * See the below log from NiFi Registry > * Note that the buckets are still displayed in the UI for the {{anonymous}} > user > {code:java} > 2021-02-15 17:10:48,374 ERROR [NiFi Registry Web Server-18] > o.a.n.r.web.mapper.ThrowableMapper An unexpected error has occurred: > java.lang.NullPointerException. Returning Internal Server Error response. 
> java.lang.NullPointerException: null > at java.util.regex.Matcher.getTextLength(Matcher.java:1283) > at java.util.regex.Matcher.reset(Matcher.java:309) > at java.util.regex.Matcher.(Matcher.java:229) > at java.util.regex.Pattern.matcher(Pattern.java:1093) > at > org.apache.nifi.registry.web.security.authentication.jwt.JwtService.getTokenFromHeader(JwtService.java:238) > at > org.apache.nifi.registry.web.security.authentication.jwt.JwtService.logOutUsingAuthHeader(JwtService.java:233) > at > org.apache.nifi.registry.web.api.AccessResource.oidcLogout(AccessResource.java:708) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) > at > org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$VoidOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:159) > at > org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) > at > org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) > at > org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) > 
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) > at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) > at org.glassfish.jersey.internal.Errors.process(Errors.java:292) > at org.glassfish.jersey.internal.Errors.process(Errors.java:274) > at org.glassfish.jersey.internal.Errors.process(Errors.java:244) > at > org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) > at > org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) > at > org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:680) > at > org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392) > at > org.glassfish.jersey.servlet.ServletContainer.serviceImpl(ServletContainer.java:385) > at > org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:560) > at > org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:501) > at > org.glassfish.jersey.servlet.ServletContainer.doFilter(ServletContainer.java:438) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610) > at > org.eclipse.jetty.servlet.ServletH
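The trace bottoms out in `Matcher.reset(null)` inside `JwtService.getTokenFromHeader`, i.e. the Authorization header is null once the user is anonymous. A defensive sketch of the missing null guard follows; the method shape and the regex are assumptions based on the trace, not the actual NiFi Registry code:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Null-guarded bearer-token extraction. Pattern.matcher(null) is what throws
// the NullPointerException in the trace, so the header must be checked first.
public class BearerTokenParser {
    private static final Pattern BEARER = Pattern.compile("Bearer (\\S+)");

    public static String getTokenFromHeader(String authorizationHeader) {
        if (authorizationHeader == null) {
            return null; // anonymous request: no token to log out
        }
        Matcher matcher = BEARER.matcher(authorizationHeader);
        return matcher.matches() ? matcher.group(1) : null;
    }
}
```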
[jira] [Resolved] (NIFI-6837) Test and document 2FA using an external OIDC provider
[ https://issues.apache.org/jira/browse/NIFI-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann resolved NIFI-6837. Resolution: Information Provided > Test and document 2FA using an external OIDC provider > -- > > Key: NIFI-6837 > URL: https://issues.apache.org/jira/browse/NIFI-6837 > Project: Apache NiFi > Issue Type: Sub-task > Components: Security >Reporter: Nathan Gough >Assignee: Nathan Gough >Priority: Major > Attachments: Google OIDC and 2FA with NiFi.pdf > > > * Enable 2FA for an OIDC provider and enforce that users must complete 2FA to > authenticate before they can access NiFi
[GitHub] [nifi] tpalfy commented on pull request #6620: NIFI-10702 Clear server list on connection error in SMB processors
tpalfy commented on PR #6620: URL: https://github.com/apache/nifi/pull/6620#issuecomment-1309137762 LGTM+1
[GitHub] [nifi] mcgilman commented on pull request #6640: NIFI-10788 Ensure proposed service config is applied when component s…
mcgilman commented on PR #6640: URL: https://github.com/apache/nifi/pull/6640#issuecomment-1309131133 Will review...
[GitHub] [nifi] bbende opened a new pull request, #6640: NIFI-10788 Ensure proposed service config is applied when component s…
bbende opened a new pull request, #6640: URL: https://github.com/apache/nifi/pull/6640 …ynchronizer adds a new service # Summary [NIFI-0](https://issues.apache.org/jira/browse/NIFI-0) # Tracking Please complete the following tracking steps prior to pull request creation. ### Issue Tracking - [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created ### Pull Request Tracking - [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0` - [ ] Pull Request commit message starts with Apache NiFi Jira issue number, as such `NIFI-0` ### Pull Request Formatting - [ ] Pull Request based on current revision of the `main` branch - [ ] Pull Request refers to a feature branch with one commit containing changes # Verification Please indicate the verification steps performed prior to pull request creation. ### Build - [ ] Build completed using `mvn clean install -P contrib-check` - [ ] JDK 8 - [ ] JDK 11 - [ ] JDK 17 ### Licensing - [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html) - [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files ### Documentation - [ ] Documentation formatting appears as expected in rendered files
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on pull request #1449: MINIFICPP-1978 - Flush MergeContent bundles when its size would grow beyond max group size
adamdebreceni commented on PR #1449: URL: https://github.com/apache/nifi-minifi-cpp/pull/1449#issuecomment-1309099362 > Could you please update BinFiles documentation in `PROCESSORS.md`? To me it is not clear what this Processor is doing, and it could also be confusing for end users. Currently description is: "Bins flow files into buckets based on the number of entries or size of entries". I don't understand the following verbs and nouns in this context : "bins", "buckets", "entries". Please add some more explanation. Property description should also be updated, for example format is missing for Max Bin Age. definitely something we should address, created a ticket for it: https://issues.apache.org/jira/browse/MINIFICPP-1982
[jira] [Created] (MINIFICPP-1982) Fix documentation for BinFiles
Adam Debreceni created MINIFICPP-1982: - Summary: Fix documentation for BinFiles Key: MINIFICPP-1982 URL: https://issues.apache.org/jira/browse/MINIFICPP-1982 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Adam Debreceni BinFiles processor documentation is confusing; we should clarify the meaning of "bin", "group", and "entry", and explain the processor's general principle and purpose of operation.
[jira] [Comment Edited] (NIFI-10754) Nifi Expression Language method urlEncode does not encode a URL path correctly when it contains white space
[ https://issues.apache.org/jira/browse/NIFI-10754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631175#comment-17631175 ] David Handermann edited comment on NIFI-10754 at 11/9/22 5:23 PM: -- Having a function that takes multiple arguments would fall into the category of [subjectless functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#subjectless]. In that case, it might make more sense to name it something like {{getUri}}. was (Author: exceptionfactory): Having a function that takes multiple arguments would fall into the category of [subjectless functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#subjectless]. In that case, it might make more sense to name it something thing {{getUri}}. > Nifi Expression Language method urlEncode does not encode a URL path correctly > when it contains white space > --- > > Key: NIFI-10754 > URL: https://issues.apache.org/jira/browse/NIFI-10754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Daniel Stieglitz >Priority: Major > > The Nifi Expression Language urlEncode method replaces white space with a + > and not %20. That is fine for the query section of a URL but not the path. > There are many Stackoverflow posts which detail the issue. Here is one, > [URLEncoder not able to translate space > character|https://stackoverflow.com/questions/4737841/urlencoder-not-able-to-translate-space-character] > Our particular scenario is where we build a URL from attributes and then try > to call the URL with InvokeHTTP > e.g. of a URL with a space in its path > {code:java} > https://somehost/api/v1/somepath > /actual?att1=something&att2=somethingelse{code} > from that URL we only encode the part which may have special characters > {code:java} > somepath /actual{code} > urlEncode will convert that to > {code:java} > somepath+%2Factual{code} > The + in the URL path is not the same as a blank space hence the call to > InvokeHttp fails.
[jira] [Commented] (NIFI-10754) Nifi Expression Language method urlEncode does not encode a URL path correctly when it contains white space
[ https://issues.apache.org/jira/browse/NIFI-10754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631175#comment-17631175 ] David Handermann commented on NIFI-10754: - Having a function that takes multiple arguments would fall into the category of [subjectless functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#subjectless]. In that case, it might make more sense to name it something like {{getUri}}. > Nifi Expression Language method urlEncode does not encode a URL path correctly > when it contains white space > --- > > Key: NIFI-10754 > URL: https://issues.apache.org/jira/browse/NIFI-10754 > Project: Apache NiFi > Issue Type: Bug >Reporter: Daniel Stieglitz >Priority: Major > > The Nifi Expression Language urlEncode method replaces white space with a + > and not %20. That is fine for the query section of a URL but not the path. > There are many Stackoverflow posts which detail the issue. Here is one, > [URLEncoder not able to translate space > character|https://stackoverflow.com/questions/4737841/urlencoder-not-able-to-translate-space-character] > Our particular scenario is where we build a URL from attributes and then try > to call the URL with InvokeHTTP > e.g. of a URL with a space in its path > {code:java} > https://somehost/api/v1/somepath > /actual?att1=something&att2=somethingelse{code} > from that URL we only encode the part which may have special characters > {code:java} > somepath /actual{code} > urlEncode will convert that to > {code:java} > somepath+%2Factual{code} > The + in the URL path is not the same as a blank space hence the call to > InvokeHttp fails.
[jira] [Commented] (NIFI-10754) Nifi Expression Language method urlEncode does not encode a URL path correctly when it contains white space
[ https://issues.apache.org/jira/browse/NIFI-10754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631171#comment-17631171 ] Daniel Stieglitz commented on NIFI-10754: - [~exceptionfactory] I was thinking the new function would be called toUrl and it would take all the arguments java.net.URI takes. This would build the URL from NiFi attributes and correctly encode both the path and query sections at once.
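The encoding mismatch discussed in this thread can be reproduced with a few lines of plain Java; this is an illustrative sketch (class and variable names are invented here), not NiFi code:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PathEncodingDemo {
    public static void main(String[] args) throws URISyntaxException {
        String segment = "somepath /actual";

        // URLEncoder implements application/x-www-form-urlencoded:
        // spaces become '+', which is only meaningful in the query component
        System.out.println(URLEncoder.encode(segment, StandardCharsets.UTF_8));
        // -> somepath+%2Factual

        // The multi-argument URI constructor percent-encodes each component
        // by its own rules: the space in the path becomes %20
        URI uri = new URI("https", "somehost", "/api/v1/" + segment,
                "att1=something&att2=somethingelse", null);
        System.out.println(uri.toASCIIString());
        // -> https://somehost/api/v1/somepath%20/actual?att1=something&att2=somethingelse
    }
}
```

This is why a function shaped like java.net.URI, as suggested above, would sidestep the problem: the constructor applies path rules to the path and query rules to the query, rather than form encoding to everything.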
[jira] [Created] (NIFI-10788) Component synchronizer not setting proposed fields when adding CS
Bryan Bende created NIFI-10788: -- Summary: Component synchronizer not setting proposed fields when adding CS Key: NIFI-10788 URL: https://issues.apache.org/jira/browse/NIFI-10788 Project: Apache NiFi Issue Type: Bug Reporter: Bryan Bende Assignee: Bryan Bende When the component synchronizer adds a controller service, it never calls updateControllerService to set the remaining config from the proposed component.
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1434: MINIFICPP-1949 ConsumeWindowsEventLog precompiled regex
szaszm commented on code in PR #1434: URL: https://github.com/apache/nifi-minifi-cpp/pull/1434#discussion_r1018125775 ## extensions/windows-event-log/ConsumeWindowsEventLog.cpp: ## @@ -657,19 +673,16 @@ void ConsumeWindowsEventLog::refreshTimeZoneData() { long tzbias = 0; // NOLINT long comes from WINDOWS API bool dst = false; switch (ret) { -case TIME_ZONE_ID_INVALID: - logger_->log_error("Failed to get timezone information!"); +case TIME_ZONE_ID_INVALID:logger_->log_error("Failed to get timezone information!"); Review Comment: I prefer the old format here. ## extensions/windows-event-log/ConsumeWindowsEventLog.cpp: ## @@ -685,21 +698,27 @@ void ConsumeWindowsEventLog::refreshTimeZoneData() { logger_->log_trace("Timezone name: %s, offset: %s", timezone_name_, timezone_offset_); } -void ConsumeWindowsEventLog::putEventRenderFlowFileToSession(const EventRender& eventRender, core::ProcessSession& session) const { - auto commitFlowFile = [&] (const std::shared_ptr& flowFile, const std::string& content, const std::string& mimeType) { -session.writeBuffer(flowFile, content); -session.putAttribute(flowFile, core::SpecialFlowAttribute::MIME_TYPE, mimeType); -session.putAttribute(flowFile, "timezone.name", timezone_name_); -session.putAttribute(flowFile, "timezone.offset", timezone_offset_); -session.getProvenanceReporter()->receive(flowFile, provenanceUri_, getUUIDStr(), "Consume windows event logs", 0ms); -session.transfer(flowFile, Success); - }; +void ConsumeWindowsEventLog::putEventRenderFlowFileToSession(const EventRender& eventRender, + core::ProcessSession& session) const { Review Comment: Here, too ## extensions/windows-event-log/ConsumeWindowsEventLog.cpp: ## @@ -725,28 +744,24 @@ void ConsumeWindowsEventLog::putEventRenderFlowFileToSession(const EventRender& } } -void ConsumeWindowsEventLog::LogWindowsError(std::string error) const { +void ConsumeWindowsEventLog::LogWindowsError(const std::string& error) const { auto error_id = GetLastError(); LPVOID lpMsg; 
FormatMessage( -FORMAT_MESSAGE_ALLOCATE_BUFFER | -FORMAT_MESSAGE_FROM_SYSTEM, -NULL, -error_id, -MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), -(LPTSTR)&lpMsg, -0, NULL); + FORMAT_MESSAGE_ALLOCATE_BUFFER | + FORMAT_MESSAGE_FROM_SYSTEM, Review Comment: It would be clearer for these to go on the same line. But even better: use `std::error_code` instead of `FormatMessage`. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
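The reviewer's `std::error_code` suggestion can be sketched portably; the function name below is hypothetical, and on Windows the raw value would come from `GetLastError()` rather than the POSIX-style error number used when testing on other platforms:

```cpp
#include <string>
#include <system_error>

// Sketch of the suggestion: std::error_code resolves an OS error number to a
// human-readable message without the manual buffer management (allocation,
// LPVOID casts, language IDs) that FormatMessage requires.
std::string describe_os_error(int raw_error) {
    const std::error_code ec(raw_error, std::system_category());
    return ec.message();
}
```

On Windows, `std::system_category()` maps the code through the same system message tables FormatMessage consults, so `describe_os_error(GetLastError())` would replace the whole block above in one line.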
[GitHub] [nifi] MikeThomsen closed pull request #6460: NIFI-10562 Added MongoDB testcontainers to support integration testing.
MikeThomsen closed pull request #6460: NIFI-10562 Added MongoDB testcontainers to support integration testing. URL: https://github.com/apache/nifi/pull/6460
[GitHub] [nifi] MikeThomsen commented on pull request #6460: NIFI-10562 Added MongoDB testcontainers to support integration testing.
MikeThomsen commented on PR #6460: URL: https://github.com/apache/nifi/pull/6460#issuecomment-1308980196 Going to redo this from scratch @exceptionfactory to limit the scope.
[GitHub] [nifi] ChrisSamo632 commented on pull request #6628: NIFI-10067 enable use of script for Elasticsearch updates
ChrisSamo632 commented on PR #6628: URL: https://github.com/apache/nifi/pull/6628#issuecomment-1308978863 > > Or have you tried it and found something not working/you think could be improved? > > My vision for that processor would be a query builder that allows you to break down the process. Ex specifying the script or script id and then the update query. That combined with the ability to just use the flowfile body as a raw document to punt to update_by_query. That (to me) sounds like a (potentially breaking) change to the processor's API, best left to another ticket/discussion I'm happy to update the existing processor's documentation to clarify that the `query` property for `UpdateByQuery` is currently the request body that's sent to Elasticsearch for the `_update_by_query` endpoint, i.e. if you want to run a Script, write your JSON like `{"script": {"source": "..."}, "query": {"match_all":{}}}` -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] MikeThomsen commented on pull request #6628: NIFI-10067 enable use of script for Elasticsearch updates
MikeThomsen commented on PR #6628: URL: https://github.com/apache/nifi/pull/6628#issuecomment-1308972782 > Or have you tried it and found something not working/you think could be improved? My vision for that processor would be a query builder that allows you to break down the process. Ex specifying the script or script id and then the update query. That combined with the ability to just use the flowfile body as a raw document to punt to update_by_query. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] kevdoran commented on pull request #6587: NIFI-10701 Update MiNiFi docker base images to eclipse-temurin
kevdoran commented on PR #6587: URL: https://github.com/apache/nifi/pull/6587#issuecomment-1308968191 Thanks for the contribution @briansolo1985 and for the help reviewing @bejancsaba @ferencerdei. I should be able to complete my review soon.
[GitHub] [nifi-minifi-cpp] lordgamez commented on pull request #1450: MINIFICPP-1981 Decrease default C2 heartbeat frequency
lordgamez commented on PR #1450: URL: https://github.com/apache/nifi-minifi-cpp/pull/1450#issuecomment-1308936572 Not all of these are relevant to C2 usage, but I think we should update occurrences in these files as well:
- C2.md
- encrypt-config/tests/resources/minifi.properties
- encrypt-config/tests/resources/with-additional-sensitive-props.minifi.properties
- libminifi/test/resources/encrypted.minifi.properties
[jira] [Updated] (MINIFICPP-1934) Implement PutTCP processor
[ https://issues.apache.org/jira/browse/MINIFICPP-1934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marton Szasz updated MINIFICPP-1934: Resolution: Fixed Status: Resolved (was: Patch Available) > Implement PutTCP processor > -- > > Key: MINIFICPP-1934 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1934 > Project: Apache NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Marton Szasz >Assignee: Martin Zink >Priority: Major > Fix For: 0.13.0 > > Time Spent: 6h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi-minifi-cpp] szaszm opened a new pull request, #1450: MINIFICPP-1981 Decrease default C2 heartbeat frequency
szaszm opened a new pull request, #1450: URL: https://github.com/apache/nifi-minifi-cpp/pull/1450 The default 250 milliseconds heartbeat interval was unnecessarily frequent, and places a large burden on the server. This changes it to 30 sec, which is frequent enough in most use cases --- Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFICPP- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically main)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
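In configuration terms the change amounts to a one-line edit. The property name below is taken from the MiNiFi C++ `minifi.properties` configuration; the exact value syntax used by the PR is an assumption:

```properties
# Hypothetical excerpt from conf/minifi.properties illustrating the change.
# Previous default: a heartbeat every 250 milliseconds
# nifi.c2.agent.heartbeat.period=250
# New default: a heartbeat every 30 seconds
nifi.c2.agent.heartbeat.period=30 sec
```

At 250 ms, each agent sends 14,400 heartbeats per hour; at 30 s that drops to 120, which explains the "large burden on the server" motivation.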
[jira] [Commented] (NIFI-5901) Write JSON record in database
[ https://issues.apache.org/jira/browse/NIFI-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17631131#comment-17631131 ] Luigi De Giovanni commented on NIFI-5901: - It would be great to know at least a workaround for this case, if this feature is not being implemented. JOLT transformations don't seem to work, handling the JSON as string, as all the field names lose the double quotes. > Write JSON record in database > - > > Key: NIFI-5901 > URL: https://issues.apache.org/jira/browse/NIFI-5901 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.8.0 >Reporter: Flo Rance >Assignee: Mike Thomsen >Priority: Minor > Time Spent: 3.5h > Remaining Estimate: 0h > > It would be good to be able to store a whole json record in databases that > implement it (e.g. postgresql). This would require to define the field in the > shema as json/jsonb and then let PutDatabaseRecord inserts the json value in > the json/jsonb field. > At the moment, it's possible to store a json/jsonb through Postgresql JDBC > using the Java sql type 'OTHER': > Object data = "\{...}"; // the JSON document > PreparedStatement.setObject(1, data, java.sql.Types.OTHER); -- This message was sent by Atlassian Jira (v8.20.10#820010)
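The fragment in the issue can be rounded out into a complete helper; this is a sketch only, the table and column names are invented for illustration, and a live PostgreSQL connection with a `jsonb` column is assumed:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class JsonbInsert {
    // Assumed schema: CREATE TABLE events (payload jsonb)
    static void insertJson(Connection conn, String json) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO events (payload) VALUES (?)")) {
            // Passing java.sql.Types.OTHER lets the PostgreSQL JDBC driver
            // hand the string to the server for casting to the column's
            // json/jsonb type instead of binding it as varchar
            ps.setObject(1, json, Types.OTHER);
            ps.executeUpdate();
        }
    }
}
```

This is the workaround the issue describes at the JDBC level; the feature request is for PutDatabaseRecord to do the equivalent automatically when the schema marks a field as JSON.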
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1431: MINIFICPP-1937 - Dynamically reopen rocksdb on column config change
szaszm closed pull request #1431: MINIFICPP-1937 - Dynamically reopen rocksdb on column config change URL: https://github.com/apache/nifi-minifi-cpp/pull/1431
[jira] [Created] (MINIFICPP-1981) Decrease C2 heartbeat frequency
Marton Szasz created MINIFICPP-1981: --- Summary: Decrease C2 heartbeat frequency Key: MINIFICPP-1981 URL: https://issues.apache.org/jira/browse/MINIFICPP-1981 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Marton Szasz Assignee: Marton Szasz The default 250 milliseconds heartbeat interval is unnecessarily frequent, and places a large burden on the server.
[jira] [Updated] (NIFI-10787) Cannot commit flows to nifi registry after updating our nifi release to 1.18.0
[ https://issues.apache.org/jira/browse/NIFI-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahsan updated NIFI-10787: - Summary: Cannot commit flows to nifi registry after updating our nifi release to 1.18.0 (was: Cannot commit flows to nifi registry) > Cannot commit flows to nifi registry after updating our nifi release to 1.18.0 > -- > > Key: NIFI-10787 > URL: https://issues.apache.org/jira/browse/NIFI-10787 > Project: Apache NiFi > Issue Type: Bug > Components: Flow Versioning >Affects Versions: 1.18.0 >Reporter: Ahsan >Priority: Major > Attachments: index.png, stacktrace_nifi.txt, > stacktrace_nifi_registry.txt > > > Hi, > > So we recently updated to Nifi 1.18.0 and registry to 1.18.0. > Some portions of our flows were for no reason not "Commitable" any more. > Attached are the stacktraces from nifi and nifi-registry, when we click the > commit local changes button in nifi. > > Thinking this is a problem on our end, we debugged the issue and found out > the following: > The method in > "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" > below is where things trip and we get a NPE. > {code:java} > private String getRegistryUrl(final FlowRegistryClientNode registry) { > return > registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") > ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; > } {code} > If you note the call "registry.getPropertyDescriptor("URL")" with the > hard-coded string "URL", this is failing although the property is there BUT > with the name in small case "url". 
> I say this is because if we look at the class > {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is > described as following: > {code:java} > public final static PropertyDescriptor PROPERTY_URL = new > PropertyDescriptor.Builder() > .name("url") > .displayName("URL") > .description("URL of the NiFi Registry") > .addValidator(StandardValidators.URL_VALIDATOR) > .required(true) > .build();{code} > And if you note the property name is described with small case "url". Hence > PropertyDescriptor which bases its hash on the "name" property fails when we > search with uppercase "URL". > {code:java} > // hash def of > nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java > @Override > public int hashCode() { > return 287 + this.name.hashCode() * 47; > } {code} > Hope I have helped here. Can someone fix this issue. We cannot commit in our > registry currently because of the NPE. > > Just in case the debug stacktrace is important showing the src > PropertyDescription being used to search for in the map, I attach it here: > > !index.png! > > Regards > > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
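The lookup failure the reporter describes can be illustrated with a plain HashMap standing in for the property map; this is a sketch of the mechanism only, not the NiFi PropertyDescriptor API:

```java
import java.util.HashMap;
import java.util.Map;

public class CaseSensitiveLookup {
    public static void main(String[] args) {
        // Property values keyed by the descriptor's name, as set via .name("url")
        Map<String, String> properties = new HashMap<>();
        properties.put("url", "https://registry.example.com");

        // Looking up by the display name "URL" uses a different key, so the
        // hash-based lookup misses and returns null, which the mapper then
        // dereferences and throws the reported NPE
        System.out.println(properties.get("URL")); // -> null
        System.out.println(properties.get("url")); // -> https://registry.example.com
    }
}
```

Since PropertyDescriptor's hashCode and equals are based on its `name` field, the hard-coded `"URL"` in NiFiRegistryFlowMapper can never match a descriptor named `"url"`, exactly as the HashMap sketch shows.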
[jira] [Updated] (NIFI-10787) Cannot commit flows to nifi registry
[ https://issues.apache.org/jira/browse/NIFI-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahsan updated NIFI-10787: - Description: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason not "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". {code:java} // hash def of nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java @Override public int hashCode() { return 287 + this.name.hashCode() * 47; } {code} Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Just in case the debug stacktrace is important showing the src PropertyDescription being used to search for in the map, I attach it here: !index.png! Regards was: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". {code:java} // hash def of nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java @Override public int hashCode() { return 287 + this.name.hashCode() * 47; } {code} Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Just in case the debug stacktrace is important showing the src PropertyDescription being used to search for in the map, I attach it here: !index.png! Regards > Cannot commit flows to nifi registry > > > Key: NIFI-10787 > URL: https://issues.apache.org/jira/browse/NIFI-10787 > Project: Apache NiFi > Issue Type: Bug > Components: Flow Versioning >Affects Versions: 1.18.0 >Reporter: Ahsan >Priority: Major > Attachments: index.png, stacktrace_nifi.txt, > stacktrace_nifi_registry.txt > > > Hi, > > So we recently updated to Nifi 1.18.0 and registry to 1.18.0. > Some portions of our flows were for no reason not "Commitable" any more. > At
[jira] [Updated] (NIFI-10787) Cannot commit flows to nifi registry
[ https://issues.apache.org/jira/browse/NIFI-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahsan updated NIFI-10787: - Description: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". {code:java} // hash def of nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java @Override public int hashCode() { return 287 + this.name.hashCode() * 47; } {code} Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Just in case the debug stacktrace is important showing the src PropertyDescription being used to search for in the map, I attach it here: !index.png! Regards was: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". {code:java} // hash def of nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java @Override public int hashCode() { return 287 + this.name.hashCode() * 47; } {code} Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Regards > Cannot commit flows to nifi registry > > > Key: NIFI-10787 > URL: https://issues.apache.org/jira/browse/NIFI-10787 > Project: Apache NiFi > Issue Type: Bug > Components: Flow Versioning >Affects Versions: 1.18.0 >Reporter: Ahsan >Priority: Major > Attachments: index.png, stacktrace_nifi.txt, > stacktrace_nifi_registry.txt > > > Hi, > > So we recently updated to Nifi 1.18.0 and registry to 1.18.0. > Some portions of our flows were for no reason "Commitable" any more. Attached > are the stacktraces from nifi and nifi-registry, when we click the commit > local changes button in nifi. > > Thinking this is a problem on our end, we debug
[jira] [Updated] (NIFI-10787) Cannot commit flows to nifi registry
[ https://issues.apache.org/jira/browse/NIFI-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahsan updated NIFI-10787: - Attachment: index.png > Cannot commit flows to nifi registry > > > Key: NIFI-10787 > URL: https://issues.apache.org/jira/browse/NIFI-10787 > Project: Apache NiFi > Issue Type: Bug > Components: Flow Versioning >Affects Versions: 1.18.0 >Reporter: Ahsan >Priority: Major > Attachments: index.png, stacktrace_nifi.txt, > stacktrace_nifi_registry.txt > > > Hi, > > So we recently updated to Nifi 1.18.0 and registry to 1.18.0. > Some portions of our flows were for no reason "Commitable" any more. Attached > are the stacktraces from nifi and nifi-registry, when we click the commit > local changes button in nifi. > > Thinking this is a problem on our end, we debugged the issue and found out > the following: > The method in > "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" > below is where things trip and we get a NPE. > {code:java} > private String getRegistryUrl(final FlowRegistryClientNode registry) { > return > registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") > ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; > } {code} > If you note the call "registry.getPropertyDescriptor("URL")" with the > hard-coded string "URL", this is failing although the property is there BUT > with the name in small case "url". > I say this is because if we look at the class > {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is > described as following: > {code:java} > public final static PropertyDescriptor PROPERTY_URL = new > PropertyDescriptor.Builder() > .name("url") > .displayName("URL") > .description("URL of the NiFi Registry") > .addValidator(StandardValidators.URL_VALIDATOR) > .required(true) > .build();{code} > And if you note the property name is described with small case "url". 
Hence > PropertyDescriptor which bases its hash on the "name" property fails when we > search with uppercase "URL". > {code:java} > // hash def of > nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java > @Override > public int hashCode() { > return 287 + this.name.hashCode() * 47; > } {code} > Hope I have helped here. Can someone fix this issue. We cannot commit in our > registry currently because of the NPE. > > Regards > > > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (NIFI-10787) Cannot commit flows to nifi registry
[ https://issues.apache.org/jira/browse/NIFI-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ahsan updated NIFI-10787: - Description: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". {code:java} // hash def of nifi-api/src/main/java/org/apache/nifi/components/PropertyDescriptor.java @Override public int hashCode() { return 287 + this.name.hashCode() * 47; } {code} Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Regards was: Hi, So we recently updated to Nifi 1.18.0 and registry to 1.18.0. Some portions of our flows were for no reason "Commitable" any more. Attached are the stacktraces from nifi and nifi-registry, when we click the commit local changes button in nifi. Thinking this is a problem on our end, we debugged the issue and found out the following: The method in "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" below is where things trip and we get a NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} If you note the call "registry.getPropertyDescriptor("URL")" with the hard-coded string "URL", this is failing although the property is there BUT with the name in small case "url". I say this is because if we look at the class {color:#6a8759}"NifiRegistryFlowRegistryClient", {color}the url property is described as following: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} And if you note the property name is described with small case "url". Hence PropertyDescriptor which bases its hash on the "name" property fails when we search with uppercase "URL". Hope I have helped here. Can someone fix this issue. We cannot commit in our registry currently because of the NPE. 
Regards > Cannot commit flows to nifi registry > > > Key: NIFI-10787 > URL: https://issues.apache.org/jira/browse/NIFI-10787 > Project: Apache NiFi > Issue Type: Bug > Components: Flow Versioning >Affects Versions: 1.18.0 >Reporter: Ahsan >Priority: Major > Attachments: stacktrace_nifi.txt, stacktrace_nifi_registry.txt > > > Hi, > > So we recently updated to Nifi 1.18.0 and registry to 1.18.0. > Some portions of our flows were for no reason "Commitable" any more. Attached > are the stacktraces from nifi and nifi-registry, when we click the commit > local changes button in nifi. > > Thinking this is a problem on our end, we debugged the issue and found out > the following: > The method in > "src/main/java/org/apache/nifi/registry/flow/mapping/NiFiRegistryFlowMapper.java" > below is where things trip and we get a NPE. > {code:java} > private String getRegistryUrl(final FlowRegistryClientNode registry) { > return > registry.getComponentType().equals("org.apache.nifi.registry.fl
[GitHub] [nifi-minifi-cpp] adam-markovics commented on pull request #1449: MINIFICPP-1978 - Flush MergeContent bundles when its size would grow beyond max group size
adam-markovics commented on PR #1449: URL: https://github.com/apache/nifi-minifi-cpp/pull/1449#issuecomment-1308818056 Could you please update the BinFiles documentation in `PROCESSORS.md`? To me it is not clear what this Processor is doing, and it could also be confusing for end users. Currently the description is: "Bins flow files into buckets based on the number of entries or size of entries". I don't understand the following verbs and nouns in this context: "bins", "buckets", "entries". Please add some more explanation. The property descriptions should also be updated; for example, the format is missing for Max Bin Age. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-10787) Cannot commit flows to nifi registry
Ahsan created NIFI-10787: Summary: Cannot commit flows to nifi registry Key: NIFI-10787 URL: https://issues.apache.org/jira/browse/NIFI-10787 Project: Apache NiFi Issue Type: Bug Components: Flow Versioning Affects Versions: 1.18.0 Reporter: Ahsan Attachments: stacktrace_nifi.txt, stacktrace_nifi_registry.txt Hi, So we recently updated to NiFi 1.18.0 and Registry to 1.18.0. Some portions of our flows were, for no apparent reason, no longer "Commitable". Attached are the stacktraces from nifi and nifi-registry, produced when we click the commit local changes button in nifi. Thinking this was a problem on our end, we debugged the issue and found the following: the method below is where things trip and we get an NPE. {code:java} private String getRegistryUrl(final FlowRegistryClientNode registry) { return registry.getComponentType().equals("org.apache.nifi.registry.flow.NifiRegistryFlowRegistryClient") ? registry.getRawPropertyValue(registry.getPropertyDescriptor("URL")) : ""; } {code} Note the call registry.getPropertyDescriptor("URL") with the hard-coded string "URL": this fails even though the property is there, BUT with the lowercase name "url". We can see this in the class "NifiRegistryFlowRegistryClient", where the url property is defined as follows: {code:java} public final static PropertyDescriptor PROPERTY_URL = new PropertyDescriptor.Builder() .name("url") .displayName("URL") .description("URL of the NiFi Registry") .addValidator(StandardValidators.URL_VALIDATOR) .required(true) .build();{code} Note that the property name is the lowercase "url". Hence PropertyDescriptor, which bases its hash on the "name" field, fails to match when we search with the uppercase "URL". Hope I have helped here. Can someone fix this issue? We cannot commit to our registry currently because of the NPE. Regards -- This message was sent by Atlassian Jira (v8.20.10#820010)
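The failure mode reported above — identity keyed on the machine-readable name, then a lookup by the display name — can be demonstrated with a minimal, self-contained sketch. `PropertySketch` and `DescriptorLookupDemo` below are hypothetical stand-ins, not NiFi's actual `PropertyDescriptor`; only the quoted hashCode scheme is taken from the report:

```java
import java.util.HashMap;
import java.util.Map;

public class DescriptorLookupDemo {
    // Minimal stand-in for NiFi's PropertyDescriptor: identity is based
    // solely on the machine-readable "name", never on the display name.
    static final class PropertySketch {
        final String name;        // e.g. "url"
        final String displayName; // e.g. "URL"

        PropertySketch(String name, String displayName) {
            this.name = name;
            this.displayName = displayName;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof PropertySketch && ((PropertySketch) o).name.equals(name);
        }

        @Override
        public int hashCode() {
            return 287 + name.hashCode() * 47; // the scheme quoted in the report
        }
    }

    public static void main(String[] args) {
        PropertySketch url = new PropertySketch("url", "URL");
        Map<String, PropertySketch> byName = new HashMap<>();
        byName.put(url.name, url);

        System.out.println(byName.get("url") == url); // true: lookup by name hits
        System.out.println(byName.get("URL"));        // null: lookup by display name misses
    }
}
```

A lookup with the real name `"url"` succeeds, while a lookup with the display name `"URL"` returns null — which is exactly what turns into the NPE downstream.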
[GitHub] [nifi-minifi-cpp] adam-markovics commented on a diff in pull request #1430: MINIFICPP-1922 Implement ListenUDP processor
adam-markovics commented on code in PR #1430: URL: https://github.com/apache/nifi-minifi-cpp/pull/1430#discussion_r1017910318 ## extensions/standard-processors/processors/ListenUDP.h: ## @@ -0,0 +1,62 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#pragma once + +#include +#include + +#include "NetworkListenerProcessor.h" +#include "core/logging/LoggerConfiguration.h" +#include "utils/Enum.h" Review Comment: I don't see this being used. ## extensions/standard-processors/processors/ListenUDP.cpp: ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+ * See the License for the specific language governing permissions and + * limitations under the License. + */ +#include "ListenUDP.h" + +#include "core/Resource.h" +#include "core/PropertyBuilder.h" +#include "controllers/SSLContextService.h" +#include "utils/ProcessorConfigUtils.h" + +namespace org::apache::nifi::minifi::processors { + +const core::Property ListenUDP::Port( +core::PropertyBuilder::createProperty("Listening Port") +->withDescription("The port to listen on for communication.") +->withType(core::StandardValidators::get().LISTEN_PORT_VALIDATOR) +->isRequired(true) +->build()); + +const core::Property ListenUDP::MaxQueueSize( +core::PropertyBuilder::createProperty("Max Size of Message Queue") +->withDescription("Maximum number of messages allowed to be buffered before processing them when the processor is triggered. " + "If the buffer is full, the message is ignored. If set to zero the buffer is unlimited.") +->withDefaultValue(1) +->isRequired(true) +->build()); + +const core::Property ListenUDP::MaxBatchSize( +core::PropertyBuilder::createProperty("Max Batch Size") +->withDescription("The maximum number of messages to process at a time.") +->withDefaultValue(500) +->isRequired(true) Review Comment: Same as above. ## extensions/standard-processors/processors/ListenUDP.cpp: ## @@ -0,0 +1,83 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. 
You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +#include "ListenUDP.h" + +#include "core/Resource.h" +#include "core/PropertyBuilder.h" +#include "controllers/SSLContextService.h" +#include "utils/ProcessorConfigUtils.h" + +namespace org::apache::nifi::minifi::processors { + +const core::Property ListenUDP::Port( +core::PropertyBuilder::createProperty("Listening Port") +->withDescription("The port to listen on for communication.") +->withType(core::StandardValidators::get().LISTEN_PORT_VALIDATOR) +->isRequired(true) +->build()); + +const core::Property ListenUDP::MaxQueueSize( +core::PropertyBuilder::createProperty("Max Size of Message Queue") +
[jira] [Resolved] (MINIFICPP-1927) ExecuteProcess does not support escaping white spaces in arguments
[ https://issues.apache.org/jira/browse/MINIFICPP-1927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1927. -- Fix Version/s: 0.13.0 Resolution: Fixed > ExecuteProcess does not support escaping white spaces in arguments > -- > > Key: MINIFICPP-1927 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1927 > Project: Apache NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andre Araujo >Assignee: Gábor Gyimesi >Priority: Major > Fix For: 0.13.0 > > Time Spent: 3h 50m > Remaining Estimate: 0h > > The description of the {{{}ExecuteProcess{}}}'s {{Command Arguments}} > property says: > {quote}The arguments to supply to the executable delimited by white space. > White > space can be escaped by enclosing it in > double-quotes. > {quote} > However, this is not what happens. The arguments are > [split|https://github.com/apache/nifi-minifi-cpp/blob/main/extensions/standard-processors/processors/ExecuteProcess.cpp#L111] > into an array using whitespace as a separator and there's no way to escape > whitespaces in an argument.
[jira] [Resolved] (MINIFICPP-1966) Add AgentStatus to Prometheus metrics
[ https://issues.apache.org/jira/browse/MINIFICPP-1966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gábor Gyimesi resolved MINIFICPP-1966. -- Fix Version/s: 0.13.0 Resolution: Fixed > Add AgentStatus to Prometheus metrics > - > > Key: MINIFICPP-1966 > URL: https://issues.apache.org/jira/browse/MINIFICPP-1966 > Project: Apache NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Gábor Gyimesi >Assignee: Gábor Gyimesi >Priority: Minor > Fix For: 0.13.0 > > Time Spent: 50m > Remaining Estimate: 0h > > AgentStatus, containing agent-specific metrics such as the agent's cpu and memory > utilization, is missing from the Prometheus metrics and should be added.
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1438: MINIFICPP-1966 Add AgentStatus to Prometheus metrics
szaszm closed pull request #1438: MINIFICPP-1966 Add AgentStatus to Prometheus metrics URL: https://github.com/apache/nifi-minifi-cpp/pull/1438
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1414: MINIFICPP-1927 Fix ExecuteProcess command argument issue and refactor
szaszm closed pull request #1414: MINIFICPP-1927 Fix ExecuteProcess command argument issue and refactor URL: https://github.com/apache/nifi-minifi-cpp/pull/1414
[GitHub] [nifi-minifi-cpp] szaszm closed pull request #1419: MINIFICPP-1934 PutTCP processor
szaszm closed pull request #1419: MINIFICPP-1934 PutTCP processor URL: https://github.com/apache/nifi-minifi-cpp/pull/1419
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1449: MINIFICPP-1978 - Flush MergeContent bundles when its size would grow beyond max group size
adamdebreceni commented on code in PR #1449: URL: https://github.com/apache/nifi-minifi-cpp/pull/1449#discussion_r1017906452 ## extensions/libarchive/BinFiles.h: ## @@ -87,8 +87,10 @@ class Bin { } } -if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() + 1) > maxEntries_) +if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() + 1) > maxEntries_) { + closed_ = true; Review Comment: Incoming flow files that are larger than the max group size are immediately assigned their own bin and flushed by themselves, so they won't be a problem. Flow files that are large, but not max-size large, could cause the flush of a single bin, since we only try to insert into a single bin (the last in the group's queue) before creating a new bin for the flow file.
[GitHub] [nifi-minifi-cpp] szaszm commented on a diff in pull request #1449: MINIFICPP-1978 - Flush MergeContent bundles when its size would grow beyond max group size
szaszm commented on code in PR #1449: URL: https://github.com/apache/nifi-minifi-cpp/pull/1449#discussion_r1017846663 ## extensions/libarchive/BinFiles.h: ## @@ -87,8 +87,10 @@ class Bin { } } -if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() + 1) > maxEntries_) +if ((queued_data_size_ + flow->getSize()) > maxSize_ || (queue_.size() + 1) > maxEntries_) { + closed_ = true; Review Comment: Do you think it would make sense to penalize flow files that are larger than the max group size? Currently if there is such a flow file, it will keep flushing out all bins regardless of how "full" they are.
[GitHub] [nifi] dam4rus commented on a diff in pull request #6584: NIFI-10370 Create record oriented PutSnowflake processor
dam4rus commented on code in PR #6584: URL: https://github.com/apache/nifi/pull/6584#discussion_r1017876293 ## nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-processors/pom.xml: ## @@ -0,0 +1,118 @@ + + + +http://maven.apache.org/POM/4.0.0"; +xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"; +xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd";> + +nifi-snowflake-bundle +org.apache.nifi +1.19.0-SNAPSHOT + +4.0.0 + +nifi-snowflake-processors +jar + + + +org.apache.nifi +nifi-api +1.19.0-SNAPSHOT +provided + + +org.apache.nifi +nifi-utils +1.19.0-SNAPSHOT +provided + + +org.apache.nifi +nifi-snowflake-services +1.19.0-SNAPSHOT +provided + + +org.apache.nifi +nifi-snowflake-services-api +1.19.0-SNAPSHOT +provided + + +net.snowflake +snowflake-ingest-sdk +1.0.2-beta.3 +provided + + +commons-io +commons-io +provided + + +com.squareup.okhttp3 +mockwebserver +test + + +org.apache.nifi +nifi-mock +test + + +org.apache.nifi +nifi-kerberos-credentials-service-api +test + + +org.apache.nifi +nifi-kerberos-user-service-api +test + + +org.slf4j +jcl-over-slf4j +test + + +org.apache.nifi +nifi-key-service-api +1.19.0-SNAPSHOT +test + + +org.apache.nifi +nifi-key-service +1.19.0-SNAPSHOT +test + + +org.apache.nifi +nifi-dbcp-service-api +1.19.0-SNAPSHOT +test + + +org.apache.nifi +nifi-dbcp-base +1.19.0-SNAPSHOT +test + Review Comment: You are right. I probably forgot to run them after pulling in the proxy settings.
[jira] [Created] (MINIFICPP-1980) Enable multithreading in PutUDP
Martin Zink created MINIFICPP-1980: -- Summary: Enable multithreading in PutUDP Key: MINIFICPP-1980 URL: https://issues.apache.org/jira/browse/MINIFICPP-1980 Project: Apache NiFi MiNiFi C++ Issue Type: Improvement Reporter: Martin Zink Assignee: Martin Zink There is nothing preventing PutUDP from being multi-threaded.
[GitHub] [nifi-minifi-cpp] adamdebreceni opened a new pull request, #1449: MINIFICPP-1978 - Flush MergeContent bundles when its size would grow beyond max group size
adamdebreceni opened a new pull request, #1449: URL: https://github.com/apache/nifi-minifi-cpp/pull/1449 Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically main)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI results for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] dam4rus commented on a diff in pull request #6584: NIFI-10370 Create record oriented PutSnowflake processor
dam4rus commented on code in PR #6584: URL: https://github.com/apache/nifi/pull/6584#discussion_r1017813657 ## nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-services-api/src/main/java/org/apache/nifi/processors/snowflake/SnowflakeConnectionWrapper.java: ## @@ -0,0 +1,40 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.nifi.processors.snowflake; + +import java.sql.Connection; +import java.sql.SQLException; +import net.snowflake.client.jdbc.SnowflakeConnection; + +public class SnowflakeConnectionWrapper implements AutoCloseable { Review Comment: The issue with returning a `Connection` is that we need to unwrap it into a `SnowflakeConnection` interface in the processor's `onTrigger` to enable uploading `Stream`s. But this hasn't worked for me, because the `Connection` instance is of a class in the service-api-nar, not the processors-nar. This caused an exception when calling `Connection.unwrap`. I tried annotating the processor with `@RequiresInstanceClassLoading` as well, but that didn't solve the issue. Maybe there's a solution I don't know about? As for the `AutoCloseable`: we could return a `SnowflakeConnection` instance, but `SnowflakeConnection` doesn't implement `AutoCloseable`. So there's no way to close the connection via a `SnowflakeConnection` instance. This is a workaround to enable closing the connection while also providing a way to unwrap the connection.
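The wrapper pattern under discussion can be sketched with stand-in types. `VendorConnection`, `VendorConnectionWrapper`, and the proxy-backed `fakeConnection` below are all hypothetical; the real `SnowflakeConnection` lives in the Snowflake JDBC driver and, as noted in the comment, does not implement `AutoCloseable`:

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;

public class WrapperDemo {
    // Hypothetical vendor interface standing in for SnowflakeConnection,
    // which does not implement AutoCloseable.
    interface VendorConnection {
        String vendorName();
    }

    // Hypothetical wrapper: owns the JDBC Connection so it can be closed,
    // while still exposing the unwrapped vendor interface on demand.
    static class VendorConnectionWrapper implements AutoCloseable {
        private final Connection connection;

        VendorConnectionWrapper(Connection connection) {
            this.connection = connection;
        }

        VendorConnection unwrapVendor() throws SQLException {
            // Connection extends java.sql.Wrapper; in NiFi this is where the
            // NAR class-loader mismatch described above surfaces.
            return connection.unwrap(VendorConnection.class);
        }

        @Override
        public void close() throws SQLException {
            connection.close();
        }
    }

    // Dynamic-proxy Connection standing in for a real driver connection.
    static Connection fakeConnection(VendorConnection vendor) {
        return (Connection) Proxy.newProxyInstance(
            Connection.class.getClassLoader(),
            new Class<?>[] {Connection.class},
            (proxy, method, args) -> {
                switch (method.getName()) {
                    case "unwrap": return vendor;
                    case "isWrapperFor": return true;
                    case "close": return null; // no-op for the fake
                    default: throw new UnsupportedOperationException(method.getName());
                }
            });
    }

    public static void main(String[] args) throws Exception {
        VendorConnection vendor = () -> "fake-vendor";
        try (VendorConnectionWrapper wrapper = new VendorConnectionWrapper(fakeConnection(vendor))) {
            System.out.println(wrapper.unwrapVendor().vendorName());
        }
    }
}
```

The try-with-resources works only because the wrapper, not the vendor interface, implements `AutoCloseable` — which is the design choice the comment defends.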
[GitHub] [nifi-minifi-cpp] adamdebreceni commented on a diff in pull request #1431: MINIFICPP-1937 - Dynamically reopen rocksdb on column config change
adamdebreceni commented on code in PR #1431:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1431#discussion_r1017778091

## extensions/rocksdb-repos/database/RocksDbInstance.cpp:
@@ -18,56 +18,138 @@
 #include "RocksDbInstance.h"
 #include
+#include
 #include "logging/LoggerConfiguration.h"
 #include "rocksdb/utilities/options_util.h"
 #include "OpenRocksDb.h"
 #include "ColumnHandle.h"
+#include "DbHandle.h"

-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace internal {
+namespace org::apache::nifi::minifi::internal {

 std::shared_ptr RocksDbInstance::logger_ = core::logging::LoggerFactory::getLogger();

-RocksDbInstance::RocksDbInstance(const std::string& path, RocksDbMode mode) : db_name_(path), mode_(mode) {}
+RocksDbInstance::RocksDbInstance(std::string path, RocksDbMode mode) : db_name_(std::move(path)), mode_(mode) {}

 void RocksDbInstance::invalidate() {
   std::lock_guard db_guard{mtx_};
+  invalidate(db_guard);
+}
+
+void RocksDbInstance::invalidate(const std::lock_guard&) {
   // discard our own instance
   columns_.clear();
   impl_.reset();
 }

-std::optional RocksDbInstance::open(const std::string& column, const DBOptionsPatch& db_options_patch, const ColumnFamilyOptionsPatch& cf_options_patch) {
+void RocksDbInstance::registerColumnConfig(const std::string& column, const DBOptionsPatch& db_options_patch, const ColumnFamilyOptionsPatch& cf_options_patch) {
+  std::lock_guard db_guard{mtx_};
+  logger_->log_trace("Registering column '%s' in database '%s'", column, db_name_);
+  auto it = column_configs_.find(column);
+  if (it != column_configs_.end()) {
+    throw std::runtime_error("Configuration is already registered for column '" + column + "'");
+  }
+  column_configs_[column] = {.dbo_patch = db_options_patch, .cfo_patch = cf_options_patch};
+
+  bool need_reopen = [&] {
+    if (!impl_) {
+      logger_->log_trace("Database is already scheduled to be reopened");
+      return false;
+    }
+    {
+      rocksdb::DBOptions db_opts_copy = db_options_;
+      Writable db_opts_writer(db_opts_copy);
+      if (db_options_patch) {
+        db_options_patch(db_opts_writer);
+        if (db_opts_writer.isModified()) {
+          logger_->log_trace("Requested a difference DBOptions than the one that was used to open the database");
+          return true;
+        }
+      }
+    }
+    if (!columns_.contains(column)) {
+      logger_->log_trace("Previously unspecified column, will dynamically create the column");
+      return false;
+    }
+    if (!cf_options_patch) {
+      logger_->log_trace("No explicit ColumnFamilyOptions was requested");
+      return false;
+    }
+    logger_->log_trace("Could not determine if we definitely need to reopen or we are definitely safe, requesting reopen");
+    return true;
+  }();
+  if (need_reopen) {
+    // reset impl_, for the database to be reopened on the next RocksDbInstance::open call
+    invalidate(db_guard);
+  }
+}
+
+void RocksDbInstance::unregisterColumnConfig(const std::string& column) {
+  std::lock_guard db_guard{mtx_};
+  auto it = column_configs_.find(column);

Review Comment (thread on `unregisterColumnConfig`, at `auto it = column_configs_.find(column);`):
good idea, changed

Review Comment (thread on `registerColumnConfig`, at `auto it = column_configs_.find(column);`):
good idea, changed
[GitHub] [nifi-minifi-cpp] adam-markovics commented on a diff in pull request #1431: MINIFICPP-1937 - Dynamically reopen rocksdb on column config change
adam-markovics commented on code in PR #1431:
URL: https://github.com/apache/nifi-minifi-cpp/pull/1431#discussion_r1017715807

## extensions/rocksdb-repos/database/RocksDbInstance.cpp:
@@ -18,56 +18,138 @@
 #include "RocksDbInstance.h"
 #include
+#include
 #include "logging/LoggerConfiguration.h"
 #include "rocksdb/utilities/options_util.h"
 #include "OpenRocksDb.h"
 #include "ColumnHandle.h"
+#include "DbHandle.h"

-namespace org {
-namespace apache {
-namespace nifi {
-namespace minifi {
-namespace internal {
+namespace org::apache::nifi::minifi::internal {

 std::shared_ptr RocksDbInstance::logger_ = core::logging::LoggerFactory::getLogger();

-RocksDbInstance::RocksDbInstance(const std::string& path, RocksDbMode mode) : db_name_(path), mode_(mode) {}
+RocksDbInstance::RocksDbInstance(std::string path, RocksDbMode mode) : db_name_(std::move(path)), mode_(mode) {}

 void RocksDbInstance::invalidate() {
   std::lock_guard db_guard{mtx_};
+  invalidate(db_guard);
+}
+
+void RocksDbInstance::invalidate(const std::lock_guard&) {
   // discard our own instance
   columns_.clear();
   impl_.reset();
 }

-std::optional RocksDbInstance::open(const std::string& column, const DBOptionsPatch& db_options_patch, const ColumnFamilyOptionsPatch& cf_options_patch) {
+void RocksDbInstance::registerColumnConfig(const std::string& column, const DBOptionsPatch& db_options_patch, const ColumnFamilyOptionsPatch& cf_options_patch) {
+  std::lock_guard db_guard{mtx_};
+  logger_->log_trace("Registering column '%s' in database '%s'", column, db_name_);
+  auto it = column_configs_.find(column);
+  if (it != column_configs_.end()) {
+    throw std::runtime_error("Configuration is already registered for column '" + column + "'");
+  }
+  column_configs_[column] = {.dbo_patch = db_options_patch, .cfo_patch = cf_options_patch};
+
+  bool need_reopen = [&] {
+    if (!impl_) {
+      logger_->log_trace("Database is already scheduled to be reopened");
+      return false;
+    }
+    {
+      rocksdb::DBOptions db_opts_copy = db_options_;
+      Writable db_opts_writer(db_opts_copy);
+      if (db_options_patch) {
+        db_options_patch(db_opts_writer);
+        if (db_opts_writer.isModified()) {
+          logger_->log_trace("Requested a difference DBOptions than the one that was used to open the database");
+          return true;
+        }
+      }
+    }
+    if (!columns_.contains(column)) {
+      logger_->log_trace("Previously unspecified column, will dynamically create the column");
+      return false;
+    }
+    if (!cf_options_patch) {
+      logger_->log_trace("No explicit ColumnFamilyOptions was requested");
+      return false;
+    }
+    logger_->log_trace("Could not determine if we definitely need to reopen or we are definitely safe, requesting reopen");
+    return true;
+  }();
+  if (need_reopen) {
+    // reset impl_, for the database to be reopened on the next RocksDbInstance::open call
+    invalidate(db_guard);
+  }
+}
+
+void RocksDbInstance::unregisterColumnConfig(const std::string& column) {
+  std::lock_guard db_guard{mtx_};
+  auto it = column_configs_.find(column);

Review Comment (thread on `registerColumnConfig`, at `auto it = column_configs_.find(column);`):
`contains()` would be simpler, or `insert()` and then checking whether the insertion happened.

Review Comment (thread on `unregisterColumnConfig`, at `auto it = column_configs_.find(column);`):
Instead of a lookup with `find()`, you could try erasing and then check the return value to see whether erasure happened. Like:
if (column_configs_.erase(column) == 0) { ... /* throw exception */ }
[jira] [Updated] (NIFI-10785) Allow publishing AMQP message with null header value
[ https://issues.apache.org/jira/browse/NIFI-10785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nandor Soma Abonyi updated NIFI-10785:
--------------------------------------
    Description: Since NIFI-10317, ConsumeAMQP is able to handle null header values, so it makes sense to support them in PublishAMQP as well.

> Allow publishing AMQP message with null header value
> ----------------------------------------------------
>
>                 Key: NIFI-10785
>                 URL: https://issues.apache.org/jira/browse/NIFI-10785
>             Project: Apache NiFi
>          Issue Type: Improvement
>          Components: Extensions
>            Reporter: Nandor Soma Abonyi
>            Assignee: Nandor Soma Abonyi
>            Priority: Minor
>              Labels: amqp
>
> Since NIFI-10317, ConsumeAMQP is able to handle null header values, so it makes sense to support them in PublishAMQP as well.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (NIFI-10786) CaptureChangeMySQL cannot capture DDL events
YangHai created NIFI-10786:
------------------------------
             Summary: CaptureChangeMySQL cannot capture DDL events
                 Key: NIFI-10786
                 URL: https://issues.apache.org/jira/browse/NIFI-10786
             Project: Apache NiFi
          Issue Type: Bug
         Environment: mysql: 5.7.39, NiFi: nifi-1.17.0, java: 8.0
            Reporter: YangHai
         Attachments: nifi.PNG

In CaptureChangeMySQL, DDL events cannot be captured even if Include DDL Events is set to true.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Created] (NIFI-10785) Allow publishing AMQP message with null header value
Nandor Soma Abonyi created NIFI-10785:
-----------------------------------------
             Summary: Allow publishing AMQP message with null header value
                 Key: NIFI-10785
                 URL: https://issues.apache.org/jira/browse/NIFI-10785
             Project: Apache NiFi
          Issue Type: Improvement
          Components: Extensions
            Reporter: Nandor Soma Abonyi
            Assignee: Nandor Soma Abonyi

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[GitHub] [nifi] Lehel44 commented on a diff in pull request #6584: NIFI-10370 Create record oriented PutSnowflake processor
Lehel44 commented on code in PR #6584:
URL: https://github.com/apache/nifi/pull/6584#discussion_r1017333554

## nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-processors/src/test/java/org/apache/nifi/processors/snowflake/SnowflakePipeIT.java:
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.nifi.processors.snowflake;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.Map;
+import java.util.Set;
+import java.util.UUID;
+import net.snowflake.ingest.utils.StagedFileWrapper;
+import org.apache.commons.io.FileUtils;
+import org.apache.nifi.flowfile.attributes.CoreAttributes;
+import org.apache.nifi.processors.snowflake.common.Attributes;
+import org.apache.nifi.util.TestRunner;
+import org.apache.nifi.util.TestRunners;
+import org.junit.jupiter.api.Test;
+
+public class SnowflakePipeIT implements SnowflakeConfigAware {
+
+    @Test
+    void shouldPutIntoInternalStage() throws Exception {
+        final PutSnowflakeInternalStage processor = new PutSnowflakeInternalStage();
+
+        final TestRunner runner = TestRunners.newTestRunner(processor);
+        final SnowflakeConnectionProviderService connectionProviderService = createConnectionProviderService(runner);
+
+        runner.setProperty(PutSnowflakeInternalStage.SNOWFLAKE_CONNECTION_PROVIDER, connectionProviderService.getIdentifier());
+        runner.setProperty(PutSnowflakeInternalStage.INTERNAL_STAGE_NAME, internalStageName);
+
+        final String uuid = UUID.randomUUID().toString();
+        final String fileName = filePath.getFileName().toString();
+
+        final Map attributes = new HashMap<>();
+        attributes.put(CoreAttributes.FILENAME.key(), fileName);
+        attributes.put(CoreAttributes.PATH.key(), uuid + "/");
+        runner.enqueue(filePath, attributes);
+
+        runner.run();
+
+        final Set checkedAttributes = new HashSet<>(Arrays.asList(Attributes.ATTRIBUTE_STAGED_FILE_PATH));
+        final Map expectedAttributesMap = new HashMap<>();
+        expectedAttributesMap.put(Attributes.ATTRIBUTE_STAGED_FILE_PATH, uuid + "/" + fileName);
+        final Set> expectedAttributes = new HashSet<>(Arrays.asList(expectedAttributesMap));

Review Comment:
```suggestion
final Set checkedAttributes = Collections.singleton(Attributes.ATTRIBUTE_STAGED_FILE_PATH);
final Map expectedAttributesMap = Collections.singletonMap(Attributes.ATTRIBUTE_STAGED_FILE_PATH, uuid + "/" + fileName);
final Set> expectedAttributes = Collections.singleton(expectedAttributesMap);
```

## nifi-nar-bundles/nifi-snowflake-bundle/nifi-snowflake-processors/pom.xml:
@@ -0,0 +1,118 @@
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <parent>
+        <artifactId>nifi-snowflake-bundle</artifactId>
+        <groupId>org.apache.nifi</groupId>
+        <version>1.19.0-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>nifi-snowflake-processors</artifactId>
+    <packaging>jar</packaging>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-api</artifactId>
+            <version>1.19.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-utils</artifactId>
+            <version>1.19.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-snowflake-services</artifactId>
+            <version>1.19.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-snowflake-services-api</artifactId>
+            <version>1.19.0-SNAPSHOT</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>net.snowflake</groupId>
+            <artifactId>snowflake-ingest-sdk</artifactId>
+            <version>1.0.2-beta.3</version>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>commons-io</groupId>
+            <artifactId>commons-io</artifactId>
+            <scope>provided</scope>
+        </dependency>
+        <dependency>
+            <groupId>com.squareup.okhttp3</groupId>
+            <artifactId>mockwebserver</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-mock</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi-kerberos-credentials-service-api</artifactId>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.nifi</groupId>
+            <artifactId>nifi
[jira] [Created] (MINIFICPP-1979) Use Coroutines with asio
Martin Zink created MINIFICPP-1979:
--------------------------------------
             Summary: Use Coroutines with asio
                 Key: MINIFICPP-1979
                 URL: https://issues.apache.org/jira/browse/MINIFICPP-1979
             Project: Apache NiFi MiNiFi C++
          Issue Type: Improvement
            Reporter: Martin Zink
            Assignee: Martin Zink

All of our compilers support coroutines (for gcc 10.2 (centos, docker) they must be enabled with the -fcoroutines flag). The async operations in asio use callback chains, which could be made much simpler with coroutines.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
[jira] [Resolved] (MINIFICPP-1848) Create a generic solution for processor metrics
[ https://issues.apache.org/jira/browse/MINIFICPP-1848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gábor Gyimesi resolved MINIFICPP-1848.
--------------------------------------
    Fix Version/s: 0.13.0
       Resolution: Fixed

> Create a generic solution for processor metrics
> -----------------------------------------------
>
>                 Key: MINIFICPP-1848
>                 URL: https://issues.apache.org/jira/browse/MINIFICPP-1848
>             Project: Apache NiFi MiNiFi C++
>          Issue Type: New Feature
>            Reporter: Gábor Gyimesi
>            Assignee: Gábor Gyimesi
>            Priority: Minor
>             Fix For: 0.13.0
>
>          Time Spent: 4h
>  Remaining Estimate: 0h
>
> There are component-level metrics (aside from the flow-level metrics) that can be retrieved and published through the C2 protocol, or through 3rd-party metrics publishers in the future. For example, the GetFile and GetTCP processors have their corresponding GetFileMetrics and GetTCPMetrics. These metrics collect the same generic processor metrics, such as trigger invocation count, which could be generalized. We should come up with a generic solution to collect non-function-specific processor metrics that could be retrieved from any processor.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)