[GitHub] [nifi] mcgilman commented on a diff in pull request #5671: NIFI-9514 NIFI-9515: Add UI support for Parameter Providers in Controller Services
mcgilman commented on code in PR #5671: URL: https://github.com/apache/nifi/pull/5671#discussion_r968676295

## nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-ui/src/main/webapp/js/nf/canvas/nf-parameter-provider.js:

@@ -0,0 +1,2778 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+/* global define, module, require, exports */
+
+(function (root, factory) {
+    if (typeof define === 'function' && define.amd) {
+        define(['jquery',
+                'Slick',
+                'nf.ErrorHandler',
+                'nf.Common',
+                'nf.CanvasUtils',
+                'nf.Dialog',
+                'nf.Storage',
+                'nf.Client',
+                'nf.ControllerService',
+                'nf.ControllerServices',
+                'nf.UniversalCapture',
+                'nf.CustomUi',
+                'nf.Verify',
+                'nf.Processor',
+                'nf.ProcessGroup',
+                'nf.ParameterContexts',
+                'nf.ProcessGroupConfiguration',
+                'lodash'],
+            function ($, Slick, nfErrorHandler, nfCommon, nfCanvasUtils, nfDialog, nfStorage, nfClient, nfControllerService, nfControllerServices, nfUniversalCapture, nfCustomUi, nfVerify, nfProcessor, nfProcessGroup, nfParameterContexts, nfProcessGroupConfiguration, _) {
+                return (nf.ParameterProvider = factory($, Slick, nfErrorHandler, nfCommon, nfCanvasUtils, nfDialog, nfStorage, nfClient, nfControllerService, nfControllerServices, nfUniversalCapture, nfCustomUi, nfVerify, nfProcessor, nfProcessGroup, nfParameterContexts, nfProcessGroupConfiguration, _));
+            });
+    } else if (typeof exports === 'object' && typeof module === 'object') {
+        module.exports = (nf.ParameterProvider =
+            factory(require('jquery'),
+                require('Slick'),
+                require('nf.ErrorHandler'),
+                require('nf.Common'),
+                require('nf.CanvasUtils'),
+                require('nf.Dialog'),
+                require('nf.Storage'),
+                require('nf.Client'),
+                require('nf.ControllerService'),
+                require('nf.ControllerServices'),
+                require('nf.UniversalCapture'),
+                require('nf.CustomUi'),
+                require('nf.Verify'),
+                require('nf.Processor'),
+                require('nf.ProcessGroup'),
+                require('nf.ParameterContexts'),
+                require('nf.ProcessGroupConfiguration'),
+                require('lodash')));
+    } else {
+        nf.ParameterProvider = factory(root.$,
+            root.Slick,
+            root.nf.ErrorHandler,
+            root.nf.Common,
+            root.nf.CanvasUtils,
+            root.nf.Dialog,
+            root.nf.Storage,
+            root.nf.Client,
+            root.nf.ControllerService,
+            root.nf.ControllerServices,
+            root.nf.UniversalCapture,
+            root.nf.CustomUi,
+            root.nf.Verify,
+            root.nf.Processor,
+            root.nf.ProcessGroup,
+            root.nf.ParameterContexts,
+            root.nf.ProcessGroupConfiguration,
+            root._);
+    }
+}(this, function ($, Slick, nfErrorHandler, nfCommon, nfCanvasUtils, nfDialog, nfStorage, nfClient, nfControllerService, nfControllerServices, nfUniversalCapture, nfCustomUi, nfVerify, nfProcessor, nfProcessGroup, nfParameterContexts, nfProcessGroupConfiguration, _) {
+    'use strict';
+
+    var nfSettings;
+    var fetchParameterProviderOptions;
+
+    var config = {
+        edit: 'edit',
+        readOnly: 'read-only',
+        urls: {
+            parameterProviders: '../nifi-api/parameter-providers',
+            api: '../nifi-api'
+        }
+    };
+
+    // load the controller services
+    var controllerServicesUri = config.urls.api + '/flow/controller/controller-services';
+
+    var groupCount = 0;
+
+    var parameterCount = 0;
+    var sensitiveParametersArray = [];
+
+    var SENSITIVE = 'SENSITIVE';
+    var NON_SENSITIVE = 'NON_SENSITIVE';
+
+    var parameterGroupsGridOptions = {
+        autosizeColsMode: Slick.GridAutosizeColsMode.LegacyForceFit,
+
[GitHub] [nifi] exceptionfactory commented on pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
exceptionfactory commented on PR #6445: URL: https://github.com/apache/nifi/pull/6445#issuecomment-1256821855 Thanks @joewitt, agreed, re-evaluating the approach at a higher level sounds like a good topic for a separate issue. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] joewitt commented on pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
joewitt commented on PR #6445: URL: https://github.com/apache/nifi/pull/6445#issuecomment-1256821429 haha - yeah already pushed (force) the PORT PORT PORT thing. I also thought about the hashing which would be a fine approach. But I'd do that as part of what appears to be a lot of surgery needed in this. It has grown in purposes/reuse for other patterns and frankly could use a substantial house cleaning.
[GitHub] [nifi] exceptionfactory commented on a diff in pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
exceptionfactory commented on code in PR #6445: URL: https://github.com/apache/nifi/pull/6445#discussion_r979130928

## nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/SFTPTransfer.java:

@@ -580,12 +587,23 @@ private String getMessage(final SFTPException e) { } protected SFTPClient getSFTPClient(final FlowFile flowFile) throws IOException {
+final String evaledHostname = ctx.getProperty(HOSTNAME).evaluateAttributeExpressions(flowFile).getValue();
+final String evaledPort = ctx.getProperty(PORT).evaluateAttributeExpressions(flowFile).getValue();
+final String evaledUsername = ctx.getProperty(PORT).evaluateAttributeExpressions(flowFile).getValue();
+final String evaledPassword = ctx.getProperty(PORT).evaluateAttributeExpressions(flowFile).getValue();
+final String evaledPrivateKeyPath = ctx.getProperty(PORT).evaluateAttributeExpressions(flowFile).getValue();
+final String evaledPrivateKeyPassphrase = ctx.getProperty(PORT).evaluateAttributeExpressions(flowFile).getValue();

Review Comment: It looks like the property descriptor `PORT` reference needs to be replaced with the appropriate property descriptor for the username, password, and private key values.
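The copy-and-paste hazard flagged in the review above can be made concrete with a small standalone sketch (plain Java with a map standing in for NiFi's property API; the key names are hypothetical): when every lookup reuses the PORT key, the username, password, and private-key fields all silently resolve to the port value.

```java
import java.util.Map;

// Standalone illustration of the review comment above: each connection field
// must be resolved against its own property key. These names are stand-ins,
// not NiFi's actual PropertyDescriptor API.
public class DescriptorLookup {
    static final String HOSTNAME = "hostname";
    static final String PORT = "port";
    static final String USERNAME = "username";
    static final String PASSWORD = "password";

    // Stand-in for ctx.getProperty(X).evaluateAttributeExpressions(flowFile).getValue()
    static String evaluate(Map<String, String> flowFileValues, String key) {
        return flowFileValues.get(key);
    }

    public static void main(String[] args) {
        Map<String, String> values = Map.of(
                HOSTNAME, "sftp.example.com",
                PORT, "22",
                USERNAME, "nifi-a",
                PASSWORD, "secret");

        // Buggy pattern from the diff: PORT reused for the username lookup.
        String buggyUsername = evaluate(values, PORT);
        // Corrected pattern: the username is resolved with its own key.
        String username = evaluate(values, USERNAME);

        System.out.println(buggyUsername); // prints the port value, "22"
        System.out.println(username);      // prints "nifi-a"
    }
}
```

Because every value is a valid String, the compiler cannot catch this mistake; only a review or a test comparing the resolved values against their sources will.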
[jira] [Created] (NIFI-10541) PutSFTP listing logic fails when credentials/location come from flow file attributes
Joe Witt created NIFI-10541: --- Summary: PutSFTP listing logic fails when credentials/location come from flow file attributes Key: NIFI-10541 URL: https://issues.apache.org/jira/browse/NIFI-10541 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.17.0, 1.18.0 Reporter: Joe Witt In fixing NIFI-10532 it was discovered that PutSFTP logic for listing directories to check if the required directories exist will not work when key values come from flowfile attributes such as login credentials or destination directory or host/port data. This might have worked before NIFI-10532 but only by accident. The solution will be to inject the flowfile and resolve its values properly when constructing the sftp client. The performance will be poor generally in such cases but this is a known limitation of this level of flexibility. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] joewitt commented on pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
joewitt commented on PR #6445: URL: https://github.com/apache/nifi/pull/6445#issuecomment-1256818257 The issue doesn't appear to occur on the previous version, though the change should close the risk, and it also tests well. Testing did reveal that the file listing logic doesn't work when parameters come from flow files, as is needed in the PutSFTP case, even just to check for missing directories. That can be addressed in a later issue as it isn't a risk of sending data to the wrong destination.
[GitHub] [nifi] joewitt commented on pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
joewitt commented on PR #6445: URL: https://github.com/apache/nifi/pull/6445#issuecomment-1256803484 Verified behavior before was not correct. Verified behavior after is working as expected. Verified logic in the provided example from NIFI-10532 before/after, specifically for FTP processors. Tested behavior of PutFTP, GetFTP, ListFTP, FetchFTP. Need to check SFTP now as well, but the exact same change was made.
[GitHub] [nifi] joewitt commented on pull request #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
joewitt commented on PR #6445: URL: https://github.com/apache/nifi/pull/6445#issuecomment-1256603422 Need to do some manual testing now on before/after. Have only walked through the code and made the suggested change. Will share details soon.
[GitHub] [nifi] joewitt opened a new pull request, #6445: NIFI-10532 ensuring client gets reset if any of the key values host/p…
joewitt opened a new pull request, #6445: URL: https://github.com/apache/nifi/pull/6445 …ort/user/pw change on a per ff basis

# Summary

[NIFI-0](https://issues.apache.org/jira/browse/NIFI-0)

# Tracking

Please complete the following tracking steps prior to pull request creation.

### Issue Tracking

- [ ] [Apache NiFi Jira](https://issues.apache.org/jira/browse/NIFI) issue created

### Pull Request Tracking

- [ ] Pull Request title starts with Apache NiFi Jira issue number, such as `NIFI-0`
- [ ] Pull Request commit message starts with Apache NiFi Jira issue number, such as `NIFI-0`

### Pull Request Formatting

- [ ] Pull Request based on current revision of the `main` branch
- [ ] Pull Request refers to a feature branch with one commit containing changes

# Verification

Please indicate the verification steps performed prior to pull request creation.

### Build

- [ ] Build completed using `mvn clean install -P contrib-check`
  - [ ] JDK 8
  - [ ] JDK 11
  - [ ] JDK 17

### Licensing

- [ ] New dependencies are compatible with the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0) according to the [License Policy](https://www.apache.org/legal/resolved.html)
- [ ] New dependencies are documented in applicable `LICENSE` and `NOTICE` files

### Documentation

- [ ] Documentation formatting appears as expected in rendered files
[jira] [Updated] (NIFI-10540) Use single Checkstyle configuration file to configure Maven and Intellij Checkstyle plugins
[ https://issues.apache.org/jira/browse/NIFI-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz updated NIFI-10540: Description: In the [Contributor Guide|https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-IntelliJIDEAUsers] it is suggested to use a Checkstyle configuration file extracted from the top level pom when using the Intellij Checkstyle plugin. In order to avoid possible inconsistencies between what is in that configuration file and what is in the top level pom.xml, place a Checkstyle configuration file as part of the NIFI code base and reference it in the plugin configuration as detailed in [Using a Custom Checkstyle Checker Configuration|https://maven.apache.org/plugins/maven-checkstyle-plugin/examples/custom-checker-config.html] and use the same file for configuring the Intellij Checkstyle plugin (was: I noticed in the [Contributor Guide|https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-IntelliJIDEAUsers] it is suggested to use a Checkstyle configuration file extracted from the top level pom when using the Intellij Checkstyle plugin. 
In order to avoid possible inconsistencies between what is in that configuration file and what is in the top level pom.xml, place a Checkstyle configuration file as part of the NIFI code base and reference it in the plugin configuration as detailed in [Using a Custom Checkstyle Checker Configuration|https://maven.apache.org/plugins/maven-checkstyle-plugin/examples/custom-checker-config.html] and use the same file for configuring the Intellij Checkstyle plugin) > Use single Checkstyle configuration file to configure Maven and Intellij > Checkstyle plugins > --- > > Key: NIFI-10540 > URL: https://issues.apache.org/jira/browse/NIFI-10540 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Daniel Stieglitz >Assignee: Daniel Stieglitz >Priority: Trivial > > In the [Contributor > Guide|https://cwiki.apache.org/confluence/display/NIFI/Contributor+Guide#ContributorGuide-IntelliJIDEAUsers] > it is suggested to use a Checkstyle configuration file extracted from the > top level pom when using the Intellij Checkstyle plugin. In order to avoid > possible inconsistencies between what is in that configuration file and what > is in the top level pom.xml, place a Checkstyle configuration file as part of > the NIFI code base and reference it in the plugin configuration as detailed > in [Using a Custom Checkstyle Checker > Configuration|https://maven.apache.org/plugins/maven-checkstyle-plugin/examples/custom-checker-config.html] > and use the same file for configuring the Intellij Checkstyle plugin
[jira] [Updated] (NIFI-10540) Use single Checkstyle configuration file to configure Maven and Intellij Checkstyle plugins
[ https://issues.apache.org/jira/browse/NIFI-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz updated NIFI-10540: Summary: Use single Checkstyle configuration file to configure Maven and Intellij Checkstyle plugins (was: Use Checkstyle configuration file to allow configuring Maven and Intellij Checkstyle plugins)
[jira] [Updated] (NIFI-10540) Use single Checkstyle configuration file to configure Maven and Intellij Checkstyle plugins
[ https://issues.apache.org/jira/browse/NIFI-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz updated NIFI-10540: Priority: Trivial (was: Minor)
[jira] [Updated] (NIFI-10540) Use Checkstyle configuration file to allow configuring Maven and Intellij Checkstyle plugins
[ https://issues.apache.org/jira/browse/NIFI-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz updated NIFI-10540: Priority: Minor (was: Major)
[jira] [Assigned] (NIFI-10540) Use Checkstyle configuration file to allow configuring Maven and Intellij Checkstyle plugins
[ https://issues.apache.org/jira/browse/NIFI-10540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Stieglitz reassigned NIFI-10540: --- Assignee: Daniel Stieglitz
[jira] [Created] (NIFI-10540) Use Checkstyle configuration file to allow configuring Maven and Intellij Checkstyle plugins
Daniel Stieglitz created NIFI-10540: --- Summary: Use Checkstyle configuration file to allow configuring Maven and Intellij Checkstyle plugins Key: NIFI-10540 URL: https://issues.apache.org/jira/browse/NIFI-10540 Project: Apache NiFi Issue Type: Improvement Reporter: Daniel Stieglitz
[jira] [Commented] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608864#comment-17608864 ] Joe Witt commented on NIFI-10532: - If the hostname, port, username, or password, as evaluated against each flowfile, differs from the values used for the previous transfer, then we need to close the connection and make a new one. We need to make it clear to users that this usage pattern is convenient but inefficient, so it should not be used when performance matters. Also, in general, it is never a great idea to put sensitive values in flowfile attributes. > PutFTP does not group the batch by user name > > > Key: NIFI-10532 > URL: https://issues.apache.org/jira/browse/NIFI-10532 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.17.0, 1.16.1 >Reporter: Christoph Langheld >Assignee: Joe Witt >Priority: Critical > Fix For: 1.18.0 > > Attachments: 01-PutFTP-ProcessGroup.png, 02-PutFTP-Setting.png, > 03-PutFTP-Result.png, PutFTP-Bug.xml > > > Hello, > for the PutFTP processor we set the host, name, password, port, and target > directory dynamically via UpdateAttribute. > We now have the problem that the PutFTP processor transmits every file as > the same user even when the ftp user name changes. The target host does not > change, only the user. > To reproduce, I attached a process group as a template ([^PutFTP-Bug.xml]). You > have to adapt the ftp server settings within the UpdateAttribute processor to > your environment. > The process group generates 50 flow files. > UpdateAttribute sets the ftp user credentials and sets a filename prefix > (NIFI-A_ and NIFI-B_, respectively). > If everything worked correctly, all files with prefix NIFI-A_ would be > transferred to the ftp server as user nifi-a and the rest as user nifi-b. > But every file is transferred as the same user (nifi-a). > PutFTP should group the batch by host, login credentials (user, password, > port), and target directory.
> !01-PutFTP-ProcessGroup.png|width=639,height=607! > PutFTP settings: > !02-PutFTP-Setting.png|width=750,height=523! > Result on the ftp server (logged in as user nifi-a): > !03-PutFTP-Result.png|width=268,height=482! > > Thank you and regards > Christoph >
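The reset-on-change behavior described in the comment above can be sketched in a few lines of standalone Java. The class and field names here are hypothetical stand-ins, not NiFi's actual FTPTransfer/SFTPTransfer code: the idea is simply to remember the last evaluated connection values and discard the cached client whenever any of them differs.

```java
import java.util.Objects;

// Standalone sketch of the fix idea for NIFI-10532; names are illustrative,
// not the real NiFi FileTransfer implementation.
public class ResettableClientCache {
    private String lastHost;
    private String lastPort;
    private String lastUser;
    private String lastPassword;
    private Object client;        // stand-in for a real FTP/SFTP client
    int connectionsOpened = 0;

    public Object getClient(String host, String port, String user, String password) {
        final boolean identityChanged = !Objects.equals(host, lastHost)
                || !Objects.equals(port, lastPort)
                || !Objects.equals(user, lastUser)
                || !Objects.equals(password, lastPassword);
        if (client == null || identityChanged) {
            // Close the stale connection (omitted here) and open a fresh one,
            // so flowfiles carrying different credentials never share a session.
            client = new Object();
            connectionsOpened++;
            lastHost = host;
            lastPort = port;
            lastUser = user;
            lastPassword = password;
        }
        return client;
    }
}
```

As the comment notes, evaluating these values per flowfile makes reconnects frequent and therefore slow, which is the trade-off users accept for this flexibility.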
[jira] [Resolved] (NIFI-5717) FTPTransfer can't connect to different users in a same host
[ https://issues.apache.org/jira/browse/NIFI-5717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt resolved NIFI-5717. Resolution: Duplicate > FTPTransfer can't connect to different users in a same host > --- > > Key: NIFI-5717 > URL: https://issues.apache.org/jira/browse/NIFI-5717 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.7.1 >Reporter: Daniel do Vale >Priority: Major > Original Estimate: 1h > Remaining Estimate: 1h > > 1) I have one FTP server host which different users can access. > 2) I'm trying to connect with 2 different users on the same host. These users > have different root folders configured inside my host. > 3) The first user can connect to FTP without any problems. > 4) The second user can't connect to FTP properly, because the NiFi FTPTransfer > class has a check that sees whether the host is the same as in the previous > access; if so, it doesn't reconnect (it just reuses the current connection). For this > reason, the folder paths don't match when I try to get some files. > > The problem occurs inside the "getClient" method in the FTPTransfer util class.
[jira] [Commented] (NIFI-10473) Parameter Provider Fetch REST call authorization check is too restrictive
[ https://issues.apache.org/jira/browse/NIFI-10473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608863#comment-17608863 ] ASF subversion and git services commented on NIFI-10473: Commit ece83709f4769fe4b0950c1aa4b32a559599385f in nifi's branch refs/heads/main from Joe Gresock [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=ece83709f4 ] NIFI-10473: Removing referencing components check on param provider f… (#6388) * NIFI-10473: Removing referencing components check on param provider fetch * NIFI-10473: Adding parameter status DTO to ParameterProviderDTO * Allowing parameterStatus to be populated even when no parameters were updated * Adding ParameterStatus enum for parameter fetching * Adding MISSING_BUT_REFERENCED ParameterStatus This closes #6388 > Parameter Provider Fetch REST call authorization check is too restrictive > - > > Key: NIFI-10473 > URL: https://issues.apache.org/jira/browse/NIFI-10473 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Joe Gresock >Assignee: Joe Gresock >Priority: Minor > Fix For: 1.18.0 > > Time Spent: 0.5h > Remaining Estimate: 0h > > When bringing up the Fetch Parameters dialog, if the user is not authorized > on any referencing component, the dialog fails to load. This is overly > restrictive, as NiFi already prevents applying the parameters in this > scenario.
[GitHub] [nifi] mcgilman merged pull request #6388: NIFI-10473: Removing referencing components check on param provider f…
mcgilman merged PR #6388: URL: https://github.com/apache/nifi/pull/6388
[GitHub] [nifi] markobean commented on pull request #6254: NIFI-10287 ExecuteScript - Allow python scripts to use external modules
markobean commented on PR #6254: URL: https://github.com/apache/nifi/pull/6254#issuecomment-1256482258 Unit test looks good. I built the code (including -Pcontrib-check), and all looks good. I tested by instantiating an ExecuteScript processor. It was configured with a module directory containing a .py module file similar to the one used in the unit test. Ran data through and it worked like a charm. I changed the module code and re-ran a flowfile (without stopping the processor). The changes were not reflected in the processed flowfile. However, restarting the processor picks up the module update. I believe this is expected behavior, since we do not want to reload modules (or even check whether they should be reloaded) on every flowfile; that would be overly burdensome. Overall, looks good to merge to me. Thanks @NissimShiman !
[jira] [Updated] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-10532: Priority: Critical (was: Major)
[jira] [Updated] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Witt updated NIFI-10532: Fix Version/s: 1.18.0
[jira] [Assigned] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joe Witt reassigned NIFI-10532:
Assignee: Joe Witt
[jira] [Commented] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608850#comment-17608850 ]

Joe Witt commented on NIFI-10532:
Likely impacts PutFTP, PutSFTP, FetchFTP, and FetchSFTP.
[jira] [Created] (NIFI-10539) PutEmail with chinese characters has erroneous values in attachment when using Big5 character set
Emilio Setiadarma created NIFI-10539:

Summary: PutEmail with Chinese characters has erroneous values in attachment when using Big5 character set
Key: NIFI-10539
URL: https://issues.apache.org/jira/browse/NIFI-10539
Project: Apache NiFi
Issue Type: Bug
Components: Extensions
Reporter: Emilio Setiadarma
Assignee: Emilio Setiadarma

# If the GenerateFlowFile character set is Big5 and the PutEmail character set is UTF-8, the message is correct, but the attachment is wrong.
# If the GenerateFlowFile character set is Big5 and the PutEmail character set is Big5, the message is correct, but the attachment is wrong.
# If the GenerateFlowFile character set is UTF-8 and the PutEmail character set is Big5, the message is wrong, but the attachment is correct.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
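The three outcomes above are consistent with bytes being encoded with one character set and decoded with another. A minimal, self-contained Java illustration of that mismatch (generic JDK code, not NiFi's PutEmail implementation):

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetMismatchDemo {
    public static void main(String[] args) {
        String original = "中文";  // two Chinese characters

        // Encode with Big5 (two bytes per character here)...
        byte[] big5Bytes = original.getBytes(Charset.forName("Big5"));

        // ...then decode the same bytes as UTF-8: the Big5 byte sequences
        // are not valid UTF-8, so the text comes back corrupted.
        String misread = new String(big5Bytes, StandardCharsets.UTF_8);

        System.out.println(misread.equals(original));  // false
    }
}
```

If the side producing the bytes and the side interpreting them (here, the message body vs. the attachment path) disagree on the character set in this way, one of the two outputs ends up garbled, matching the combinations reported above.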
[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1391: MINIFICPP-1846 - Json configuration support part 1
fgerlits commented on code in PR #1391: URL: https://github.com/apache/nifi-minifi-cpp/pull/1391#discussion_r976409031 ## libminifi/include/Defaults.h: ## @@ -19,12 +19,14 @@ #ifdef WIN32 #define DEFAULT_NIFI_CONFIG_YML "\\conf\\config.yml" +#define DEFAULT_NIFI_CONFIG_JSON "\\conf\\config.json" #define DEFAULT_NIFI_PROPERTIES_FILE "\\conf\\minifi.properties" #define DEFAULT_LOG_PROPERTIES_FILE "\\conf\\minifi-log.properties" #define DEFAULT_UID_PROPERTIES_FILE "\\conf\\minifi-uid.properties" #define DEFAULT_BOOTSTRAP_FILE "\\conf\\bootstrap.conf" #else #define DEFAULT_NIFI_CONFIG_YML "./conf/config.yml" +#define DEFAULT_NIFI_CONFIG_JSON "./conf/config.json" #define DEFAULT_NIFI_PROPERTIES_FILE "./conf/minifi.properties" #define DEFAULT_LOG_PROPERTIES_FILE "./conf/minifi-log.properties" #define DEFAULT_UID_PROPERTIES_FILE "./conf/minifi-uid.properties" Review Comment: Not a problem, just a note: if #1409 gets merged, then these will need to be changed to relative paths `conf\\config.json` and `conf/config.json`. I hope git is smart enough to flag this as a conflict, but it may not be. ## libminifi/include/core/flow/Node.h: ## @@ -0,0 +1,143 @@ +/** + * + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+
+#pragma once
+
+#include <memory>
+#include <optional>
+#include <string>
+#include <string_view>
+#include "nonstd/expected.hpp"
+
+namespace org::apache::nifi::minifi::core::flow {
+
+class Node {
+ public:
+  struct Cursor {
+    int line;
+    int column;
+    int pos;
+  };
+
+  class Impl;
+  class Iterator {
+   public:
+    class Value;
+
+    class Impl {
+     public:
+      virtual Impl& operator++() = 0;
+      virtual bool operator==(const Impl& other) const = 0;
+      virtual Value operator*() const = 0;
+      bool operator!=(const Impl& other) const { return !(*this == other); }
+
+      virtual std::unique_ptr<Impl> clone() const = 0;
+      virtual ~Impl() = default;
+    };
+
+    Iterator& operator++() {
+      impl_->operator++();
+      return *this;
+    }
+
+    explicit Iterator(std::unique_ptr<Impl> impl) : impl_(std::move(impl)) {}
+    Iterator(const Iterator& other) : impl_(other.impl_->clone()) {}
+    Iterator(Iterator&&) = default;
+    Iterator& operator=(const Iterator& other) {
+      if (this == &other) {
+        return *this;
+      }
+      impl_ = other.impl_->clone();
+      return *this;
+    }
+    Iterator& operator=(Iterator&&) = default;
+
+    bool operator==(const Iterator& other) const { return impl_->operator==(*other.impl_); }
+    bool operator!=(const Iterator& other) const { return !(*this == other); }
+
+    Value operator*() const;
+
+   private:
+    std::unique_ptr<Impl> impl_;
+  };
+
+  class Impl {
+   public:
+    virtual explicit operator bool() const = 0;
+    virtual bool isSequence() const = 0;
+    virtual bool isMap() const = 0;
+    virtual bool isNull() const = 0;
+    virtual bool isScalar() const = 0;
+
+    virtual nonstd::expected<std::string, std::exception_ptr> getString() const = 0;
+    virtual nonstd::expected<int, std::exception_ptr> getInt() const = 0;
+    virtual nonstd::expected<unsigned int, std::exception_ptr> getUInt() const = 0;
+    virtual nonstd::expected<bool, std::exception_ptr> getBool() const = 0;
+    virtual nonstd::expected<int64_t, std::exception_ptr> getInt64() const = 0;
+    virtual nonstd::expected<uint64_t, std::exception_ptr> getUInt64() const = 0;
+
+    virtual std::string getDebugString() const = 0;
+
+    virtual size_t size() const = 0;
+    virtual Iterator begin() const = 0;
+    virtual Iterator end() const = 0;
+    virtual Node operator[](std::string_view key) const = 0;
+
+    virtual std::optional<Cursor> getCursor() const { return std::nullopt; }

Review Comment: I think it would be nicer to make `getCursor` pure virtual, too, and move this dummy implementation to `JsonNode`.

## libminifi/include/core/flow/README.md: ## @@ -0,0 +1,57 @@
+## Differences between JSON and YAML implementation
+
+### YAML
+
+The possible types of a `YAML::Node` are:
+* Undefined
+* Null
+* Map
+* Sequence
+* Scalar
+
+#### Undefined
+
+The result of querying any member of `Null`, querying non-existing members of a `Map`,
+or non-existing indices of a `Sequence`.
+
+Note that for `Map`s, string conversion applies: `map[0]` could be valid, given a key `"0"`,
+while for `Sequence`s, string index parsing does NOT happen: `seq["0"]`
+will return
[jira] [Commented] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17608844#comment-17608844 ]

Joe Witt commented on NIFI-10532:
Hello. It would never group them by their values. As a flow file comes in, the processor decides what to do: reuse an existing connection because the values match, or make a new one because they changed (host, user name, etc.). This approach is not efficient; I recommend running two different PutFTP processors and not driving these values from flow file attributes if efficiency/batching is desired. That said, this model is convenient at times, so we do want it to work.

The bug is likely in https://github.com/apache/nifi/blob/main/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/util/FTPTransfer.java#L547-L559, where we're only validating that the hostname hasn't changed. We need to validate that the user hasn't changed as well. Will look more, but that jumps out.
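One way to implement the grouping the reporter asks for is to derive a composite key from the per-FlowFile attributes and reconnect whenever it changes. A sketch under that assumption (`ConnectionKey` and its fields are illustrative names, not FTPTransfer's actual API):

```java
// Illustrative composite key for deciding whether a cached FTP connection
// can be reused; field names do not match FTPTransfer's real code.
final class ConnectionKey {
    final String host;
    final int port;
    final String username;

    ConnectionKey(String host, int port, String username) {
        this.host = host;
        this.port = port;
        this.username = username;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnectionKey)) {
            return false;
        }
        ConnectionKey other = (ConnectionKey) o;
        // A connection is only reusable if host, port, AND user all match.
        return port == other.port
                && host.equals(other.host)
                && username.equals(other.username);
    }

    @Override
    public int hashCode() {
        return java.util.Objects.hash(host, port, username);
    }
}
```

Before transferring each FlowFile, the processor would compute the key from the evaluated attributes; if it differs from the key of the cached connection, it would close the old connection and open a new one. The same idea extends to the password and remote directory, omitted here for brevity.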
[jira] [Created] (NIFI-10538) UI changes to support RegistryClient as an extension point
Shane Ardell created NIFI-10538:

Summary: UI changes to support RegistryClient as an extension point
Key: NIFI-10538
URL: https://issues.apache.org/jira/browse/NIFI-10538
Project: Apache NiFi
Issue Type: Sub-task
Reporter: Shane Ardell
Assignee: Shane Ardell

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (NIFI-10526) NiFi 1.15.2 -- GUI freezing when remote process group loses connection
[ https://issues.apache.org/jira/browse/NIFI-10526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608810#comment-17608810 ] Evan Falkenstine commented on NIFI-10526: - Roger that. A newer version isn't necessarily an option at the moment but I'll grab a dump. Thank you! > NiFi 1.15.2 -- GUI freezing when remote process group loses connection > -- > > Key: NIFI-10526 > URL: https://issues.apache.org/jira/browse/NIFI-10526 > Project: Apache NiFi > Issue Type: Bug > Components: Core UI >Affects Versions: 1.15.0 > Environment: Redhat 7.9 >Reporter: Evan Falkenstine >Priority: Major > > When my Remote Process Groups lose connection to the distant end the NiFi GUI > freezes and won't let me stop/start processors or create new objects. I'm > able to interact with the GUI such as context menus but none of the actions > go through. I can use the context menu to disable the RPG but it takes 10-15 > minutes for the disable action to apply, lining up with the rest of the GUI > "unlocking". Restarting NiFi does not help to break the connection enough > because during bootup it starts the connection attempt before I can stop the > RPG. I've had success with editing the flow.xml.gz file and changing the RPG > to disabled and restarting NiFi. This happens 100% of the time on NiFi 1.15.2. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] tpalfy commented on pull request #6435: NIFI-10528 Extract JSON record readers to util module for use in Salesforce NAR
tpalfy commented on PR #6435: URL: https://github.com/apache/nifi/pull/6435#issuecomment-1256343815 LGTM Thanks for the improvement @bbende! Merged to main. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Commented] (NIFI-10528) Extract JSON readers to util module for use in Salesforce NAR
[ https://issues.apache.org/jira/browse/NIFI-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17608800#comment-17608800 ] ASF subversion and git services commented on NIFI-10528: Commit 27e3ee191593b0913da897c6dcad201127fc993a in nifi's branch refs/heads/main from Bryan Bende [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=27e3ee1915 ] NIFI-10528 Create nifi-json-record-utils and updated Salesforce NAR dependencies to use it This closes #6435. Signed-off-by: Tamas Palfy > Extract JSON readers to util module for use in Salesforce NAR > - > > Key: NIFI-10528 > URL: https://issues.apache.org/jira/browse/NIFI-10528 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > Time Spent: 1h 20m > Remaining Estimate: 0h > > The Salesforce NAR currently depends on the record serialization services NAR > in order to use the JsonTreeRecordReader. We should extract these readers to > a util module so that they can be reused and the Salesforce NAR can depend on > standard services API NAR. > Also the OAuth2 API jar is being included when it should be provided from > standard services API. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] asfgit closed pull request #6435: NIFI-10528 Extract JSON record readers to util module for use in Salesforce NAR
asfgit closed pull request #6435: NIFI-10528 Extract JSON record readers to util module for use in Salesforce NAR URL: https://github.com/apache/nifi/pull/6435
[jira] [Updated] (NIFI-10532) PutFTP does not group the batch by user name
[ https://issues.apache.org/jira/browse/NIFI-10532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christoph Langheld updated NIFI-10532:
Affects Version/s: 1.17.0
[jira] [Created] (NIFI-10537) "Summary bar" does not compute remote processor connections
Michal Šunka created NIFI-10537:

Summary: "Summary bar" does not compute remote processor connections
Key: NIFI-10537
URL: https://issues.apache.org/jira/browse/NIFI-10537
Project: Apache NiFi
Issue Type: Bug
Affects Versions: 1.15.3
Reporter: Michal Šunka
Attachments: image-2022-09-23-16-23-58-723.png

The "summary bar", which shows the total count/size of flow files and the numbers of running, stopped, etc. processors, does not show the count of remote connections:

!image-2022-09-23-16-23-58-723.png!

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[GitHub] [nifi] turcsanyip commented on a diff in pull request #6158: NIFI-10152 Storage client caching in Azure ADLS processors
turcsanyip commented on code in PR #6158: URL: https://github.com/apache/nifi/pull/6158#discussion_r978594087 ## nifi-nar-bundles/nifi-azure-bundle/nifi-azure-processors/src/main/java/org/apache/nifi/processors/azure/storage/utils/DataLakeServiceClientFactory.java: ## @@ -0,0 +1,124 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */
+package org.apache.nifi.processors.azure.storage.utils;
+
+import com.azure.core.credential.AccessToken;
+import com.azure.core.credential.TokenCredential;
+import com.azure.core.http.HttpClient;
+import com.azure.core.http.ProxyOptions;
+import com.azure.core.http.netty.NettyAsyncHttpClientBuilder;
+import com.azure.identity.ClientSecretCredential;
+import com.azure.identity.ClientSecretCredentialBuilder;
+import com.azure.identity.ManagedIdentityCredential;
+import com.azure.identity.ManagedIdentityCredentialBuilder;
+import com.azure.storage.common.StorageSharedKeyCredential;
+import com.azure.storage.file.datalake.DataLakeServiceClient;
+import com.azure.storage.file.datalake.DataLakeServiceClientBuilder;
+import com.github.benmanes.caffeine.cache.Cache;
+import com.github.benmanes.caffeine.cache.Caffeine;
+import org.apache.commons.lang3.StringUtils;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.services.azure.storage.ADLSCredentialsDetails;
+import reactor.core.publisher.Mono;
+
+public class DataLakeServiceClientFactory {
+
+    private static final long STORAGE_CLIENT_CACHE_SIZE = 10;
+
+    private final ComponentLog logger;
+
+    private final Cache<ADLSCredentialsDetails, DataLakeServiceClient> clientCache;
+
+    public DataLakeServiceClientFactory(ComponentLog logger) {
+        this.logger = logger;
+        this.clientCache = createCache();
+    }
+
+    private Cache<ADLSCredentialsDetails, DataLakeServiceClient> createCache() {
+        return Caffeine.newBuilder()
+                .maximumSize(STORAGE_CLIENT_CACHE_SIZE)
+                .build();
+    }
+
+    public Cache<ADLSCredentialsDetails, DataLakeServiceClient> getCache() {
+        return clientCache;
+    }

Review Comment: @nandorsoma I think it is a bit of an overkill to expose internal fields publicly for integration tests only. Furthermore, the IT does not really test what we should check: it asserts the cache size at the end (1 client in the cache), but it is more important to check how many client instance creations have happened (only 1). It could be tested with a unit test, and in that case the public getter would not be needed. -- This is an automated message from the Apache Git Service.
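The unit-test approach suggested above can be sketched with a counting factory: assert on how many clients were actually constructed rather than on internal cache state. A minimal stand-in (plain ConcurrentHashMap instead of Caffeine, Object instead of a real DataLakeServiceClient):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for the client factory: the test observes the construction count,
// not cache internals, so no public getter for the cache is required.
class CountingClientFactory {
    private final ConcurrentHashMap<String, Object> cache = new ConcurrentHashMap<>();
    final AtomicInteger creations = new AtomicInteger();

    Object getClient(String credentials) {
        // computeIfAbsent builds a client only on a cache miss
        return cache.computeIfAbsent(credentials, c -> {
            creations.incrementAndGet();
            return new Object();  // placeholder for a DataLakeServiceClient
        });
    }
}
```

A test would call `getClient` twice with the same credentials and assert that `creations` equals 1. With the real factory, the same idea could be applied by injecting a client-builder function that the test wraps with a counter, keeping the cache itself private.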
[jira] [Resolved] (NIFI-10530) MiNiFi C2 Request Compression
[ https://issues.apache.org/jira/browse/NIFI-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ferenc Kis resolved NIFI-10530.
Resolution: Fixed

> MiNiFi C2 Request Compression
>
> Key: NIFI-10530
> URL: https://issues.apache.org/jira/browse/NIFI-10530
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Ferenc Kis
> Assignee: Ferenc Kis
> Priority: Major
> Time Spent: 40m
> Remaining Estimate: 0h
>
> C2 requests (heartbeat and ack) may saturate the network when a large number of agents are communicating with the C2 server. By introducing GZip compression for the C2 communication, we can reduce the network traffic.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
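The compression described in the ticket can be illustrated with the JDK's built-in GZIPOutputStream (a generic sketch, not the actual MiNiFi C2 client code):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class HeartbeatCompression {
    // Compress a request body (e.g. a JSON heartbeat) before sending it.
    static byte[] gzip(String body) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(body.getBytes(StandardCharsets.UTF_8));
        }  // closing the stream flushes the gzip trailer
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Heartbeats are highly repetitive, so they compress well.
        String heartbeat = "{\"agent\":\"minifi\"}".repeat(100);
        byte[] compressed = gzip(heartbeat);
        System.out.println(compressed.length < heartbeat.length());  // true
    }
}
```

The sender would typically also set a `Content-Encoding: gzip` request header so the receiving server knows to decompress the body before parsing it.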
[jira] [Resolved] (NIFI-10536) Getting java.lang.LinkageError when integrating Nifi with Hashicorp Vault
[ https://issues.apache.org/jira/browse/NIFI-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pierre Villard resolved NIFI-10536.
Fix Version/s: 1.18.0
Assignee: David Handermann
Resolution: Duplicate
[jira] [Created] (NIFI-10536) Getting java.lang.LinkageError when integrating Nifi with Hashicorp Vault
Ruchit Mathur created NIFI-10536: Summary: Getting java.lang.LinkageError when integrating Nifi with Hashicorp Vault Key: NIFI-10536 URL: https://issues.apache.org/jira/browse/NIFI-10536 Project: Apache NiFi Issue Type: Bug Components: Tools and Build Affects Versions: 1.16.1 Reporter: Ruchit Mathur Hello, We configured Nifi with Hashicorp Vault using Encrypt Configuration Tool as mentioned in official Docs. We were able to add Sensitive Properties (Keystore Passwords and Sensitive Key) in our Vault KV Path, but after Restarting Nifi we encountered following errors:- Please note we are not encountering this Error in Nifi version 1.15.3. After version 1.15.3 we are getting this error in all Releases. {code:java} Caused by: java.lang.LinkageError: loader constraint violation: when resolving method 'void org.springframework.http.client.HttpComponentsClientHttpRequestFactory.(org.apache.http.client.HttpClient)' the class loader org.apache.nifi.property.protection.loader.PropertyProtectionURLClassLoader @69c335c4 of the current class, org/springframework/vault/client/ClientHttpRequestFactoryFactory$HttpComponents, and the class loader org.apache.nifi.nar.NarClassLoader @5792c08c for the method's defining class, org/springframework/http/client/HttpComponentsClientHttpRequestFactory, have different Class objects for the type org/apache/http/client/HttpClient used in the signature (org.springframework.vault.client.ClientHttpRequestFactoryFactory$HttpComponents is in unnamed module of loader org.apache.nifi.property.protection.loader.PropertyProtectionURLClassLoader @69c335c4, parent loader org.eclipse.jetty.webapp.WebAppClassLoader @6d4502ca; org.springframework.http.client.HttpComponentsClientHttpRequestFactory is in unnamed module of loader org.apache.nifi.nar.NarClassLoader @5792c08c, parent loader org.apache.nifi.nar.NarClassLoader @46e190ed) at 
org.springframework.vault.client.ClientHttpRequestFactoryFactory$HttpComponents.usingHttpComponents(ClientHttpRequestFactoryFactory.java:333) at org.springframework.vault.client.ClientHttpRequestFactoryFactory.create(ClientHttpRequestFactoryFactory.java:130) at org.apache.nifi.vault.hashicorp.StandardHashiCorpVaultCommunicationService.(StandardHashiCorpVaultCommunicationService.java:59) at org.apache.nifi.properties.AbstractHashiCorpVaultSensitivePropertyProvider.(AbstractHashiCorpVaultSensitivePropertyProvider.java:43) at org.apache.nifi.properties.HashiCorpVaultKeyValueSensitivePropertyProvider.(HashiCorpVaultKeyValueSensitivePropertyProvider.java:31) at org.apache.nifi.properties.StandardSensitivePropertyProviderFactory.getProvider(StandardSensitivePropertyProviderFactory.java:230) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:992) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) at org.apache.nifi.properties.StandardSensitivePropertyProviderFactory.getSupportedProviders(StandardSensitivePropertyProviderFactory.java:152) at org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:164) at org.apache.nifi.properties.NiFiPropertiesLoader.load(NiFiPropertiesLoader.java:190) at org.apache.nifi.properties.NiFiPropertiesLoader.loadDefault(NiFiPropertiesLoader.java:215) at org.apache.nifi.properties.NiFiPropertiesLoader.loadDefaultWithKeyFromBootstrap(NiFiPropertiesLoader.java:103) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) ... 124 common frames omitted 2022-09-23 15:04:15,770 INFO [Thread-0] org.apache.nifi.NiFi Application Server shutdown started {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
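The root cause reported above is class loader isolation: the PropertyProtectionURLClassLoader and the NarClassLoader each resolve their own copy of org.apache.http.client.HttpClient, and classes with the same name loaded by different loaders are distinct types to the JVM. The following is an illustrative sketch of that isolation mechanism in plain Java; it is unrelated to the NiFi code base and the class names are hypothetical.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Illustrative sketch only (not NiFi code): why two class loaders can disagree
// about "the same" class. A loader constructed with a null parent delegates only
// to the bootstrap loader, so application-level classes are invisible to it and
// any library it needs must be re-defined from its own URLs. Two loaders that
// each define the same library class produce two distinct Class objects, which
// is the mismatch the JVM reports as a loader constraint violation.
public class LoaderIsolationDemo {

    // Returns true if the given class can be resolved through a loader that
    // bypasses the application class loader entirely.
    static boolean isVisibleToIsolatedLoader(String className) {
        URLClassLoader isolated = new URLClassLoader(new URL[0], null); // null parent = bootstrap only
        try {
            isolated.loadClass(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Bootstrap classes remain visible through the isolated loader...
        System.out.println(isVisibleToIsolatedLoader("java.lang.String"));    // true
        // ...but classes from the application class path do not.
        System.out.println(isVisibleToIsolatedLoader("LoaderIsolationDemo")); // false
    }
}
```

When a class re-defined inside such an isolated loader meets the same-named class from another loader in a method signature, the JVM raises exactly the kind of LinkageError shown in the stack trace.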
[GitHub] [nifi-minifi-cpp] fgerlits commented on a diff in pull request #1417: MINIFICPP-1834 Add VERSIONINFO resource file
fgerlits commented on code in PR #1417: URL: https://github.com/apache/nifi-minifi-cpp/pull/1417#discussion_r978522773 ## versioninfo.rc.in: ## @@ -0,0 +1,44 @@ +#include + +#define VER_FILEVERSION @PROJECT_VERSION_MAJOR@,@PROJECT_VERSION_MINOR@,@PROJECT_VERSION_PATCH@,0 +#define VER_FILEVERSION_STR "@PROJECT_VERSION_MAJOR@.@PROJECT_VERSION_MINOR@.@PROJECT_VERSION_PATCH@\0" + +#define VER_PRODUCTVERSION @PROJECT_VERSION_MAJOR@,@PROJECT_VERSION_MINOR@,@PROJECT_VERSION_PATCH@,0 +#define VER_PRODUCTVERSION_STR "@PROJECT_VERSION_MAJOR@.@PROJECT_VERSION_MINOR@.@PROJECT_VERSION_PATCH@\0" + +#ifndef DEBUG +#define VER_DEBUG 0 +#else +#define VER_DEBUG VS_FF_DEBUG +#endif + +VS_VERSION_INFO VERSIONINFO +FILEVERSION VER_FILEVERSION +PRODUCTVERSION VER_PRODUCTVERSION +FILEFLAGSMASK VS_FFI_FILEFLAGSMASK +FILEFLAGS VER_DEBUG +FILEOS VOS__WINDOWS32 +FILETYPE VFT_APP +BEGIN +BLOCK "StringFileInfo" +BEGIN +BLOCK "0409FDE9" +BEGIN +VALUE "LegalCopyright", "Apache License v2.0" +VALUE "CompanyName", "Apache Software Foundation" +VALUE "ProductName", "MiNiFi C++" Review Comment: fixed in b2598b784de173952a58c0a8c79377bc08412b43 and 4c7a5c317b598b4a459f7ffc271785f00cbd3dc2 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] Lehel44 commented on a diff in pull request #6379: NIFI-10463: Fix GetHubSpot incremental loading
Lehel44 commented on code in PR #6379: URL: https://github.com/apache/nifi/pull/6379#discussion_r978444386 ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java: ## @@ -187,61 +243,122 @@ private String getResponseBodyAsString(final ProcessContext context, final HttpR } } -private OutputStreamCallback parseHttpResponse(ProcessContext context, String endpoint, StateMap state, HttpResponseEntity response, AtomicInteger objectCountHolder) { +private OutputStreamCallback parseHttpResponse(final ProcessContext context, final HttpResponseEntity response, final AtomicInteger total, + final Map stateMap) { return out -> { try (final JsonParser jsonParser = JSON_FACTORY.createParser(response.body()); final JsonGenerator jsonGenerator = JSON_FACTORY.createGenerator(out, JsonEncoding.UTF8)) { +boolean isCursorAvailable = false; +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final String cursorKey = String.format(CURSOR_KEY_PATTERN, objectType); while (jsonParser.nextToken() != null) { +if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() +.equals("total")) { +jsonParser.nextToken(); +total.set(jsonParser.getIntValue()); +} if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() .equals("results")) { jsonParser.nextToken(); jsonGenerator.copyCurrentStructure(jsonParser); -objectCountHolder.incrementAndGet(); } final String fieldName = jsonParser.getCurrentName(); -if (CURSOR_PARAMETER.equals(fieldName)) { +if (PAGING_CURSOR.equals(fieldName)) { +isCursorAvailable = true; jsonParser.nextToken(); -Map newStateMap = new HashMap<>(state.toMap()); -newStateMap.put(endpoint, jsonParser.getText()); -updateState(context, newStateMap); +stateMap.put(cursorKey, jsonParser.getText()); break; } } +if (!isCursorAvailable) { +stateMap.put(cursorKey, NO_PAGING); +} } }; } -HttpUriBuilder getBaseUri(final ProcessContext 
context) { +URI getBaseUri(final ProcessContext context) { final String path = context.getProperty(OBJECT_TYPE).getValue(); return webClientServiceProvider.getHttpUriBuilder() .scheme(HTTPS) .host(API_BASE_URI) -.encodedPath(path); +.encodedPath(path + "/search") +.build(); } -private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri) { +private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri, final String filters) { +final JsonInputStreamConverter converter = new JsonInputStreamConverter(filters); return webClientServiceProvider.getWebClientService() -.get() +.post() .uri(uri) .header("Authorization", "Bearer " + accessToken) +.header("Content-Type", "application/json") +.body(converter.getInputStream(), OptionalLong.of(converter.getByteSize())) .retrieve(); } -private URI createUri(final ProcessContext context, final StateMap state) { -final String path = context.getProperty(OBJECT_TYPE).getValue(); -final HttpUriBuilder uriBuilder = getBaseUri(context); +String createIncrementalFilters(final ProcessContext context, final Map<String, String> stateMap) { +final String limit = context.getProperty(RESULT_LIMIT).getValue(); +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final HubSpotObjectType hubSpotObjectType = objectTypeLookupMap.get(objectType); +final Long incrDelayMs = context.getProperty(INCREMENTAL_DELAY).asTimePeriod(TimeUnit.MILLISECONDS); +final String startIncrementalKey = String.format("start: %s", objectType); +final String endIncrementalKey = String.format("end: %s", objectType); Review Comment: I think it's necessary because the user can query another object and then return to the previous object and overwrite the state.
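The concern raised above — state for one object type being clobbered when the user switches OBJECT_TYPE and later switches back — is avoided by namespacing each state key with the object type. A minimal hedged sketch of that idea follows; the key format and helper names are illustrative, not the actual GetHubSpot implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: keying the paging cursor by object type so that querying "contacts"
// cannot overwrite the cursor previously stored for "companies".
public class CursorStateDemo {
    static final String CURSOR_KEY_PATTERN = "paging_next: %s"; // assumed key format

    static void storeCursor(Map<String, String> stateMap, String objectType, String cursor) {
        stateMap.put(String.format(CURSOR_KEY_PATTERN, objectType), cursor);
    }

    static String cursorFor(Map<String, String> stateMap, String objectType) {
        return stateMap.get(String.format(CURSOR_KEY_PATTERN, objectType));
    }

    public static void main(String[] args) {
        Map<String, String> state = new HashMap<>();
        storeCursor(state, "companies", "cursor-A");
        storeCursor(state, "contacts", "cursor-B"); // does not clobber the "companies" entry
        System.out.println(cursorFor(state, "companies")); // cursor-A
    }
}
```

With a single shared key instead, the second storeCursor call would silently discard the first cursor, which is exactly the bug the reviewer describes.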
[GitHub] [nifi] Lehel44 commented on a diff in pull request #6379: NIFI-10463: Fix GetHubSpot incremental loading
Lehel44 commented on code in PR #6379: URL: https://github.com/apache/nifi/pull/6379#discussion_r978438425 ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java: ## @@ -187,61 +243,122 @@ private String getResponseBodyAsString(final ProcessContext context, final HttpR } } -private OutputStreamCallback parseHttpResponse(ProcessContext context, String endpoint, StateMap state, HttpResponseEntity response, AtomicInteger objectCountHolder) { +private OutputStreamCallback parseHttpResponse(final ProcessContext context, final HttpResponseEntity response, final AtomicInteger total, + final Map stateMap) { return out -> { try (final JsonParser jsonParser = JSON_FACTORY.createParser(response.body()); final JsonGenerator jsonGenerator = JSON_FACTORY.createGenerator(out, JsonEncoding.UTF8)) { +boolean isCursorAvailable = false; +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final String cursorKey = String.format(CURSOR_KEY_PATTERN, objectType); while (jsonParser.nextToken() != null) { +if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() +.equals("total")) { +jsonParser.nextToken(); +total.set(jsonParser.getIntValue()); +} if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() .equals("results")) { jsonParser.nextToken(); jsonGenerator.copyCurrentStructure(jsonParser); -objectCountHolder.incrementAndGet(); } final String fieldName = jsonParser.getCurrentName(); -if (CURSOR_PARAMETER.equals(fieldName)) { +if (PAGING_CURSOR.equals(fieldName)) { +isCursorAvailable = true; jsonParser.nextToken(); -Map newStateMap = new HashMap<>(state.toMap()); -newStateMap.put(endpoint, jsonParser.getText()); -updateState(context, newStateMap); +stateMap.put(cursorKey, jsonParser.getText()); break; } } +if (!isCursorAvailable) { +stateMap.put(cursorKey, NO_PAGING); +} } }; } -HttpUriBuilder getBaseUri(final ProcessContext 
context) { +URI getBaseUri(final ProcessContext context) { final String path = context.getProperty(OBJECT_TYPE).getValue(); return webClientServiceProvider.getHttpUriBuilder() .scheme(HTTPS) .host(API_BASE_URI) -.encodedPath(path); +.encodedPath(path + "/search") +.build(); } -private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri) { +private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri, final String filters) { +final JsonInputStreamConverter converter = new JsonInputStreamConverter(filters); return webClientServiceProvider.getWebClientService() -.get() +.post() .uri(uri) .header("Authorization", "Bearer " + accessToken) +.header("Content-Type", "application/json") +.body(converter.getInputStream(), OptionalLong.of(converter.getByteSize())) .retrieve(); } -private URI createUri(final ProcessContext context, final StateMap state) { -final String path = context.getProperty(OBJECT_TYPE).getValue(); -final HttpUriBuilder uriBuilder = getBaseUri(context); +String createIncrementalFilters(final ProcessContext context, final Map stateMap) { +final String limit = context.getProperty(RESULT_LIMIT).getValue(); +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final HubSpotObjectType hubSpotObjectType = objectTypeLookupMap.get(objectType); +final Long incrDelayMs = context.getProperty(INCREMENTAL_DELAY).asTimePeriod(TimeUnit.MILLISECONDS); +final String startIncrementalKey = String.format("start: %s", objectType); +final String endIncrementalKey = String.format("end: %s", objectType); +final String cursorKey = String.format(CURSOR_KEY_PATTERN, objectType); + +final ObjectNode root = OBJECT_MAPPER.createObjectNode(); +root.put("limit", limit); -final boolean isLimitSet =
[GitHub] [nifi] Lehel44 commented on a diff in pull request #6379: NIFI-10463: Fix GetHubSpot incremental loading
Lehel44 commented on code in PR #6379: URL: https://github.com/apache/nifi/pull/6379#discussion_r978434795 ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java: ## @@ -187,61 +243,122 @@ private String getResponseBodyAsString(final ProcessContext context, final HttpR } } -private OutputStreamCallback parseHttpResponse(ProcessContext context, String endpoint, StateMap state, HttpResponseEntity response, AtomicInteger objectCountHolder) { +private OutputStreamCallback parseHttpResponse(final ProcessContext context, final HttpResponseEntity response, final AtomicInteger total, + final Map stateMap) { return out -> { try (final JsonParser jsonParser = JSON_FACTORY.createParser(response.body()); final JsonGenerator jsonGenerator = JSON_FACTORY.createGenerator(out, JsonEncoding.UTF8)) { +boolean isCursorAvailable = false; +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final String cursorKey = String.format(CURSOR_KEY_PATTERN, objectType); while (jsonParser.nextToken() != null) { +if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() +.equals("total")) { +jsonParser.nextToken(); +total.set(jsonParser.getIntValue()); +} if (jsonParser.getCurrentToken() == JsonToken.FIELD_NAME && jsonParser.getCurrentName() .equals("results")) { jsonParser.nextToken(); jsonGenerator.copyCurrentStructure(jsonParser); -objectCountHolder.incrementAndGet(); } final String fieldName = jsonParser.getCurrentName(); -if (CURSOR_PARAMETER.equals(fieldName)) { +if (PAGING_CURSOR.equals(fieldName)) { +isCursorAvailable = true; jsonParser.nextToken(); -Map newStateMap = new HashMap<>(state.toMap()); -newStateMap.put(endpoint, jsonParser.getText()); -updateState(context, newStateMap); +stateMap.put(cursorKey, jsonParser.getText()); break; } } +if (!isCursorAvailable) { +stateMap.put(cursorKey, NO_PAGING); +} } }; } -HttpUriBuilder getBaseUri(final ProcessContext 
context) { +URI getBaseUri(final ProcessContext context) { final String path = context.getProperty(OBJECT_TYPE).getValue(); return webClientServiceProvider.getHttpUriBuilder() .scheme(HTTPS) .host(API_BASE_URI) -.encodedPath(path); +.encodedPath(path + "/search") +.build(); } -private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri) { +private HttpResponseEntity getHttpResponseEntity(final String accessToken, final URI uri, final String filters) { +final JsonInputStreamConverter converter = new JsonInputStreamConverter(filters); return webClientServiceProvider.getWebClientService() -.get() +.post() .uri(uri) .header("Authorization", "Bearer " + accessToken) +.header("Content-Type", "application/json") +.body(converter.getInputStream(), OptionalLong.of(converter.getByteSize())) .retrieve(); } -private URI createUri(final ProcessContext context, final StateMap state) { -final String path = context.getProperty(OBJECT_TYPE).getValue(); -final HttpUriBuilder uriBuilder = getBaseUri(context); +String createIncrementalFilters(final ProcessContext context, final Map stateMap) { +final String limit = context.getProperty(RESULT_LIMIT).getValue(); +final String objectType = context.getProperty(OBJECT_TYPE).getValue(); +final HubSpotObjectType hubSpotObjectType = objectTypeLookupMap.get(objectType); +final Long incrDelayMs = context.getProperty(INCREMENTAL_DELAY).asTimePeriod(TimeUnit.MILLISECONDS); +final String startIncrementalKey = String.format("start: %s", objectType); +final String endIncrementalKey = String.format("end: %s", objectType); +final String cursorKey = String.format(CURSOR_KEY_PATTERN, objectType); + +final ObjectNode root = OBJECT_MAPPER.createObjectNode(); +root.put("limit", limit); -final boolean isLimitSet =
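The start/end incremental keys in the diff above delimit a per-object-type time window, and the Incremental Delay property pulls the window's end back so records still being committed on the server side are not skipped between runs. A small illustrative sketch of that windowing, under assumed semantics (this is not the actual processor code):

```java
// Sketch: an incremental time window whose end is adjusted earlier by a
// configurable delay. Each run resumes from the previous run's end timestamp.
public class IncrementalWindowDemo {

    // Returns {startMs, endMs} for the next query window. previousEndMs is null
    // on the first run; delayMs is the assumed "incremental delay" safety margin.
    static long[] nextWindow(Long previousEndMs, long nowMs, long delayMs) {
        long start = previousEndMs == null ? 0L : previousEndMs; // resume where the last run ended
        long end = nowMs - delayMs;                              // leave a margin for late records
        return new long[] {start, end};
    }

    public static void main(String[] args) {
        long[] first = nextWindow(null, 1000L, 100L);
        System.out.println(first[0] + ".." + first[1]); // 0..900
        long[] second = nextWindow(first[1], 2000L, 100L);
        System.out.println(second[0] + ".." + second[1]); // 900..1900
    }
}
```

Because the next run starts exactly at the previous end, each record falls into exactly one window, which is the "queried exactly once" guarantee the property description promises.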
[GitHub] [nifi] ferencerdei commented on a diff in pull request #6434: NIFI-10493 MiNiFi: Add C2 handler for Transfer/Debug operation
ferencerdei commented on code in PR #6434: URL: https://github.com/apache/nifi/pull/6434#discussion_r978388502 ## c2/c2-client-bundle/c2-client-service/src/main/java/org/apache/nifi/c2/client/service/operation/DebugOperationHandler.java: ## @@ -0,0 +1,257 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.nifi.c2.client.service.operation; + +import static java.nio.file.Files.copy; +import static java.nio.file.Files.createTempDirectory; +import static java.nio.file.Files.deleteIfExists; +import static java.nio.file.Files.lines; +import static java.nio.file.Files.write; +import static java.util.Optional.empty; +import static java.util.Optional.ofNullable; +import static java.util.stream.Collectors.toList; +import static java.util.stream.Stream.concat; +import static org.apache.commons.compress.utils.IOUtils.closeQuietly; +import static org.apache.commons.lang3.StringUtils.EMPTY; +import static org.apache.commons.lang3.StringUtils.isBlank; +import static org.apache.nifi.c2.protocol.api.C2OperationState.OperationState.FULLY_APPLIED; +import static org.apache.nifi.c2.protocol.api.C2OperationState.OperationState.NOT_APPLIED; +import static org.apache.nifi.c2.protocol.api.OperandType.DEBUG; +import static org.apache.nifi.c2.protocol.api.OperationType.TRANSFER; + +import java.io.ByteArrayOutputStream; +import java.io.IOException; +import java.io.UncheckedIOException; +import java.nio.file.Files; +import java.nio.file.Path; +import java.nio.file.Paths; +import java.util.ArrayList; +import java.util.List; +import java.util.Optional; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.stream.Stream; +import org.apache.commons.compress.archivers.tar.TarArchiveEntry; +import org.apache.commons.compress.archivers.tar.TarArchiveOutputStream; +import org.apache.commons.compress.compressors.gzip.GzipCompressorOutputStream; +import org.apache.nifi.c2.client.api.C2Client; +import org.apache.nifi.c2.protocol.api.C2Operation; +import org.apache.nifi.c2.protocol.api.C2OperationAck; +import org.apache.nifi.c2.protocol.api.C2OperationState; +import org.apache.nifi.c2.protocol.api.C2OperationState.OperationState; +import org.apache.nifi.c2.protocol.api.OperandType; +import org.apache.nifi.c2.protocol.api.OperationType; 
+import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +public class DebugOperationHandler implements C2OperationHandler { + +private static final Logger LOG = LoggerFactory.getLogger(DebugOperationHandler.class); + +private static final String C2_CALLBACK_URL_NOT_FOUND = "C2 Server callback URL was not found in request"; +private static final String SUCCESSFUL_UPLOAD = "Debug bundle was uploaded successfully"; +private static final String UNABLE_TO_CREATE_BUNDLE = "Unable to create debug bundle"; + +static final String TARGET_ARG = "target"; +static final String NEW_LINE = "\n"; + +private final C2Client c2Client; +private final String configDir; +private final String logDir; +private final Predicate<String> contentFilter; + +private DebugOperationHandler(C2Client c2Client, String configDir, String logDir, Predicate<String> contentFilter) { +this.c2Client = c2Client; +this.configDir = configDir; +this.logDir = logDir; +this.contentFilter = contentFilter; +} + +public static DebugOperationHandler create(C2Client c2Client, String configDir, String logDir, Predicate<String> contentFilter) { +if (c2Client == null) { +throw new IllegalArgumentException("C2Client should not be null"); +} +if (isBlank(configDir)) { +throw new IllegalArgumentException("configDir should not be null or empty"); +} +if (isBlank(logDir)) { +throw new IllegalArgumentException("logDir should not be null or empty"); +} +if (contentFilter == null) { +throw new IllegalArgumentException("Exclude sensitive filter should not be null"); +} + +return new DebugOperationHandler(c2Client, configDir, logDir, contentFilter); +} + +@Override +public OperationType getOperationType() { +return
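The create(...) method in the diff above follows a common construction idiom: a private constructor paired with a static factory that validates every argument before the object can exist. A stripped-down sketch of the same pattern, with simplified names (this is not the actual DebugOperationHandler):

```java
// Sketch of the private-constructor + validating static factory pattern.
// Validation lives in one place, and no instance can be created with bad state.
public class ValidatingFactoryDemo {
    private final String configDir;
    private final String logDir;

    private ValidatingFactoryDemo(String configDir, String logDir) {
        this.configDir = configDir;
        this.logDir = logDir;
    }

    static ValidatingFactoryDemo create(String configDir, String logDir) {
        if (configDir == null || configDir.trim().isEmpty()) {
            throw new IllegalArgumentException("configDir should not be null or empty");
        }
        if (logDir == null || logDir.trim().isEmpty()) {
            throw new IllegalArgumentException("logDir should not be null or empty");
        }
        return new ValidatingFactoryDemo(configDir, logDir);
    }

    String configDir() { return configDir; }

    public static void main(String[] args) {
        ValidatingFactoryDemo ok = ValidatingFactoryDemo.create("conf", "logs");
        System.out.println(ok.configDir()); // conf
    }
}
```

Compared with a public constructor, the factory guarantees that invalid arguments fail fast with a descriptive message instead of surfacing later as a NullPointerException inside an operation.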
[GitHub] [nifi-minifi-cpp] martinzink commented on a diff in pull request #1412: MINIFICPP-1923 Refactor PutUDP to use asio
martinzink commented on code in PR #1412: URL: https://github.com/apache/nifi-minifi-cpp/pull/1412#discussion_r978385514 ## extensions/standard-processors/processors/PutUDP.cpp: ## @@ -107,48 +98,48 @@ void PutUDP::onTrigger(core::ProcessContext* context, core::ProcessSession* cons return; } - const auto nonthrowing_sockaddr_ntop = [](const sockaddr* const sa) -> std::string { -return utils::try_expression([sa] { return utils::net::sockaddr_ntop(sa); }).value_or("(n/a)"); + asio::io_context io_context; + + const auto resolve_hostname = [&io_context, &hostname, &port]() -> nonstd::expected<udp::resolver::results_type, std::error_code> { +udp::resolver resolver(io_context); +std::error_code error_code; +auto resolved_query = resolver.resolve(udp::v4(), hostname, port, error_code); +if (error_code) + return nonstd::make_unexpected(error_code); +return resolved_query; + }; + + const auto debug_log_resolved_endpoint = [&hostname, logger = this->logger_](const udp::resolver::results_type& resolved_query) -> udp::endpoint { +if (logger->should_log(core::logging::LOG_LEVEL::debug)) + core::logging::LOG_DEBUG(logger) << "resolved " << hostname << " to: " << resolved_query->endpoint(); +return resolved_query->endpoint(); Review Comment: I've reworked this part (and also ListenTCP, ListenSyslog) in https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/aba9c5bb58f317143a6301e1552087c8f6d4a5fa ## extensions/standard-processors/processors/PutUDP.cpp: ## @@ -107,48 +98,48 @@ void PutUDP::onTrigger(core::ProcessContext* context, core::ProcessSession* cons return; } - const auto nonthrowing_sockaddr_ntop = [](const sockaddr* const sa) -> std::string { -return utils::try_expression([sa] { return utils::net::sockaddr_ntop(sa); }).value_or("(n/a)"); + asio::io_context io_context; + + const auto resolve_hostname = [&io_context, &hostname, &port]() -> nonstd::expected<udp::resolver::results_type, std::error_code> { +udp::resolver resolver(io_context); +std::error_code error_code; +auto resolved_query = resolver.resolve(udp::v4(), hostname, port, error_code); +if (error_code) + return nonstd::make_unexpected(error_code);
+return resolved_query; + }; + + const auto debug_log_resolved_endpoint = [&hostname, logger = this->logger_](const udp::resolver::results_type& resolved_query) -> udp::endpoint { +if (logger->should_log(core::logging::LOG_LEVEL::debug)) + core::logging::LOG_DEBUG(logger) << "resolved " << hostname << " to: " << resolved_query->endpoint(); +return resolved_query->endpoint(); }; - const auto debug_log_resolved_names = [&, this](const addrinfo& names) -> decltype(auto) {
-if (logger_->should_log(core::logging::LOG_LEVEL::debug)) { - std::vector<std::string> names_vector; - for (const addrinfo* it = &names; it; it = it->ai_next) { -names_vector.push_back(nonthrowing_sockaddr_ntop(it->ai_addr)); - } - logger_->log_debug("resolved \'%s\' to: %s", - hostname, - names_vector | ranges::views::join(',') | ranges::to<std::string>()); -} -return names; + const auto send_data_to_endpoint = [&io_context, &data](const udp::endpoint& endpoint) -> nonstd::expected<void, std::error_code> { +std::error_code send_error; +udp::socket socket(io_context); +socket.open(udp::v4()); Review Comment: I've reworked this part (and also ListenTCP, ListenSyslog) in https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/aba9c5bb58f317143a6301e1552087c8f6d4a5fa
[GitHub] [nifi-minifi-cpp] martinzink commented on pull request #1412: MINIFICPP-1923 Refactor PutUDP to use asio
martinzink commented on PR #1412: URL: https://github.com/apache/nifi-minifi-cpp/pull/1412#issuecomment-1255926012 I've moved from IPv4-only mode to IPv6, and I also modified the ListenTCP and ListenSyslog processors. This effectively fixes a separate ticket as well. So could you review the latest [commit](https://github.com/apache/nifi-minifi-cpp/pull/1412/commits/aba9c5bb58f317143a6301e1552087c8f6d4a5fa) as well? @fgerlits @lordgamez
[jira] [Created] (NIFI-10535) MiNiFi C2 Service - Remove heartbeat, config request content type restriction
Ferenc Erdei created NIFI-10535: --- Summary: MiNiFi C2 Service - Remove heartbeat, config request content type restriction Key: NIFI-10535 URL: https://issues.apache.org/jira/browse/NIFI-10535 Project: Apache NiFi Issue Type: Improvement Components: MiNiFi Reporter: Ferenc Erdei h2. Background The MiNiFi C2 Server now supports the C2 protocol, but the /config/heartbeat endpoint is tied to the acceptValue, which causes incorrect behavior when using MiNiFi CPP. h2. Task We should find a way to serve the /config/heartbeat endpoint with different Accept header types than the /config endpoint, e.g. request heartbeats in JSON and configuration content in YAML (currently the only configuration format supported by agents).
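The improvement described above amounts to content negotiation: selecting the response representation from the request's Accept header instead of hard-coding one content type for both endpoints. A hypothetical sketch of that selection follows; the method name and media-type defaults are assumptions, not the actual C2 server code.

```java
// Sketch: answer heartbeats in JSON when the agent asks for it, while keeping
// YAML as the fallback for configuration content.
public class AcceptHeaderDemo {

    static String selectResponseType(String acceptHeader) {
        if (acceptHeader != null && acceptHeader.contains("application/json")) {
            return "application/json";
        }
        // Assumed default: YAML, the only configuration format agents support today.
        return "text/yml";
    }

    public static void main(String[] args) {
        System.out.println(selectResponseType("application/json")); // application/json
        System.out.println(selectResponseType(null));               // text/yml
    }
}
```

A real implementation would parse the full Accept header (quality values, wildcards) rather than a substring check, but the principle is the same: the two endpoints can share a handler while negotiating their representations independently.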
[GitHub] [nifi] turcsanyip commented on a diff in pull request #6379: NIFI-10463: Fix GetHubSpot incremental loading
turcsanyip commented on code in PR #6379: URL: https://github.com/apache/nifi/pull/6379#discussion_r977962708 ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/HubSpotObjectType.java: ## @@ -18,97 +18,107 @@ import org.apache.nifi.components.DescribedValue; +import static org.apache.nifi.processors.hubspot.IncrementalFieldType.HS_LAST_MODIFIED_DATE; +import static org.apache.nifi.processors.hubspot.IncrementalFieldType.LAST_MODIFIED_DATE; + public enum HubSpotObjectType implements DescribedValue { COMPANIES( "/crm/v3/objects/companies", "Companies", "In HubSpot, the companies object is a standard CRM object. Individual company records can be used to store information about businesses" + -" and organizations within company properties." +" and organizations within company properties.", +HS_LAST_MODIFIED_DATE ), CONTACTS( "/crm/v3/objects/contacts", "Contacts", "In HubSpot, contacts store information about individuals. From marketing automation to smart content, the lead-specific data found in" + -" contact records helps users leverage much of HubSpot's functionality." +" contact records helps users leverage much of HubSpot's functionality.", +LAST_MODIFIED_DATE ), DEALS( "/crm/v3/objects/deals", "Deals", "In HubSpot, a deal represents an ongoing transaction that a sales team is pursuing with a contact or company. It’s tracked through" + -" pipeline stages until won or lost." +" pipeline stages until won or lost.", +HS_LAST_MODIFIED_DATE ), FEEDBACK_SUBMISSIONS( Review Comment: Feedback Submissions API is currently in beta. I'd suggest removing it until it becomes GA. ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/resources/META-INF/NOTICE: ## @@ -0,0 +1,45 @@ +nifi-airtable-nar +Copyright 2014-2022 The Apache Software Foundation + +This product includes software developed at +The Apache Software Foundation (http://www.apache.org/). 
+ +** +Apache Software License v2 +** + + (ASLv2) Apache Commons Lang +The following NOTICE information applies: + Apache Commons Lang + Copyright 2001-2015 The Apache Software Foundation + + This product includes software from the Spring Framework, + under the Apache License 2.0 (see: StringUtils.containsWhitespace()) Review Comment: I cannot see `Apache Commons Lang` dependency in the HubSpot bundle. If this is correct, please remove this entry. ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java: ## @@ -75,6 +82,10 @@ @DefaultSettings(yieldDuration = "10 sec") public class GetHubSpot extends AbstractProcessor { +static final AllowableValue CREATE_DATE = new AllowableValue("createDate", "Create Date", "The time of the field was created"); +static final AllowableValue LAST_MODIFIED_DATE = new AllowableValue("lastModifiedDate", "Last Modified Date", +"The time of the field was last modified"); + Review Comment: Unused constants. ## nifi-nar-bundles/nifi-hubspot-bundle/nifi-hubspot-processors/src/main/java/org/apache/nifi/processors/hubspot/GetHubSpot.java: ## @@ -99,7 +110,39 @@ public class GetHubSpot extends AbstractProcessor { .description("The maximum number of results to request for each invocation of the Processor") .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) .required(false) -.addValidator(StandardValidators.POSITIVE_INTEGER_VALIDATOR) +.addValidator(StandardValidators.createLongValidator(1, 100, true)) +.build(); + +static final PropertyDescriptor IS_INCREMENTAL = new PropertyDescriptor.Builder() +.name("is-incremental") +.displayName("Incremental Loading") +.description("The processor can incrementally load the queried objects so that each object is queried exactly once." 
+ +" For each query, the processor queries objects which were created or modified after the previous run time" + +" but before the current time.") +.required(true) +.allowableValues("true", "false") +.defaultValue("false") +.build(); + +static final PropertyDescriptor INCREMENTAL_DELAY = new PropertyDescriptor.Builder() +.name("incremental-delay") +.displayName("Incremental Delay") +.description("The ending timestamp of the time window will be adjusted earlier by