[GitHub] [nifi] mtien-apache commented on pull request #5569: [NIFI-9423-NIFI-9429]: Show icon and tooltip for Parameters with leading and/or trailing whitespace
mtien-apache commented on pull request #5569: URL: https://github.com/apache/nifi/pull/5569#issuecomment-987593589 @mcgilman Thanks for starting a review! I've added more updates. Can you check again with the latest changes? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-9449) Implement login redirects to specified URLs
Takeshi Kaneko created NIFI-9449: Summary: Implement login redirects to specified URLs Key: NIFI-9449 URL: https://issues.apache.org/jira/browse/NIFI-9449 Project: Apache NiFi Issue Type: Improvement Components: Core UI Reporter: Takeshi Kaneko If an unauthorized user accesses the URL of a process group (e.g. {{https://nifi.example.com:8443/nifi/?processGroupId=344d9813-017b-1000--c6677e8b=}}), NiFi redirects to the login page. After a successful login, it redirects to the URL of the root process group (e.g. {{https://nifi.example.com:8443/nifi/}}), not the specified process group. I'd like to change the specification so that it redirects to the specified URL after login. Currently, NiFi provides the following user authentication: * Single User * Lightweight Directory Access Protocol (LDAP) * Kerberos * OpenID Connect * SAML * Apache Knox I'm going to implement login redirects to specified URLs for all of the above. -- This message was sent by Atlassian Jira (v8.20.1#820001)
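The proposed behavior can be sketched as follows. This is an illustrative outline only: the session key and method names are hypothetical, not NiFi's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the proposed redirect behavior: remember the URL the
// user originally requested before sending them to the login page, then prefer
// it over the root canvas after a successful login. The session map and all
// names here are hypothetical, not NiFi's actual implementation.
public class LoginRedirectSketch {

    public static final String DEFAULT_URL = "/nifi/";
    private static final String REDIRECT_KEY = "redirect-url";

    // Store the requested URL so it survives the login round trip.
    public static void rememberRequestedUrl(Map<String, String> session, String requestedUrl) {
        session.put(REDIRECT_KEY, requestedUrl);
    }

    // After a successful login, prefer the remembered URL over the root canvas.
    public static String postLoginTarget(Map<String, String> session) {
        final String remembered = session.remove(REDIRECT_KEY);
        return remembered != null ? remembered : DEFAULT_URL;
    }

    public static void main(String[] args) {
        final Map<String, String> session = new HashMap<>();
        rememberRequestedUrl(session, "/nifi/?processGroupId=example");
        System.out.println(postLoginTarget(session));
        System.out.println(postLoginTarget(session)); // falls back to root once consumed
    }
}
```

In a real implementation the remembered URL would live in the HTTP session or a cookie so it survives the redirect to the external identity provider (OIDC, SAML, Knox).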
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5574: NIFI-9397 Extending JettyWebServerClient authorization possibilities with custom setup
exceptionfactory commented on a change in pull request #5574: URL: https://github.com/apache/nifi/pull/5574#discussion_r763564415 ## File path: nifi-nar-bundles/nifi-websocket-bundle/nifi-websocket-services-jetty/src/main/java/org/apache/nifi/websocket/jetty/JettyWebSocketClient.java ## @@ -145,6 +145,19 @@ .defaultValue("US-ASCII") .build(); +public static final PropertyDescriptor CUSTOM_AUTH = new PropertyDescriptor.Builder() +.name("custom-authorization") +.displayName("Custom Authorization") +.description( +"If set tgether with \"User Name\" and \"User Password\", instead of using Basic" + +" Authentication the value of the property will be assigned to the \"Authorization\" HTTP header.") Review comment: Instead of describing the behavior this way, what do you think about extending the `customValidate()` method to ensure that setting this property excludes setting `User Name` and `User Password`, and vice versa? Checking that both this property and the other credentials properties are not set would help avoid potential confusion in the component configuration.
[GitHub] [nifi] exceptionfactory commented on a change in pull request #5574: NIFI-9397 Extending JettyWebServerClient authorization possibilities with custom setup
exceptionfactory commented on a change in pull request #5574: URL: https://github.com/apache/nifi/pull/5574#discussion_r763566902 ## File path: nifi-nar-bundles/nifi-websocket-bundle/nifi-websocket-services-jetty/src/main/java/org/apache/nifi/websocket/jetty/JettyWebSocketClient.java ## @@ -145,6 +145,19 @@ .defaultValue("US-ASCII") .build(); +public static final PropertyDescriptor CUSTOM_AUTH = new PropertyDescriptor.Builder() +.name("custom-authorization") +.displayName("Custom Authorization") +.description( +"If set tgether with \"User Name\" and \"User Password\", instead of using Basic" + Review comment: For additional documentation, it would be helpful to include the official standard reference [RFC 7235 Section 4.2](https://datatracker.ietf.org/doc/html/rfc7235#section-4.2). ```suggestion "Configures a custom HTTP Authorization Header as described in RFC 7235 Section 4.2. Setting a custom Authorization Header excludes configuring the User Name and User Password properties for Basic Authentication." + ``` ## File path: nifi-nar-bundles/nifi-websocket-bundle/nifi-websocket-services-jetty/src/main/java/org/apache/nifi/websocket/jetty/JettyWebSocketClient.java ## @@ -145,6 +145,19 @@ .defaultValue("US-ASCII") .build(); +public static final PropertyDescriptor CUSTOM_AUTH = new PropertyDescriptor.Builder() +.name("custom-authorization") +.displayName("Custom Authorization") +.description( +"If set tgether with \"User Name\" and \"User Password\", instead of using Basic" + +" Authentication the value of the property will be assigned to the \"Authorization\" HTTP header.") Review comment: Instead of describing the behavior this way, what do you think about extending the `customValidate()` method to ensure that setting this property excludes setting `User Name` and `User Password`, and vice versa? Checking that both this property and the other credentials properties are not set would help avoid potential confusion in the component configuration.
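The mutual-exclusion check suggested in the review above could look roughly like the following. This is a hedged sketch with hypothetical class and method names, not NiFi's actual `customValidate()` API.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the suggested customValidate() idea: report a
// validation problem when Custom Authorization is configured together with
// the Basic Authentication credential properties. Names are hypothetical,
// not NiFi's actual API.
public class CustomAuthValidationSketch {

    // Returns a list of validation problems; empty means the configuration is valid.
    public static List<String> validate(String customAuthorization, String userName, String userPassword) {
        final List<String> problems = new ArrayList<>();
        final boolean customSet = customAuthorization != null && !customAuthorization.isEmpty();
        final boolean basicSet = (userName != null && !userName.isEmpty())
                || (userPassword != null && !userPassword.isEmpty());
        if (customSet && basicSet) {
            problems.add("Custom Authorization cannot be set together with User Name or User Password");
        }
        return problems;
    }

    public static void main(String[] args) {
        // Conflicting configuration produces a problem; either mode alone is valid
        System.out.println(validate("Bearer abc123", "admin", null));
        System.out.println(validate("Bearer abc123", null, null));
    }
}
```

In NiFi itself this logic would return `ValidationResult` objects from `customValidate(ValidationContext)` rather than plain strings.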
[GitHub] [nifi] github-actions[bot] closed pull request #5230: NIFI-8924: Add the ability to use a record writer if listing strategy is set to 'No Tracking'
github-actions[bot] closed pull request #5230: URL: https://github.com/apache/nifi/pull/5230
[GitHub] [nifi] markap14 commented on a change in pull request #5518: NIFI-9333 add Geohash functions to Expression Language
markap14 commented on a change in pull request #5518: URL: https://github.com/apache/nifi/pull/5518#discussion_r763495592 ## File path: nifi-commons/nifi-expression-language/pom.xml ## @@ -108,5 +108,10 @@ commons-codec 1.14 + +ch.hsr +geohash +1.4.0 Review comment: There shouldn't actually be a need to include it in the LICENSE/NOTICE file. The library is licensed as Apache License 2.0 and does not provide a NOTICE file.
[GitHub] [nifi] joewitt commented on pull request #5578: NIFI-9093 GetSplunk Processor hangs
joewitt commented on pull request #5578: URL: https://github.com/apache/nifi/pull/5578#issuecomment-987294769 You leave the defaults because if there is no value supplied at all (an old flow would have no value), then we'd read in the default. Going forward, as people pull this in, they'll get the values set simply by using this processor. Ultimately not required, but setting a default means if users give us a value we'll use it. If they don't, we'll use the default. Thanks
[GitHub] [nifi] ahmedshaaban1999 commented on pull request #5578: NIFI-9093 GetSplunk Processor hangs
ahmedshaaban1999 commented on pull request #5578: URL: https://github.com/apache/nifi/pull/5578#issuecomment-987290538 Hi Joe, Are you talking about the default values used by the Splunk SDK? If there was no value provided explicitly, the processor waits indefinitely, hence the bug. That is the reason I made both properties required and added default values for them. However, I get what you are saying, but if I am going to remove the required flags then I should remove the default values as well, right?
[GitHub] [nifi] joewitt commented on pull request #5578: NIFI-9093 GetSplunk Processor hangs
joewitt commented on pull request #5578: URL: https://github.com/apache/nifi/pull/5578#issuecomment-987266416 Change makes sense. Please see the two comments changing required from true to false and ready to merge once done. Thanks
[GitHub] [nifi] joewitt commented on a change in pull request #5578: NIFI-9093 GetSplunk Processor hangs
joewitt commented on a change in pull request #5578: URL: https://github.com/apache/nifi/pull/5578#discussion_r763426629 ## File path: nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java ## @@ -105,6 +106,20 @@ .addValidator(StandardValidators.PORT_VALIDATOR) .defaultValue("8089") .build(); +public static final PropertyDescriptor CONNECT_TIMEOUT = new PropertyDescriptor.Builder() +.name("Connection Timeout") +.description("Max wait time for connection to the Splunk server.") +.required(true) +.defaultValue("5 secs") +.addValidator(StandardValidators.TIME_PERIOD_VALIDATOR) +.build(); +public static final PropertyDescriptor READ_TIMEOUT = new PropertyDescriptor.Builder() +.name("Read Timeout") +.description("Max wait time for response from the Splunk server.") +.required(true) Review comment: They should not be required and since we have a default they definitely don't need to be provided.
[GitHub] [nifi] joewitt commented on a change in pull request #5578: NIFI-9093 GetSplunk Processor hangs
joewitt commented on a change in pull request #5578: URL: https://github.com/apache/nifi/pull/5578#discussion_r763426571 ## File path: nifi-nar-bundles/nifi-splunk-bundle/nifi-splunk-processors/src/main/java/org/apache/nifi/processors/splunk/GetSplunk.java ## @@ -105,6 +106,20 @@ .addValidator(StandardValidators.PORT_VALIDATOR) .defaultValue("8089") .build(); +public static final PropertyDescriptor CONNECT_TIMEOUT = new PropertyDescriptor.Builder() +.name("Connection Timeout") +.description("Max wait time for connection to the Splunk server.") +.required(true) Review comment: They should not be required and since we have a default they definitely don't need to be provided.
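The rationale in the review above — a property with a default does not need to be required — can be sketched as follows. The default value mirrors the PR ("5 secs"); the resolution helper is illustrative, not NiFi's actual PropertyValue API.

```java
// Sketch of how an optional timeout property with a default resolves at
// runtime: an unset value (e.g. from an old flow saved before the property
// existed) falls back to the descriptor's default, so required(false) is safe.
// The helper method is hypothetical, not NiFi's actual API.
public class TimeoutDefaultSketch {

    public static final String CONNECT_TIMEOUT_DEFAULT = "5 secs";

    public static String effectiveConnectTimeout(String configuredValue) {
        return configuredValue != null ? configuredValue : CONNECT_TIMEOUT_DEFAULT;
    }

    public static void main(String[] args) {
        System.out.println(effectiveConnectTimeout(null));      // old flow, no value stored
        System.out.println(effectiveConnectTimeout("30 secs")); // explicitly configured
    }
}
```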
[GitHub] [nifi] simonbence commented on pull request #5530: NIFI-9341 Adding record reader for CEF events
simonbence commented on pull request #5530: URL: https://github.com/apache/nifi/pull/5530#issuecomment-987258188 > I gave this a try (I didn't look at the code though) and this is a really nice improvement! I'd recommend updating this PR to also include the changes added in #. Once done, I'll run more tests with some data I have on my systems. Thank you very much for bringing this to my attention, this is a valuable addition indeed! I incorporated it into the reader's functionality. Let me also highlight two smaller details: - With "standard extensions" the behaviour of the schema inference might be unexpected, but this is because the reader mimics the extension dictionary - In case of String-typed extensions the parser appears to work somewhat differently. I added tests regarding this, and I hope you do not mind that I extended your test with this discovery as well. Again, thank you for your invested time!
[GitHub] [nifi] ahmedshaaban1999 opened a new pull request #5578: NIFI-9093 GetSplunk Processor hangs
ahmedshaaban1999 opened a new pull request #5578: URL: https://github.com/apache/nifi/pull/5578 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR Fixes bug NIFI-9093 (GetSplunk Processor hangs) by adding two new properties, a ConnectTimeout and a ReadTimeout. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [x] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [x] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [x] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [x] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
[GitHub] [nifi] kevdoran commented on pull request #5458: NIFI-7865 amqp$header is splitted in the wrong way for "," and "}"
kevdoran commented on pull request #5458: URL: https://github.com/apache/nifi/pull/5458#issuecomment-987222571 I should be able to do a final round of review on this today or tomorrow, and assuming everything looks good, can merge.
[GitHub] [nifi] nandorsoma commented on pull request #5458: NIFI-7865 amqp$header is splitted in the wrong way for "," and "}"
nandorsoma commented on pull request #5458: URL: https://github.com/apache/nifi/pull/5458#issuecomment-987210197 > > Thanks for working through the feedback @sedadgn, the current version looks good. > > What do you think @kevdoran and @ottobackwards? > > Hello @nandorsoma, @ottobackwards and @kevdoran, Thank you for your feedback and reviews. We need this feature in our project. If everything works for you, can you merge? Thank you Hey @sedadgn! Unfortunately I haven't got permission to merge, therefore we need to wait for a committer to do that. Will try to ping someone!
[jira] [Updated] (NIFI-9448) Potential IllegalStateException when S2S HTTP Client Shutdown
[ https://issues.apache.org/jira/browse/NIFI-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-9448: --- Status: Patch Available (was: In Progress) > Potential IllegalStateException when S2S HTTP Client Shutdown > - > > Key: NIFI-9448 > URL: https://issues.apache.org/jira/browse/NIFI-9448 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework > Affects Versions: 1.15.0 > Reporter: David Handermann > Assignee: David Handermann > Priority: Major > > NiFi Site-to-Site communication over HTTP relies on > {{SiteToSiteRestApiClient}} to handle requests and responses using the Apache > HttpComponents library. The HttpComponents library maintains a connection > pool for processing HTTP transactions. > In the course of sending or receiving files, NiFi starts a background thread > to make periodic requests to the remote NiFi system in order to extend the > current transaction. The {{SiteToSiteRestApiClient}} stops the background > thread after completing a transaction. In some cases, the request to extend > the transaction can occur after the HttpComponents connection pool is > shut down, resulting in an {{IllegalStateException}}. The background > thread treats all exceptions as failure conditions and then attempts to close > the S2S HTTP client itself. This causes subsequent retry requests using the > same {{SiteToSiteRestApiClient}} to fail with the same > {{IllegalStateException}} indicating that the connection pool is shut down. > The behavior of the extend transaction command should be changed to avoid > closing the S2S HTTP client when encountering an > {{IllegalStateException}}. This approach will support the potential for > subsequent retries to work or fail based on existing timeout configuration > settings. > The following log messages provide a stack trace of the > {{IllegalStateException}} in the extend transaction command and subsequent > exception in the Remote Process Group Port connection. 
> {noformat} > WARN org.apache.nifi.remote.util.SiteToSiteRestApiClient: Failed to extend > transaction ttl > java.lang.IllegalStateException: Connection pool shut down > at org.apache.http.util.Asserts.check(Asserts.java:34) > at > org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:191) > at > org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:267) > at > org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176) > at > org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) > at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) > at > org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) > at > org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) > at > org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) > at > org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) > at > org.apache.nifi.remote.util.SiteToSiteRestApiClient.extendTransaction(SiteToSiteRestApiClient.java:1028) > at > org.apache.nifi.remote.util.SiteToSiteRestApiClient.lambda$startExtendingTtl$0(SiteToSiteRestApiClient.java:990) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > INFO org.apache.http.impl.execchain.RetryExec: I/O exception > (org.apache.http.NoHttpResponseException) caught when 
processing request: The > target server failed to respond > INFO org.apache.http.impl.execchain.RetryExec: Retrying request > ERROR org.apache.nifi.remote.StandardRemoteGroupPort: > RemoteGroupPort[name=REMOTE_PORT,targets=URL] failed to communicate with > remote NiFi instance due to java.lang.IllegalStateException: Connection pool > shut down > {noformat}
[GitHub] [nifi] exceptionfactory opened a new pull request #5577: NIFI-9448 Improve S2S HTTP Extend Transaction Exception Handling
exceptionfactory opened a new pull request #5577: URL: https://github.com/apache/nifi/pull/5577 Description of PR NIFI-9448 Improves Site-to-Site HTTP client processing for extend transaction exception handling. As part of sending or receiving files over an HTTP S2S connection, the `SiteToSiteRestApiClient` starts a background command to request extension of the current transaction on a periodic basis. In situations when the underlying HTTP connection pool throws an `IllegalStateException`, the background command attempts to close the parent `SiteToSiteRestApiClient`. This behavior short-circuits potential retry attempts that might otherwise be allowed according to the configured timeout settings. These changes include refactoring the background command to a separate `ExtendTransactionCommand` class for easier testing. With the main `SiteToSiteRestApiClient` already wrapping an Apache HttpComponents `HttpClient` instance and connection pool, the refactored `ExtendTransactionCommand` reuses the same client and connection pool as opposed to creating a new instance of `SiteToSiteRestApiClient`. Additional changes include a null check in `PeerSelector` to avoid a potential `NullPointerException` when the Peer Persistence property is not configured. This pull request also removes the `SiteToSiteClientIT` as it does not provide significant value beyond the existing unit tests. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? 
_Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [X] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [X] Have you written or updated unit tests to verify your changes? - [X] Have you verified that the full build is successful on JDK 8? - [X] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible.
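The exception-handling change described in this PR can be sketched as follows. This is a hedged outline with illustrative names (the interface, stub, and helper are invented for the example), not NiFi's actual `ExtendTransactionCommand` implementation.

```java
// Sketch of the behavior change: the periodic extend-transaction command
// treats IllegalStateException (connection pool shut down) as non-fatal and
// leaves the client open, so configured retry and timeout behavior can still
// apply. Other exceptions still close the client. Names are illustrative.
public class ExtendTransactionSketch {

    public interface TransactionClient {
        void extendTransaction() throws Exception;
        void close();
    }

    // Minimal stub used to exercise the command without a real HTTP client.
    public static class StubClient implements TransactionClient {
        private final Exception failure;
        public boolean closed = false;
        public StubClient(Exception failure) { this.failure = failure; }
        @Override public void extendTransaction() throws Exception {
            if (failure != null) throw failure;
        }
        @Override public void close() { closed = true; }
    }

    // Returns true when the transaction was extended successfully.
    public static boolean runExtendCommand(TransactionClient client) {
        try {
            client.extendTransaction();
            return true;
        } catch (IllegalStateException e) {
            // Pool already shut down: log and keep the client open for later retries
            System.err.println("Failed to extend transaction ttl: " + e.getMessage());
            return false;
        } catch (Exception e) {
            // Other failures still close the client
            client.close();
            return false;
        }
    }

    public static void main(String[] args) {
        StubClient poolShutDown = new StubClient(new IllegalStateException("Connection pool shut down"));
        runExtendCommand(poolShutDown);
        System.out.println("client closed after IllegalStateException: " + poolShutDown.closed);
    }
}
```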
[GitHub] [nifi] gresockj commented on pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
gresockj commented on pull request #5504: URL: https://github.com/apache/nifi/pull/5504#issuecomment-987165982 Ok, the latest commit appears to have fixed the unit test.
[jira] [Commented] (NIFI-9447) Fix SNMP test failures related to available UDP ports
[ https://issues.apache.org/jira/browse/NIFI-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454218#comment-17454218 ] ASF subversion and git services commented on NIFI-9447: --- Commit 1eb4264e3469069a665ac87868515c8b973e3dc3 in nifi's branch refs/heads/main from Lehel [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=1eb4264 ] NIFI-9447: Fix SNMP related tests to find available UDP ports instead of TCP This closes #5576. Signed-off-by: Tamas Palfy > Fix SNMP test failures related to available UDP ports > - > > Key: NIFI-9447 > URL: https://issues.apache.org/jira/browse/NIFI-9447 > Project: Apache NiFi > Issue Type: Bug > Reporter: Lehel Boér > Assignee: Lehel Boér > Priority: Major > Time Spent: 0.5h > Remaining Estimate: 0h > > There are SNMP test failures on the pipeline due to unavailable ports. > NetworkUtils::availablePort is deprecated; the tests should look for an > available UDP port.
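The core idea of the fix can be sketched as follows: binding a `DatagramSocket` to port 0 lets the OS assign a free UDP port, whereas probing with a TCP `ServerSocket` says nothing about UDP availability. The helper name is illustrative, not NiFi's `NetworkUtils` API.

```java
import java.net.DatagramSocket;
import java.net.SocketException;

// Sketch of finding an available UDP port for tests: binding a DatagramSocket
// to port 0 asks the OS for any free UDP port, which is then released when the
// socket closes. The method name is illustrative.
public class UdpPortSketch {

    public static int availableUdpPort() throws SocketException {
        try (DatagramSocket socket = new DatagramSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws SocketException {
        System.out.println("Available UDP port: " + availableUdpPort());
    }
}
```

Note that a port found this way can in principle be taken by another process before the test binds it, so such helpers are best-effort.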
[GitHub] [nifi] asfgit closed pull request #5576: NIFI-9447: Fix SNMP related tests to find UDP ports instead of TCP
asfgit closed pull request #5576: URL: https://github.com/apache/nifi/pull/5576
[GitHub] [nifi] tpalfy commented on pull request #5576: NIFI-9447: Fix SNMP related tests to find UDP ports instead of TCP
tpalfy commented on pull request #5576: URL: https://github.com/apache/nifi/pull/5576#issuecomment-987140892 LGTM merging to main
[jira] [Created] (NIFI-9448) Potential IllegalStateException when S2S HTTP Client Shutdown
David Handermann created NIFI-9448: -- Summary: Potential IllegalStateException when S2S HTTP Client Shutdown Key: NIFI-9448 URL: https://issues.apache.org/jira/browse/NIFI-9448 Project: Apache NiFi Issue Type: Bug Components: Core Framework Affects Versions: 1.15.0 Reporter: David Handermann Assignee: David Handermann NiFi Site-to-Site communication over HTTP relies on {{SiteToSiteRestApiClient}} to handle requests and responses using the Apache HttpComponents library. The HttpComponents library maintains a connection pool for processing HTTP transactions. In the course of sending or receiving files, NiFi starts a background thread to make periodic requests to the remote NiFi system in order to extend the current transaction. The {{SiteToSiteRestApiClient}} stops the background thread after completing a transaction. In some cases, the request to extend the transaction can occur after the HttpComponents connection pool is shut down, resulting in an {{IllegalStateException}}. The background thread treats all exceptions as failure conditions and then attempts to close the S2S HTTP client itself. This causes subsequent retry requests using the same {{SiteToSiteRestApiClient}} to fail with the same {{IllegalStateException}} indicating that the connection pool is shut down. The behavior of the extend transaction command should be changed to avoid closing the S2S HTTP client when encountering an {{IllegalStateException}}. This approach will support the potential for subsequent retries to work or fail based on existing timeout configuration settings. The following log messages provide a stack trace of the {{IllegalStateException}} in the extend transaction command and subsequent exception in the Remote Process Group Port connection. 
{noformat} WARN org.apache.nifi.remote.util.SiteToSiteRestApiClient: Failed to extend transaction ttl java.lang.IllegalStateException: Connection pool shut down at org.apache.http.util.Asserts.check(Asserts.java:34) at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:191) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:267) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.nifi.remote.util.SiteToSiteRestApiClient.extendTransaction(SiteToSiteRestApiClient.java:1028) at org.apache.nifi.remote.util.SiteToSiteRestApiClient.lambda$startExtendingTtl$0(SiteToSiteRestApiClient.java:990) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) 2021-11-10 08:17:10,352 INFO org.apache.http.impl.execchain.RetryExec: I/O exception (org.apache.http.NoHttpResponseException) caught when processing request: The target server failed to respond 
INFO org.apache.http.impl.execchain.RetryExec: Retrying request ERROR org.apache.nifi.remote.StandardRemoteGroupPort: RemoteGroupPort[name=REMOTE_PORT,targets=URL] failed to communicate with remote NiFi instance due to java.lang.IllegalStateException: Connection pool shut down {noformat} -- This message was sent by Atlassian Jira (v8.20.1#820001)
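The proposed change above can be sketched with plain JDK types. This is a hypothetical illustration (class and callback names are invented, not NiFi's actual code): the periodic extend-transaction task treats an {{IllegalStateException}} from a shut-down connection pool as non-fatal, so it no longer closes the shared client, and later retries succeed or fail based on the existing timeout settings.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch of the fix described in NIFI-9448: the background
// extend-TTL task skips a cycle on IllegalStateException instead of
// closing the shared Site-to-Site client.
class ExtendTransactionTask implements Runnable {
    private final Runnable extendTransaction; // may throw IllegalStateException
    private final Runnable closeClient;       // tears down the shared S2S client

    ExtendTransactionTask(final Runnable extendTransaction, final Runnable closeClient) {
        this.extendTransaction = extendTransaction;
        this.closeClient = closeClient;
    }

    @Override
    public void run() {
        try {
            extendTransaction.run();
        } catch (IllegalStateException e) {
            // Connection pool already shut down: do nothing, so the client
            // stays usable and later retries are governed by timeouts.
        } catch (RuntimeException e) {
            closeClient.run(); // any other failure still closes the client
        }
    }
}
```

The key point is the separate catch clause: before the change, both exception paths closed the client, poisoning every subsequent retry with the same {{IllegalStateException}}.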
[jira] [Updated] (NIFI-9448) Potential IllegalStateException when S2S HTTP Client Shutdown
[ https://issues.apache.org/jira/browse/NIFI-9448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Handermann updated NIFI-9448: --- Description: NiFi Site-to-Site communication over HTTP relies on {{SiteToSiteRestApiClient}} to handle requests and responses using the Apache HttpComponents library. The HttpComponents library maintains a connection pool for processing HTTP transactions. In the course of sending or receiving files, NiFi starts a background thread to make periodic requests to the remote NiFi system in order to extend the current transaction. The {{SiteToSiteRestApiClient}} stops the background thread after completing a transaction. In some cases, the request to extend the transaction can occur after the HttpComponents connection pool is shut down, resulting in an {{IllegalStateException}}. The background thread treats all exceptions as failure conditions and then attempts to close the S2S HTTP client itself. This causes subsequent retry requests using the same {{SiteToSiteRestApiClient}} to fail with the same {{IllegalStateException}} indicating that the connection pool is shut down. The behavior of the extend transaction command should be changed to avoid closing the S2S HTTP client when encountering an {{IllegalStateException}}. This approach will support the potential for subsequent retries to work or fail based on existing timeout configuration settings. The following log messages provide a stack trace of the {{IllegalStateException}} in the extend transaction command and subsequent exception in the Remote Process Group Port connection. 
{noformat} WARN org.apache.nifi.remote.util.SiteToSiteRestApiClient: Failed to extend transaction ttl java.lang.IllegalStateException: Connection pool shut down at org.apache.http.util.Asserts.check(Asserts.java:34) at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:191) at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.requestConnection(PoolingHttpClientConnectionManager.java:267) at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:176) at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185) at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89) at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111) at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83) at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:108) at org.apache.nifi.remote.util.SiteToSiteRestApiClient.extendTransaction(SiteToSiteRestApiClient.java:1028) at org.apache.nifi.remote.util.SiteToSiteRestApiClient.lambda$startExtendingTtl$0(SiteToSiteRestApiClient.java:990) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) INFO org.apache.http.impl.execchain.RetryExec: I/O exception (org.apache.http.NoHttpResponseException) caught when processing request: The target server failed to respond INFO 
org.apache.http.impl.execchain.RetryExec: Retrying request ERROR org.apache.nifi.remote.StandardRemoteGroupPort: RemoteGroupPort[name=REMOTE_PORT,targets=URL] failed to communicate with remote NiFi instance due to java.lang.IllegalStateException: Connection pool shut down {noformat} was: NiFi Site-to-Site communication over HTTP relies on {{SiteToSiteRestApiClient}} to handle requests and responses using the Apache HttpComponents library. The HttpComponents library maintains a connection pool for processing HTTP transactions. In the course of sending or receiving files, NiFi starts a background thread to make periodic requests to the remote NiFi system in order to extend the current transaction. The {{SiteToSiteRestApiClient}} stops the background thread after completing a transaction. In some cases, the request to extend the transaction can occur after the HttpComponents connection pool is shutdown, resulting in an {{{}IllegalStateException{}}}. The background thread treats all exceptions as failure conditions and then attempts to close the S2S HTTP client itself. This causes subsequent retry requests using the same
[GitHub] [nifi] joewitt commented on pull request #5576: NIFI-9447: Fix SNMP related tests to find UDP ports instead of TCP
joewitt commented on pull request #5576: URL: https://github.com/apache/nifi/pull/5576#issuecomment-987043088 +1 assuming the build check is green. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[GitHub] [nifi] Lehel44 opened a new pull request #5576: NIFI-9447: Fix SNMP related tests to find UDP ports instead of TCP
Lehel44 opened a new pull request #5576: URL: https://github.com/apache/nifi/pull/5576 Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR https://issues.apache.org/jira/browse/NIFI-9447 In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? 
### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (NIFI-9447) Fix SNMP test failures related to available UDP ports
[ https://issues.apache.org/jira/browse/NIFI-9447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lehel Boér reassigned NIFI-9447: Assignee: Lehel Boér > Fix SNMP test failures related to available UDP ports > - > > Key: NIFI-9447 > URL: https://issues.apache.org/jira/browse/NIFI-9447 > Project: Apache NiFi > Issue Type: Bug >Reporter: Lehel Boér >Assignee: Lehel Boér >Priority: Major > > There are SNMP test failure on the pipeline due to unavailable ports. > NetworkUtils::availablePort is deprecated, the test should look for an > available UDP port. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (NIFI-9447) Fix SNMP test failures related to available UDP ports
Lehel Boér created NIFI-9447: Summary: Fix SNMP test failures related to available UDP ports Key: NIFI-9447 URL: https://issues.apache.org/jira/browse/NIFI-9447 Project: Apache NiFi Issue Type: Bug Reporter: Lehel Boér There are SNMP test failures on the pipeline due to unavailable ports. NetworkUtils::availablePort is deprecated; the tests should look for an available UDP port.
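The distinction matters because a free TCP port is not guaranteed to be free for UDP. A minimal sketch of finding a free UDP port with only the JDK (not NiFi's actual NetworkUtils replacement):

```java
import java.io.IOException;
import java.net.DatagramSocket;

class UdpPortFinder {
    // Binding to port 0 asks the OS to assign any free port. For UDP-based
    // tests such as SNMP, a DatagramSocket must be used: checking with a
    // TCP ServerSocket can report a port that is already bound for UDP.
    static int availableUdpPort() {
        try (DatagramSocket socket = new DatagramSocket(0)) {
            return socket.getLocalPort();
        } catch (IOException e) {
            throw new IllegalStateException("No available UDP port", e);
        }
    }
}
```

Note the socket is closed before the port number is returned, so a small race remains between probing and the test binding the port; probing the correct protocol at least removes the TCP/UDP mismatch.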
[jira] [Updated] (NIFI-9442) When connection is deleted and source is funnel, should require that components upstream of funnel are stopped
[ https://issues.apache.org/jira/browse/NIFI-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-9442: - Labels: load-balanced-connections (was: ) > When connection is deleted and source is funnel, should require that > components upstream of funnel are stopped > -- > > Key: NIFI-9442 > URL: https://issues.apache.org/jira/browse/NIFI-9442 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Labels: load-balanced-connections > Fix For: 1.16.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This is important for a case in which we have a cluster where two processors > (for example) are connected with a funnel in between. In this case, if a user > deletes the connection between the funnel and its destination, the web > request that is made will be done in two phases: (1) Verify that the request > is valid and (2) Delete the connection. But if we don't recursively ensure > that the upstream components are stopped, we could have all nodes in the > cluster verify the request is valid in the first phase. But before the second > phase occurs, one node may now have data within the Connection, so the second > phase (the delete) will fail. In that situation, the node's dataflow will > differ from the rest of the cluster, and the node will be kicked out of the > cluster. To avoid this, we simply ensure that the source is stopped, and if > the source is a funnel (which can't be stopped) that its sources are stopped. -- This message was sent by Atlassian Jira (v8.20.1#820001)
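The recursive stop check described above can be sketched as follows. This is a hypothetical model (interface and method names invented, not NiFi's framework classes): since a funnel has no run state of its own, "source is stopped" recurses into the funnel's own sources before the connection delete is allowed.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

class SourceStopCheck {
    interface Component {
        boolean isFunnel();
        boolean isStopped();
        List<Component> getSources();
    }

    // A connection may only be deleted when its source is stopped; a funnel
    // cannot be stopped itself, so require all of its upstream components
    // (recursively, to handle chained funnels) to be stopped instead.
    static boolean upstreamStopped(final Component source) {
        if (source.isFunnel()) {
            for (Component upstream : source.getSources()) {
                if (!upstreamStopped(upstream)) {
                    return false;
                }
            }
            return true;
        }
        return source.isStopped();
    }

    static Component processor(final boolean stopped) {
        return component(false, stopped, Collections.emptyList());
    }

    static Component funnel(final Component... sources) {
        return component(true, false, Arrays.asList(sources));
    }

    private static Component component(final boolean funnel, final boolean stopped, final List<Component> sources) {
        return new Component() {
            public boolean isFunnel() { return funnel; }
            public boolean isStopped() { return stopped; }
            public List<Component> getSources() { return sources; }
        };
    }
}
```

With this check, every node in the cluster rejects the delete in the first (verification) phase whenever any component feeding the funnel is still running, so no node can diverge during the second phase.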
[jira] [Updated] (NIFI-9442) When connection is deleted and source is funnel, should require that components upstream of funnel are stopped
[ https://issues.apache.org/jira/browse/NIFI-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-9442: - Labels: (was: load-balanced-connections) > When connection is deleted and source is funnel, should require that > components upstream of funnel are stopped > -- > > Key: NIFI-9442 > URL: https://issues.apache.org/jira/browse/NIFI-9442 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.16.0 > > Time Spent: 20m > Remaining Estimate: 0h > > This is important for a case in which we have a cluster where two processors > (for example) are connected with a funnel in between. In this case, if a user > deletes the connection between the funnel and its destination, the web > request that is made will be done in two phases: (1) Verify that the request > is valid and (2) Delete the connection. But if we don't recursively ensure > that the upstream components are stopped, we could have all nodes in the > cluster verify the request is valid in the first phase. But before the second > phase occurs, one node may now have data within the Connection, so the second > phase (the delete) will fail. In that situation, the node's dataflow will > differ from the rest of the cluster, and the node will be kicked out of the > cluster. To avoid this, we simply ensure that the source is stopped, and if > the source is a funnel (which can't be stopped) that its sources are stopped. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (NIFI-4503) Connection to support load-balancing strategies
[ https://issues.apache.org/jira/browse/NIFI-4503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-4503. -- Resolution: Fixed > Connection to support load-balancing strategies > --- > > Key: NIFI-4503 > URL: https://issues.apache.org/jira/browse/NIFI-4503 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Haimo Liu >Priority: Major > > As an operator, I want to be able to create new list/fetch flows encapsulated > within a single process group, that automatically distribute the fetch > operations across the nodes in my NiFi cluster, so that I can manage each > flow independently from one another and without the need for orchestration > with remote process groups. > It would be great to add the ability for on any given connection have a user > be able to select ‘auto balance across cluster’ and it will automatically > take care of distributing the objects across the cluster. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[GitHub] [nifi] OlivierDupre opened a new pull request #5575: NIFI-9445: Fixed some minor formatting errors and typos in developer …
OlivierDupre opened a new pull request #5575: URL: https://github.com/apache/nifi/pull/5575 Description of PR A couple of formatting mistakes and one typo in the code formatting have been fixed. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [X] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? - [ ] If adding new Properties, have you added `.displayName` in addition to `.name` (programmatic access) for each of the new properties? ### For documentation related changes: - [X] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-9446) Update User Guide to explain component verification
Mark Payne created NIFI-9446: Summary: Update User Guide to explain component verification Key: NIFI-9446 URL: https://issues.apache.org/jira/browse/NIFI-9446 Project: Apache NiFi Issue Type: Task Components: Documentation Website Reporter: Mark Payne The User Guide needs to be updated to show how component verification can be used, to explain the concept, and to explain when it does and does not get triggered.
[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
ChrisSamo632 commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r763171291 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/wag/InvokeAWSGatewayApi.java ## @@ -380,4 +355,87 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro } } } + +@Override +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); + +final String method = context.getProperty(PROP_METHOD).getValue(); +final String endpoint = context.getProperty(PROP_AWS_GATEWAY_API_ENDPOINT).getValue(); +final String resource = context.getProperty(PROP_RESOURCE_NAME).getValue(); +try { +final GenericApiGatewayClient client = getConfiguration(context).getClient(); + +final GatewayResponse gatewayResponse = invokeGateway(client, context, null, null, attributes, verificationLogger); Review comment: Happy that this isn't an issue any longer (see https://github.com/apache/nifi/pull/5504#discussion_r763169549) - the change to have the Idempotent methods list was definitely worthwhile I think and I'm happy that the triggering of this check will be at the behest of the user that's configuring the processor (and so they should know whether their service can be "pinged" in this manner) -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
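The "idempotent methods list" mentioned in the review can be illustrated with a small, hypothetical gate (names invented; the actual PR code differs): configuration verification only issues a live request against the gateway when the configured HTTP method is idempotent per RFC 7231, so a verification "ping" cannot mutate remote state more than a deliberate repeat of the same request would.

```java
import java.util.Locale;
import java.util.Set;

class VerificationGate {
    // Idempotent HTTP methods as defined by RFC 7231 section 4.2.2.
    private static final Set<String> IDEMPOTENT_METHODS =
            Set.of("GET", "HEAD", "OPTIONS", "TRACE", "PUT", "DELETE");

    // Only allow a live verification request for idempotent methods; for
    // POST and PATCH the verification step would be skipped instead.
    static boolean safeToVerify(final String method) {
        return method != null
                && IDEMPOTENT_METHODS.contains(method.toUpperCase(Locale.ROOT));
    }
}
```

As the reviewer notes, the user triggering verification configures the method themselves, so they know whether their service tolerates being "pinged" this way; the gate just prevents accidental non-idempotent side effects.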
[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
ChrisSamo632 commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r763169549 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/dynamodb/GetDynamoDB.java ## @@ -100,42 +104,75 @@ } @Override -public void onTrigger(final ProcessContext context, final ProcessSession session) { -List flowFiles = session.get(context.getProperty(BATCH_SIZE).evaluateAttributeExpressions().asInteger()); -if (flowFiles == null || flowFiles.size() == 0) { -return; -} +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); -Map keysToFlowFileMap = new HashMap<>(); +final TableKeysAndAttributes tableKeysAndAttributes = getTableKeysAndAttributes(context, attributes); final String table = context.getProperty(TABLE).evaluateAttributeExpressions().getValue(); -TableKeysAndAttributes tableKeysAndAttributes = new TableKeysAndAttributes(table); - -final String hashKeyName = context.getProperty(HASH_KEY_NAME).evaluateAttributeExpressions().getValue(); -final String rangeKeyName = context.getProperty(RANGE_KEY_NAME).evaluateAttributeExpressions().getValue(); final String jsonDocument = context.getProperty(JSON_DOCUMENT).evaluateAttributeExpressions().getValue(); -for (FlowFile flowFile : flowFiles) { -final Object hashKeyValue = getValue(context, HASH_KEY_VALUE_TYPE, HASH_KEY_VALUE, flowFile); -final Object rangeKeyValue = getValue(context, RANGE_KEY_VALUE_TYPE, RANGE_KEY_VALUE, flowFile); +if (tableKeysAndAttributes.getPrimaryKeys().isEmpty()) { -if ( ! 
isHashKeyValueConsistent(hashKeyName, hashKeyValue, session, flowFile)) { -continue; -} +results.add(new ConfigVerificationResult.Builder() +.outcome(Outcome.SKIPPED) +.verificationStepName("Get DynamoDB Items") +.explanation(String.format("Skipped getting DynamoDB items because no primary keys would be included in retrieval")) +.build()); +} else { +try { +final DynamoDB dynamoDB = getDynamoDB(getConfiguration(context).getClient()); +int totalCount = 0; +int jsonDocumentCount = 0; -if ( ! isRangeKeyValueConsistent(rangeKeyName, rangeKeyValue, session, flowFile) ) { -continue; -} +BatchGetItemOutcome result = dynamoDB.batchGetItem(tableKeysAndAttributes); Review comment: I understand Verification better now thanks to a [Slack discussion](https://apachenifi.slack.com/archives/C0L9S92JY/p1638744006088000), so this is less of a concern as you're right that the user has the control over when this happens and against specific bits of data... so this check seems fine now, thanks for explaining! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (NIFI-9445) Fix some minor formatting issues in developer-guide
Olivier Dupré created NIFI-9445: --- Summary: Fix some minor formatting issues in developer-guide Key: NIFI-9445 URL: https://issues.apache.org/jira/browse/NIFI-9445 Project: Apache NiFi Issue Type: Improvement Components: Documentation Website Reporter: Olivier Dupré I have noticed some minor formatting issues in the developer guide. In particular, some code formatting is missing or mistyped, e.g. @RequiresInstanceClassLoading, `cloneAncestorResources`, ...
[GitHub] [nifi] mcgilman commented on pull request #5569: [NIFI-9423-NIFI-9429]: Show icon and tooltip for Parameters with leading and/or trailing whitespace
mcgilman commented on pull request #5569: URL: https://github.com/apache/nifi/pull/5569#issuecomment-986902780 Will review...
[GitHub] [nifi] gresockj commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
gresockj commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r763114909 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/dynamodb/GetDynamoDB.java ## @@ -100,42 +104,75 @@ } @Override -public void onTrigger(final ProcessContext context, final ProcessSession session) { -List flowFiles = session.get(context.getProperty(BATCH_SIZE).evaluateAttributeExpressions().asInteger()); -if (flowFiles == null || flowFiles.size() == 0) { -return; -} +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); -Map keysToFlowFileMap = new HashMap<>(); +final TableKeysAndAttributes tableKeysAndAttributes = getTableKeysAndAttributes(context, attributes); final String table = context.getProperty(TABLE).evaluateAttributeExpressions().getValue(); -TableKeysAndAttributes tableKeysAndAttributes = new TableKeysAndAttributes(table); - -final String hashKeyName = context.getProperty(HASH_KEY_NAME).evaluateAttributeExpressions().getValue(); -final String rangeKeyName = context.getProperty(RANGE_KEY_NAME).evaluateAttributeExpressions().getValue(); final String jsonDocument = context.getProperty(JSON_DOCUMENT).evaluateAttributeExpressions().getValue(); -for (FlowFile flowFile : flowFiles) { -final Object hashKeyValue = getValue(context, HASH_KEY_VALUE_TYPE, HASH_KEY_VALUE, flowFile); -final Object rangeKeyValue = getValue(context, RANGE_KEY_VALUE_TYPE, RANGE_KEY_VALUE, flowFile); +if (tableKeysAndAttributes.getPrimaryKeys().isEmpty()) { -if ( ! 
isHashKeyValueConsistent(hashKeyName, hashKeyValue, session, flowFile)) { -continue; -} +results.add(new ConfigVerificationResult.Builder() +.outcome(Outcome.SKIPPED) +.verificationStepName("Get DynamoDB Items") +.explanation(String.format("Skipped getting DynamoDB items because no primary keys would be included in retrieval")) +.build()); +} else { +try { +final DynamoDB dynamoDB = getDynamoDB(getConfiguration(context).getClient()); +int totalCount = 0; +int jsonDocumentCount = 0; -if ( ! isRangeKeyValueConsistent(rangeKeyName, rangeKeyValue, session, flowFile) ) { -continue; -} +BatchGetItemOutcome result = dynamoDB.batchGetItem(tableKeysAndAttributes); Review comment: Ok, I did some testing and I think this is still a reasonable implementation of the verification. When presented with the verification screen, the user is prompted to fill in a single hash key value and a single range key value (assuming they keep the default EL expressions for those fields). Since these will look up an item by primary key, the result of verification should be at most one item returned. However, the extra testing here helped me find a NPE and improve the verification messaging in cases where the key values are inconsistent with the table key configuration. I've pushed the changes with these improvements. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (NIFI-9440) Allow Bulletin level to be configurable for Controller Services
[ https://issues.apache.org/jira/browse/NIFI-9440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nissim Shiman reassigned NIFI-9440: --- Assignee: Nissim Shiman > Allow Bulletin level to be configurable for Controller Services > --- > > Key: NIFI-9440 > URL: https://issues.apache.org/jira/browse/NIFI-9440 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework, Core UI >Affects Versions: 1.15.0 >Reporter: Nissim Shiman >Assignee: Nissim Shiman >Priority: Major > > Processors have bulletin level configurable on the "Settings" tabs, but there > is no similar functionality exposed for controller services. > Allow controller services this same level of bulletin level control. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Created] (NIFI-9444) Add reconnect property to JettyWebsocketClientService
Lehel Boér created NIFI-9444: Summary: Add reconnect property to JettyWebsocketClientService Key: NIFI-9444 URL: https://issues.apache.org/jira/browse/NIFI-9444 Project: Apache NiFi Issue Type: Improvement Reporter: Lehel Boér
[GitHub] [nifi] exceptionfactory commented on pull request #5541: NIFI-9397 Adding sensitive dynamic property support for JettyWebSocketClient
exceptionfactory commented on pull request #5541: URL: https://github.com/apache/nifi/pull/5541#issuecomment-986822829 You're welcome @simonbence! Thanks for the reply and I will look for your new PR with an alternative approach.
[GitHub] [nifi] simonbence opened a new pull request #5574: NIFI-9397 Extending JettyWebServerClient authorization possibilities with custom setup
simonbence opened a new pull request #5574: URL: https://github.com/apache/nifi/pull/5574 [NIFI-9397](https://issues.apache.org/jira/browse/NIFI-9397) Adding a new property in order to open the possibility of using custom authorization, for example using an API key as a means of auth. Thank you for submitting a contribution to Apache NiFi. Please provide a short description of the PR here: Description of PR _Enables X functionality; fixes bug NIFI-XXXX._ In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with **NIFI-XXXX** where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically `main`)? - [ ] Is your initial contribution a single, squashed commit? _Additional commits in response to PR reviewer feedback should be made on this branch and pushed to allow change tracking. Do not `squash` or use `--force` when pushing to allow for clean monitoring of changes._ ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via `mvn -Pcontrib-check clean install` at the root `nifi` folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] Have you verified that the full build is successful on JDK 8? - [ ] Have you verified that the full build is successful on JDK 11? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the `LICENSE` file, including the main `LICENSE` file under `nifi-assembly`? - [ ] If applicable, have you updated the `NOTICE` file, including the main `NOTICE` file found under `nifi-assembly`? 
- [ ] If adding new Properties, have you added `.displayName` in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check GitHub Actions CI for build issues and submit an update to your PR as soon as possible. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: issues-unsubscr...@nifi.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
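The effect of such a property can be sketched with plain string logic. This is a hypothetical illustration (method and property names invented, not the PR's actual code): when the custom authorization property is set, its value is used verbatim as the Authorization header, which admits schemes like API keys or bearer tokens; otherwise a service could fall back to HTTP Basic credentials.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

class WebSocketAuth {
    // Resolve the Authorization header for the WebSocket handshake: a
    // configured custom value wins; otherwise build Basic credentials.
    static String authorizationHeader(final String customAuthorization,
                                      final String user, final String password) {
        if (customAuthorization != null && !customAuthorization.isEmpty()) {
            return customAuthorization; // e.g. "ApiKey abc123" or "Bearer <token>"
        }
        final String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }
}
```

Treating the value as opaque is what makes the approach general: the controller service does not need to understand the scheme, only to attach the header to the upgrade request.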
[jira] [Updated] (NIFI-9397) Add custom authorization to JettyWebSocketClient
[ https://issues.apache.org/jira/browse/NIFI-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Bence updated NIFI-9397: -- Description: Adding a secure property which -in case of being set- is used as the value for the Authorization header. By this, services using different kinds of auth (like API keys) can be reached using the controller service. -Adding sensitive dynamic property support in the same manner it was implemented in the [DBCPConnectionPool|https://issues.apache.org/jira/browse/NIFI-8047]- (This approach is not secure as kindly highlighted by [~exceptionfactory]) was:Adding sensitive dynamic property support in the same manner it was implemented in the [DBCPConnectionPool|https://issues.apache.org/jira/browse/NIFI-8047] > Add custom authorization to JettyWebSocketClient > > > Key: NIFI-9397 > URL: https://issues.apache.org/jira/browse/NIFI-9397 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Reporter: Simon Bence >Assignee: Simon Bence >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Adding a secure property which -in case of being set- is used as the value for > the Authorization header. By this, services using different kinds of auth (like > API keys) can be reached using the controller service. > -Adding sensitive dynamic property support in the same manner it was > implemented in the > [DBCPConnectionPool|https://issues.apache.org/jira/browse/NIFI-8047]- (This > approach is not secure as kindly highlighted by [~exceptionfactory]) -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (NIFI-9397) Add custom authorization to JettyWebSocketClient
[ https://issues.apache.org/jira/browse/NIFI-9397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Simon Bence updated NIFI-9397: -- Summary: Add custom authorization to JettyWebSocketClient (was: Support Sensitive Dynamic Properties in JettyWebSocketClient) > Add custom authorization to JettyWebSocketClient > > > Key: NIFI-9397 > URL: https://issues.apache.org/jira/browse/NIFI-9397 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions > Reporter: Simon Bence > Assignee: Simon Bence > Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Adding sensitive dynamic property support in the same manner it was > implemented in the > [DBCPConnectionPool|https://issues.apache.org/jira/browse/NIFI-8047]
[jira] [Updated] (NIFI-9443) Update NiFi Registry extension model based on NAR plugin 1.3.3
[ https://issues.apache.org/jira/browse/NIFI-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joe Gresock updated NIFI-9443: -- Fix Version/s: 1.16.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Update NiFi Registry extension model based on NAR plugin 1.3.3 > -- > > Key: NIFI-9443 > URL: https://issues.apache.org/jira/browse/NIFI-9443 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Bryan Bende > Assignee: Bryan Bende > Priority: Major > Fix For: 1.16.0 > > Time Spent: 20m > Remaining Estimate: 0h > > Update the extension manifest object model based on new fields available from > NAR plugin 1.3.3.
[GitHub] [nifi] asfgit closed pull request #5570: NIFI-9443 Update extension manifest data model based on NAR plugin 1.3.3
asfgit closed pull request #5570: URL: https://github.com/apache/nifi/pull/5570
[jira] [Commented] (NIFI-9443) Update NiFi Registry extension model based on NAR plugin 1.3.3
[ https://issues.apache.org/jira/browse/NIFI-9443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17454027#comment-17454027 ] ASF subversion and git services commented on NIFI-9443: --- Commit 0f027743d1bbf0abd94e3b3a953e180e9d40176c in nifi's branch refs/heads/main from Bryan Bende [ https://gitbox.apache.org/repos/asf?p=nifi.git;h=0f02774 ] NIFI-9443 Update NAR plugin to 1.3.3 and update data model for extension manifest to capture new fields Signed-off-by: Joe Gresock This closes #5570. > Update NiFi Registry extension model based on NAR plugin 1.3.3 > -- > > Key: NIFI-9443 > URL: https://issues.apache.org/jira/browse/NIFI-9443 > Project: Apache NiFi > Issue Type: Improvement > Reporter: Bryan Bende > Assignee: Bryan Bende > Priority: Major > Time Spent: 10m > Remaining Estimate: 0h > > Update the extension manifest object model based on new fields available from > NAR plugin 1.3.3.
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1223: MINIFICPP-1223 Only reload script file in ExecutePythonScript when requested in property
lordgamez commented on a change in pull request #1223: URL: https://github.com/apache/nifi-minifi-cpp/pull/1223#discussion_r762975275 ## File path: PROCESSORS.md ## @@ -421,8 +421,9 @@ In the list below, the names of required properties appear in bold. Any other pr | Name | Default Value | Allowable Values | Description | | - | - | - | - | -|Script File|||Path to script file to execute. Only one of Script File or Script Body may be used| +|**Reload on Script Change**|false||If true and Script File property is used, then script file will be reloaded if it has changed, otherwise the first loaded version will be used at all times.| Review comment: The PythonScriptEngine's eval function was not called after reading the file's content.
[GitHub] [nifi-minifi-cpp] szaszm commented on a change in pull request #1223: MINIFICPP-1223 Only reload script file in ExecutePythonScript when requested in property
szaszm commented on a change in pull request #1223: URL: https://github.com/apache/nifi-minifi-cpp/pull/1223#discussion_r762962576 ## File path: PROCESSORS.md ## @@ -421,8 +421,9 @@ In the list below, the names of required properties appear in bold. Any other pr | Name | Default Value | Allowable Values | Description | | - | - | - | - | -|Script File|||Path to script file to execute. Only one of Script File or Script Body may be used| +|**Reload on Script Change**|false||If true and Script File property is used, then script file will be reloaded if it has changed, otherwise the first loaded version will be used at all times.| Review comment: Could you elaborate on why it was never reloaded? I thought it was reloaded whenever the last write time increased.
[GitHub] [nifi] gresockj commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
gresockj commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r762962443 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/dynamodb/GetDynamoDB.java ## @@ -100,42 +104,75 @@ } @Override -public void onTrigger(final ProcessContext context, final ProcessSession session) { -List flowFiles = session.get(context.getProperty(BATCH_SIZE).evaluateAttributeExpressions().asInteger()); -if (flowFiles == null || flowFiles.size() == 0) { -return; -} +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); -Map keysToFlowFileMap = new HashMap<>(); +final TableKeysAndAttributes tableKeysAndAttributes = getTableKeysAndAttributes(context, attributes); final String table = context.getProperty(TABLE).evaluateAttributeExpressions().getValue(); -TableKeysAndAttributes tableKeysAndAttributes = new TableKeysAndAttributes(table); - -final String hashKeyName = context.getProperty(HASH_KEY_NAME).evaluateAttributeExpressions().getValue(); -final String rangeKeyName = context.getProperty(RANGE_KEY_NAME).evaluateAttributeExpressions().getValue(); final String jsonDocument = context.getProperty(JSON_DOCUMENT).evaluateAttributeExpressions().getValue(); -for (FlowFile flowFile : flowFiles) { -final Object hashKeyValue = getValue(context, HASH_KEY_VALUE_TYPE, HASH_KEY_VALUE, flowFile); -final Object rangeKeyValue = getValue(context, RANGE_KEY_VALUE_TYPE, RANGE_KEY_VALUE, flowFile); +if (tableKeysAndAttributes.getPrimaryKeys().isEmpty()) { -if ( ! 
isHashKeyValueConsistent(hashKeyName, hashKeyValue, session, flowFile)) { -continue; -} +results.add(new ConfigVerificationResult.Builder() +.outcome(Outcome.SKIPPED) +.verificationStepName("Get DynamoDB Items") +.explanation(String.format("Skipped getting DynamoDB items because no primary keys would be included in retrieval")) +.build()); +} else { +try { +final DynamoDB dynamoDB = getDynamoDB(getConfiguration(context).getClient()); +int totalCount = 0; +int jsonDocumentCount = 0; -if ( ! isRangeKeyValueConsistent(rangeKeyName, rangeKeyValue, session, flowFile) ) { -continue; -} +BatchGetItemOutcome result = dynamoDB.batchGetItem(tableKeysAndAttributes); Review comment: I'll see if I can decompose it more, but I think a single batch request is as small as we can do while still providing meaningful feedback during verification.
[GitHub] [nifi] gresockj commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
gresockj commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r762950327 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/FetchS3Object.java ## @@ -148,6 +153,37 @@ return problems; } +@Override +public List verify(ProcessContext context, ComponentLog verificationLogger, Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); + +final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(attributes).getValue(); +final String key = context.getProperty(KEY).evaluateAttributeExpressions(attributes).getValue(); + +final AmazonS3 client = getConfiguration(context).getClient(); +final GetObjectRequest request = createGetObjectRequest(context, attributes); + +try (final S3Object s3Object = client.getObject(request)) { Review comment: Ah, I see what you mean with `HeadObject` -- that is what we want, then.
[GitHub] [nifi-minifi-cpp] lordgamez commented on a change in pull request #1223: MINIFICPP-1223 Only reload script file in ExecutePythonScript when requested in property
lordgamez commented on a change in pull request #1223: URL: https://github.com/apache/nifi-minifi-cpp/pull/1223#discussion_r762864051 ## File path: PROCESSORS.md ## @@ -421,8 +421,9 @@ In the list below, the names of required properties appear in bold. Any other pr | Name | Default Value | Allowable Values | Description | | - | - | - | - | -|Script File|||Path to script file to execute. Only one of Script File or Script Body may be used| +|**Reload on Script Change**|false||If true and Script File property is used, then script file will be reloaded if it has changed, otherwise the first loaded version will be used at all times.| Review comment: My only concern was backward compatibility, especially for native Python processors: previously, even though it looked like we reloaded the script, it was actually never reloaded. But on second thought I don't think it would be a problem, and since we still haven't released version 1.0, I updated it in 3a8c9b43245739352f3fef8602dac1592f0eafdd
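The reload semantics discussed in this thread can be sketched as a small, pure policy function. This is a hypothetical illustration in Java (the actual MiNiFi C++ implementation is different code): reload happens only when the "Reload on Script Change" property is enabled and the file's last-write time has advanced past the version that was loaded.

```java
import java.nio.file.attribute.FileTime;

// Hypothetical sketch of the reload decision discussed above.
// Names and structure are illustrative, not the actual implementation.
final class ScriptReloadPolicy {

    /**
     * @param lastLoaded     last-write time of the script version we loaded, or null if never loaded
     * @param current        the script file's current last-write time
     * @param reloadOnChange value of the "Reload on Script Change" property
     */
    static boolean shouldReload(FileTime lastLoaded, FileTime current, boolean reloadOnChange) {
        if (lastLoaded == null) {
            return true;                          // first load always happens
        }
        if (!reloadOnChange) {
            return false;                         // property off: keep the first loaded version
        }
        return current.compareTo(lastLoaded) > 0; // reload only if the file is newer
    }
}
```

Under this model, the backward-compatibility concern above amounts to the default flipping from "always keep the first loaded version" to "reload when the property says so".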
[GitHub] [nifi] simonbence commented on pull request #5541: NIFI-9397 Adding sensitive dynamic property support for JettyWebSocketClient
simonbence commented on pull request #5541: URL: https://github.com/apache/nifi/pull/5541#issuecomment-986576507 Thanks for your invested time, @exceptionfactory! Your concerns are valid, and based on your reasoning I will send in another PR soon with a different, more secure approach. This PR will be closed.
[GitHub] [nifi] simonbence closed pull request #5541: NIFI-9397 Adding sensitive dynamic property support for JettyWebSocketClient
simonbence closed pull request #5541: URL: https://github.com/apache/nifi/pull/5541
[GitHub] [nifi] turcsanyip commented on a change in pull request #5550: NIFI-9391: Modified MergeRecord to process FlowFiles within a loop in…
turcsanyip commented on a change in pull request #5550: URL: https://github.com/apache/nifi/pull/5550#discussion_r762805421 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MergeRecord.java ## @@ -323,56 +323,64 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory } } -final ProcessSession session = sessionFactory.createSession(); -final List flowFiles = session.get(FlowFileFilters.newSizeBasedFilter(250, DataUnit.KB, 250)); -if (getLogger().isDebugEnabled()) { -final List ids = flowFiles.stream().map(ff -> "id=" + ff.getId()).collect(Collectors.toList()); -getLogger().debug("Pulled {} FlowFiles from queue: {}", new Object[] {ids.size(), ids}); -} +while (isScheduled()) { +final ProcessSession session = sessionFactory.createSession(); +final List flowFiles = session.get(FlowFileFilters.newSizeBasedFilter(250, DataUnit.KB, 250)); +if (flowFiles.isEmpty()) { +break; +} +if (getLogger().isDebugEnabled()) { +final List ids = flowFiles.stream().map(ff -> "id=" + ff.getId()).collect(Collectors.toList()); +getLogger().debug("Pulled {} FlowFiles from queue: {}", ids.size(), ids); +} -final String mergeStrategy = context.getProperty(MERGE_STRATEGY).getValue(); -final boolean block; -if (MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { -block = true; -} else if (context.getProperty(CORRELATION_ATTRIBUTE_NAME).isSet()) { -block = true; -} else { -block = false; -} +final String mergeStrategy = context.getProperty(MERGE_STRATEGY).getValue(); +final boolean block; +if (MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { +block = true; +} else if (context.getProperty(CORRELATION_ATTRIBUTE_NAME).isSet()) { +block = true; +} else { +block = false; +} Review comment: @Lehel44 This code block has not changed now (except indentation) so I would not modify it.
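For context, the unchanged block the reviewer refers to decides whether MergeRecord must "block" while binning: blocking is required when the Defragment merge strategy is selected or when a Correlation Attribute Name is set. A minimal, hypothetical distillation of that predicate (not the processor's actual API, which reads these values from a ProcessContext):

```java
// Hypothetical distillation of the blocking decision quoted in the diff above.
final class MergeBlockDecision {
    static final String MERGE_STRATEGY_DEFRAGMENT = "Defragment";

    static boolean shouldBlock(String mergeStrategy, boolean correlationAttributeSet) {
        // Block when defragmenting, or when correlating FlowFiles on an attribute
        return MERGE_STRATEGY_DEFRAGMENT.equals(mergeStrategy) || correlationAttributeSet;
    }
}
```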
[GitHub] [nifi] turcsanyip commented on a change in pull request #5550: NIFI-9391: Modified MergeRecord to process FlowFiles within a loop in…
turcsanyip commented on a change in pull request #5550: URL: https://github.com/apache/nifi/pull/5550#discussion_r762804091 ## File path: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/MergeRecord.java ## @@ -323,56 +323,64 @@ public void onTrigger(final ProcessContext context, final ProcessSessionFactory } } -final ProcessSession session = sessionFactory.createSession(); -final List flowFiles = session.get(FlowFileFilters.newSizeBasedFilter(250, DataUnit.KB, 250)); -if (getLogger().isDebugEnabled()) { -final List ids = flowFiles.stream().map(ff -> "id=" + ff.getId()).collect(Collectors.toList()); -getLogger().debug("Pulled {} FlowFiles from queue: {}", new Object[] {ids.size(), ids}); -} +while (isScheduled()) { +final ProcessSession session = sessionFactory.createSession(); +final List flowFiles = session.get(FlowFileFilters.newSizeBasedFilter(250, DataUnit.KB, 250)); +if (flowFiles.isEmpty()) { +break; +} +if (getLogger().isDebugEnabled()) { +final List ids = flowFiles.stream().map(ff -> "id=" + ff.getId()).collect(Collectors.toList()); +getLogger().debug("Pulled {} FlowFiles from queue: {}", ids.size(), ids); +} -final String mergeStrategy = context.getProperty(MERGE_STRATEGY).getValue(); -final boolean block; -if (MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { -block = true; -} else if (context.getProperty(CORRELATION_ATTRIBUTE_NAME).isSet()) { -block = true; -} else { -block = false; -} +final String mergeStrategy = context.getProperty(MERGE_STRATEGY).getValue(); +final boolean block; +if (MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { +block = true; +} else if (context.getProperty(CORRELATION_ATTRIBUTE_NAME).isSet()) { +block = true; +} else { +block = false; +} -try { -for (final FlowFile flowFile : flowFiles) { -try { -binFlowFile(context, flowFile, session, manager, block); -} catch (final Exception e) { -getLogger().error("Failed to bin {} due to {}", new 
Object[] {flowFile, e}); -session.transfer(flowFile, REL_FAILURE); +try { +for (final FlowFile flowFile : flowFiles) { +try { +binFlowFile(context, flowFile, session, manager, block); +} catch (final Exception e) { +getLogger().error("Failed to bin {} due to {}", flowFile, e); +session.transfer(flowFile, REL_FAILURE); +} } +} finally { +session.commitAsync(); } -} finally { -session.commitAsync(); -} -// If there is no more data queued up, or strategy is defragment, complete any bin that meets our minimum threshold -// Otherwise, run one more cycle to process queued FlowFiles to add more fragment into available bins. -int completedBins = 0; -if (flowFiles.isEmpty() || MERGE_STRATEGY_DEFRAGMENT.getValue().equals(mergeStrategy)) { +// Complete any bins that have reached their expiration date try { -completedBins += manager.completeFullEnoughBins(); +manager.completeExpiredBins(); } catch (final Exception e) { -getLogger().error("Failed to merge FlowFiles to create new bin due to " + e, e); +getLogger().error("Failed to merge FlowFiles to create new bin due to {}", e); Review comment: @markap14 Thanks for catching it. Modified the log statements.
[GitHub] [nifi] turcsanyip commented on a change in pull request #5550: NIFI-9391: Modified MergeRecord to process FlowFiles within a loop in…
turcsanyip commented on a change in pull request #5550: URL: https://github.com/apache/nifi/pull/5550#discussion_r762803566 ## File path: nifi-mock/src/main/java/org/apache/nifi/util/StandardProcessorTestRunner.java ## @@ -221,28 +221,18 @@ public void run(final int iterations, final boolean stopOnFinish, final boolean } catch (final InterruptedException e1) { } -int finishedCount = 0; -boolean unscheduledRun = false; for (final Future future : futures) { try { final Throwable thrown = future.get(); // wait for the result if (thrown != null) { throw new AssertionError(thrown); } - -if (++finishedCount == 1) { -unscheduledRun = true; -unSchedule(); -} } catch (final Exception e) { } } -if (!unscheduledRun) { -unSchedule(); -} - if (stopOnFinish) { +unSchedule(); Review comment: @markap14 I reverted the change and added `stopOnFinish` checks to the existing `if` statements. Could you please check this version?
[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
ChrisSamo632 commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r762795882 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/wag/InvokeAWSGatewayApi.java ## @@ -380,4 +355,87 @@ public void onTrigger(ProcessContext context, ProcessSession session) throws Pro } } } + +@Override +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); + +final String method = context.getProperty(PROP_METHOD).getValue(); +final String endpoint = context.getProperty(PROP_AWS_GATEWAY_API_ENDPOINT).getValue(); +final String resource = context.getProperty(PROP_RESOURCE_NAME).getValue(); +try { +final GenericApiGatewayClient client = getConfiguration(context).getClient(); + +final GatewayResponse gatewayResponse = invokeGateway(client, context, null, null, attributes, verificationLogger); Review comment: My concern here is still that a `GET` may not be idempotent - can the verification step be turned off? If my API Gateway GET results in other things happening downstream that may impact my system, I might not want it being executed until my NiFi flow has first done other things upstream (e.g. inserted data into a datastore that the API Gateway then does something with; without that data being present, it could cause my system problems).
[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
ChrisSamo632 commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r762794563 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/dynamodb/GetDynamoDB.java ## @@ -100,42 +104,75 @@ } @Override -public void onTrigger(final ProcessContext context, final ProcessSession session) { -List flowFiles = session.get(context.getProperty(BATCH_SIZE).evaluateAttributeExpressions().asInteger()); -if (flowFiles == null || flowFiles.size() == 0) { -return; -} +public List verify(final ProcessContext context, final ComponentLog verificationLogger, final Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); -Map keysToFlowFileMap = new HashMap<>(); +final TableKeysAndAttributes tableKeysAndAttributes = getTableKeysAndAttributes(context, attributes); final String table = context.getProperty(TABLE).evaluateAttributeExpressions().getValue(); -TableKeysAndAttributes tableKeysAndAttributes = new TableKeysAndAttributes(table); - -final String hashKeyName = context.getProperty(HASH_KEY_NAME).evaluateAttributeExpressions().getValue(); -final String rangeKeyName = context.getProperty(RANGE_KEY_NAME).evaluateAttributeExpressions().getValue(); final String jsonDocument = context.getProperty(JSON_DOCUMENT).evaluateAttributeExpressions().getValue(); -for (FlowFile flowFile : flowFiles) { -final Object hashKeyValue = getValue(context, HASH_KEY_VALUE_TYPE, HASH_KEY_VALUE, flowFile); -final Object rangeKeyValue = getValue(context, RANGE_KEY_VALUE_TYPE, RANGE_KEY_VALUE, flowFile); +if (tableKeysAndAttributes.getPrimaryKeys().isEmpty()) { -if ( ! 
isHashKeyValueConsistent(hashKeyName, hashKeyValue, session, flowFile)) { -continue; -} +results.add(new ConfigVerificationResult.Builder() +.outcome(Outcome.SKIPPED) +.verificationStepName("Get DynamoDB Items") +.explanation(String.format("Skipped getting DynamoDB items because no primary keys would be included in retrieval")) +.build()); +} else { +try { +final DynamoDB dynamoDB = getDynamoDB(getConfiguration(context).getClient()); +int totalCount = 0; +int jsonDocumentCount = 0; -if ( ! isRangeKeyValueConsistent(rangeKeyName, rangeKeyValue, session, flowFile) ) { -continue; -} +BatchGetItemOutcome result = dynamoDB.batchGetItem(tableKeysAndAttributes); Review comment: Shame that `GetTable` does that, but oh well. Could we instead limit the pull of data to a single item, i.e. if we have to try fetching something, fetch as little as possible in order to reduce both the cost and the time taken for the verification steps?
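The cost-limiting idea above (probe with at most one item rather than the full batch) can be sketched as a trivial pre-step applied to the key set before the verification request. Names here are hypothetical; the real processor builds a `TableKeysAndAttributes` rather than a plain list.

```java
import java.util.List;

// Hypothetical sketch of limiting a verification fetch to a single item
// so the configuration check stays as cheap as possible.
final class VerificationKeyLimiter {
    static List<String> limitForVerification(List<String> primaryKeys) {
        // Keep at most the first key; an empty input stays empty (and the
        // verification step would then be reported as SKIPPED).
        return primaryKeys.isEmpty() ? List.of() : List.of(primaryKeys.get(0));
    }
}
```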
[GitHub] [nifi] ChrisSamo632 commented on a change in pull request #5504: NIFI-9353: Adding Config Verification to AWS Processors
ChrisSamo632 commented on a change in pull request #5504: URL: https://github.com/apache/nifi/pull/5504#discussion_r762793801 ## File path: nifi-nar-bundles/nifi-aws-bundle/nifi-aws-processors/src/main/java/org/apache/nifi/processors/aws/s3/FetchS3Object.java ## @@ -148,6 +153,37 @@ return problems; } +@Override +public List verify(ProcessContext context, ComponentLog verificationLogger, Map attributes) { +final List results = new ArrayList<>(super.verify(context, verificationLogger, attributes)); + +final String bucket = context.getProperty(BUCKET).evaluateAttributeExpressions(attributes).getValue(); +final String key = context.getProperty(KEY).evaluateAttributeExpressions(attributes).getValue(); + +final AmazonS3 client = getConfiguration(context).getClient(); +final GetObjectRequest request = createGetObjectRequest(context, attributes); + +try (final S3Object s3Object = client.getObject(request)) { Review comment: A quick look at the [AWS docs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html) suggests that if one can `GetObject`, then they will be able to `HeadObject`. Particularly as the user may need to specify a large object to be retrieved from S3 by the processor, doing this more times than necessary seems like a waste and an unnecessary extra cost. I know that my use of this processor often results in pulling many objects that range from very small to much larger; I'd want to turn this step off if it meant doubling up on pulling some of the larger files over the internet!
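The pattern agreed on in this thread is to verify with a HEAD-style request rather than a full download; in the AWS SDK for Java v1 that would mean `AmazonS3#getObjectMetadata` (which issues `HeadObject`) instead of `getObject`. A self-contained sketch of the shape of that check, with a hypothetical `Probe` interface standing in for the S3 client:

```java
// Hypothetical sketch: verification probes for existence without transferring
// the object body. Probe stands in for the S3 client; real code would call
// AmazonS3#getObjectMetadata (a HEAD request) instead of getObject (a full GET).
interface Probe {
    boolean exists(String bucket, String key); // HEAD-style check, no body download
}

final class FetchVerifier {
    static String verify(Probe probe, String bucket, String key) {
        try {
            return probe.exists(bucket, key)
                    ? "SUCCESSFUL: HEAD found s3://" + bucket + "/" + key
                    : "FAILED: object not found";
        } catch (RuntimeException e) {
            return "FAILED: " + e.getMessage();
        }
    }
}
```

Because the probe never reads the body, verification cost no longer scales with object size, which addresses the large-object concern raised above.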