[GitHub] nifi issue #3174: NIFI-5820 NiFi built on Java 1.8 can run on Java 9/10/11
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3174 Rebased this against current master, but some additional updates need to be made along with some more local testing. Setting this back to WIP while I make these changes... ---
[GitHub] nifi issue #3174: [WIP] NIFI-5820 NiFi built on Java 1.8 can run on Java 9/1...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3174 Thanks for taking a look at the PR, @joewitt. That warning is due to accessing, via reflection, the `pid` method on the Process API, which was added in Java 9. The code that does this was added by NIFI-5175, to allow NiFi built on Java 1.8 to run on Java 9. There's a comment in the code detailing why the use of reflection is necessary. Please see https://github.com/apache/nifi/blob/master/nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/util/OSUtils.java#L111. The warning is expected, and when we have a minimum requirement of Java 11, we can refactor OSUtils, or probably remove the class entirely, since the Process API (as of Java 9) provides a platform-independent way to get a PID. At that point we won't need the OSUtils methods that look up the PID based on the platform on which NiFi is running. ---
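[Editor's note: the reflective lookup discussed above can be sketched roughly as follows. This is a minimal illustration, not the actual OSUtils code; the helper name `pidOf` and the `-1` fallback are invented for this sketch. `Process.pid()` exists only on Java 9+, so the lookup is done reflectively to keep the class compiling and running on Java 8.]

```java
import java.lang.reflect.Method;

public class PidSketch {
    // Hypothetical helper (not NiFi's OSUtils): looks up Process.pid(), which
    // was added in Java 9, via reflection so this code still compiles and runs
    // on Java 8, where the method does not exist.
    static long pidOf(Process process) {
        try {
            Method pid = Process.class.getMethod("pid"); // NoSuchMethodException on Java 8
            return (Long) pid.invoke(process);
        } catch (ReflectiveOperationException e) {
            return -1; // Java 8: would fall back to a platform-specific PID lookup
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "1").start(); // assumes a POSIX "sleep"
        System.out.println("pid = " + pidOf(p));
        p.waitFor();
    }
}
```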
[GitHub] nifi pull request #3174: [WIP] NIFI-5820 NiFi built on Java 1.8 can run on J...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/3174 [WIP] NIFI-5820 NiFi built on Java 1.8 can run on Java 9/10/11 Updated RunNiFi.java to add the libs needed to run on Java 11 when it is the detected runtime Java version, and to grant access to the necessary module when running on Java 9 or 10 Added dependencies/includes/excludes to nifi-assembly configurations for enabling NiFi to run on Java 11 Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5820 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3174.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3174 commit 4859444b6b503a86953d0a6b06be07f081357552 Author: Jeff Storck Date: 2018-11-15T23:38:02Z NIFI-5820 NiFi built on Java 1.8 can run on Java 9/10/11 Updated RunNiFi.java to add libs needed to run on Java 11 when it is the detected runtime Java version and grant access to the necessary module when running on Java 9 or 10 Added dependencies/includes/excludes to nifi-assembly configurations for enabling NiFi to run on Java 11 ---
[GitHub] nifi issue #3129: NIFI-5748 Fixed proxy header support to use X-Forwarded-Ho...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3129 @mcgilman @kevdoran @alopresto @thenatog The PR is ready for review. https://github.com/jtstorck/proxy-nifi-docker can be used to test the PR, and there are instructions for starting it up and where the proxies are hosting NiFi. ---
[GitHub] nifi issue #3129: [WIP] NIFI-5748 Fixed proxy header support to use X-Forwar...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3129 https://github.com/jtstorck/proxy-nifi-docker can be used to test this PR. There's an issue with NiFi's handling of X-Forwarded-Host when Knox is proxying NiFi: the current code doesn't account for a port being present in that header. I'll update the code to handle this case, and update the PR. ---
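[Editor's note: the port-in-header case mentioned above can be illustrated with a small sketch. The helper name `splitHostHeader` and its return shape are hypothetical, not the actual NiFi fix; the point is only that a proxy such as Knox may forward `host:port` where `host` alone was expected.]

```java
public class ForwardedHostSketch {
    // Hypothetical helper: splits an X-Forwarded-Host value into host and an
    // optional port. Returns {host, port}, with port as "" when absent.
    static String[] splitHostHeader(String value) {
        int colon = value.lastIndexOf(':');
        // Treat the suffix as a port only if it is non-empty and all digits
        if (colon > 0 && colon < value.length() - 1
                && value.substring(colon + 1).chars().allMatch(Character::isDigit)) {
            return new String[]{value.substring(0, colon), value.substring(colon + 1)};
        }
        return new String[]{value, ""};
    }

    public static void main(String[] args) {
        // "knox.example.com" is a made-up host for illustration
        System.out.println(String.join(" / ", splitHostHeader("knox.example.com:8443")));
        System.out.println(String.join(" / ", splitHostHeader("knox.example.com")));
    }
}
```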
[GitHub] nifi issue #3129: [WIP] NIFI-5748 Fixed proxy header support to use X-Forwar...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3129 I'll be adding some docker-compose content for testing this PR. ---
[GitHub] nifi pull request #3129: [WIP] NIFI-5748 Fixed proxy header support to use X...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/3129 [WIP] NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead … …of X-ForwardedServer Added support for the context path header used by Traefik when proxying a service (X-Forwarded-Prefix) Added tests to ApplicationResourceTest for X-Forwarded-Context and X-Forwarded-Prefix Updated administration doc to include X-Forwarded-Prefix Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5748 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3129.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3129 commit 49aecd1127132e2c4cb12639cb9d66b14dee60d0 Author: Jeff Storck Date: 2018-10-29T17:29:28Z NIFI-5748 Fixed proxy header support to use X-Forwarded-Host instead of X-ForwardedServer Added support for the context path header used by Traefik when proxying a service (X-Forwarded-Prefix) Added tests to ApplicationResourceTest for X-Forwarded-Context and X-Forwarded-Prefix Updated administration doc to include X-Forwarded-Prefix ---
[GitHub] nifi-site pull request #32: NIFI-5773 Added some steps and details to the re...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi-site/pull/32 NIFI-5773 Added some steps and details to the release process Fixed several formatting problems with lists and bullet points Removed some extraneous mentions of "NIFI-" in front of ${JIRA_TICKET} You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi-site NIFI-5773 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-site/pull/32.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #32 commit 8dad7b08671b997632199f15f01909916ab36078 Author: Jeff Storck Date: 2018-10-31T19:48:05Z NIFI-5773 Added some steps and details to the release process Fixed several formatting problems with lists and bullet points Removed some extraneous mentions of "NIFI-" in front of ${JIRA_TICKET} ---
[GitHub] nifi issue #3102: NIFI-5737: Removing need client auth property as cluster c...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3102 Based on @markap14 giving a +1 on the JIRA (https://issues.apache.org/jira/browse/NIFI-5737) and @thenatog's testing, I'll merge this so that it will be part of NiFi 1.8.0 RC3. ---
[GitHub] nifi issue #3092: NIFI-5525 - CSVRecordReader fails with StringIndexOutOfBou...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3092 @patricker I'm hoping to get this PR merged in for NiFi 1.8.0 RC3 today. Could you please check the newest changes and merge if you are a +1? ---
[GitHub] nifi issue #3097: Revert "NIFI-4558 - Set JKS as the default keystore type a...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3097 +1, merged to master. I verified that the changes have been reverted, and that the full build with tests and contrib-check is successful. ---
[GitHub] nifi issue #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper port t...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3089 +1 LGTM. Merging! Thanks for this documentation, @andrewmlim! ---
[GitHub] nifi pull request #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3089#discussion_r226051004 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -73,9 +73,38 @@ When NiFi first starts up, the following files and directories are created: See the <> section of this guide for more information about configuring NiFi repositories and configuration files. +== Port Configuration + +=== NiFi +The following table lists the default ports used by NiFi and the corresponding property in the _nifi.properties_ file. + +[options="header,footer"] +|== +| Function| Property | Default Value +|HTTP Port| `nifi.web.http.port` | `8080` +|HTTPS Port* | `nifi.web.https.port` | `9443` +|Remote Input Socket Port*| `nifi.remote.input.socket.port` | `10443` +|Cluster Node Protocol Port* | `nifi.cluster.node.protocol.port` | `11443` +|Cluster Node Load Balancing Port | `nifi.cluster.node.load.balance.port` | `6342` +|Web HTTP Forwarding Port | `nifi.web.http.port.forwarding` | blank +|== + +NOTE: The ports marked with an asterisk (*) have property values that are blank by default in _nifi.properties_. The values shown in the table are the default values for these ports when <> is used to generate _nifi.properties_ for a secured NiFi instance. The default Certificate Authority Port used by TLS Toolkit is `8443`. + +=== Embedded Zookeeper +The following table lists the default ports used by an <> and the corresponding property in the _zookeeper.properties_ file. + +[options="header,footer"] +|== +| Function | Property | Default Value +|Zookeeper Client Port | `clientPort` | `2181` +|Zookeeper Server Quorum and Leader Election Ports | `server.1` | `localhost:2888:3888` --- End diff -- This looks good, with `blank` being changed to _`none`_. ---
[GitHub] nifi pull request #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3089#discussion_r226048817 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -73,9 +73,38 @@ When NiFi first starts up, the following files and directories are created: See the <> section of this guide for more information about configuring NiFi repositories and configuration files. +== Port Configuration + +=== NiFi +The following table lists the default ports used by NiFi and the corresponding property in the _nifi.properties_ file. + +[options="header,footer"] +|== +| Function| Property | Default Value +|HTTP Port| `nifi.web.http.port` | `8080` +|HTTPS Port* | `nifi.web.https.port` | `9443` +|Remote Input Socket Port*| `nifi.remote.input.socket.port` | `10443` +|Cluster Node Protocol Port* | `nifi.cluster.node.protocol.port` | `11443` +|Cluster Node Load Balancing Port | `nifi.cluster.node.load.balance.port` | `6342` +|Web HTTP Forwarding Port | `nifi.web.http.port.forwarding` | blank +|== + +NOTE: The ports marked with an asterisk (*) have property values that are blank by default in _nifi.properties_. The values shown in the table are the default values for these ports when <> is used to generate _nifi.properties_ for a secured NiFi instance. The default Certificate Authority Port used by TLS Toolkit is `8443`. --- End diff -- `empty` is probably not correct here. It could imply an empty string, which would not be the case for a property set like `nifi.web.http.port.forwarding=` as it is by default in nifi.properties. `blank` is probably the better option to use in the descriptions, but maybe _`null`_ or _`none`_ in the port lists? ---
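[Editor's note: the blank/empty/null distinction being debated above can be illustrated with how `java.util.Properties` treats a key declared with no value. This is a standalone sketch, not NiFi's property-loading code; the class name is invented for illustration.]

```java
import java.io.StringReader;
import java.util.Properties;

public class BlankPropertySketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // A key declared with no value, as "nifi.web.http.port.forwarding="
        // appears by default in nifi.properties
        props.load(new StringReader("nifi.web.http.port.forwarding="));

        // The key is present and maps to the empty string...
        System.out.println("present, empty: "
                + "".equals(props.getProperty("nifi.web.http.port.forwarding")));
        // ...whereas an absent key yields null
        System.out.println("absent, null: "
                + (props.getProperty("no.such.property") == null));
    }
}
```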
[GitHub] nifi pull request #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3089#discussion_r226036575 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -73,9 +73,38 @@ When NiFi first starts up, the following files and directories are created: See the <> section of this guide for more information about configuring NiFi repositories and configuration files. +== Port Configuration + +=== NiFi +The following table lists the default ports used by NiFi and the corresponding property in the _nifi.properties_ file. + +[options="header,footer"] +|== +| Function| Property | Default Value +|HTTP Port| `nifi.web.http.port` | `8080` +|HTTPS Port* | `nifi.web.https.port` | `9443` +|Remote Input Socket Port*| `nifi.remote.input.socket.port` | `10443` +|Cluster Node Protocol Port* | `nifi.cluster.node.protocol.port` | `11443` +|Cluster Node Load Balancing Port | `nifi.cluster.node.load.balance.port` | `6342` +|Web HTTP Forwarding Port | `nifi.web.http.port.forwarding` | blank +|== + +NOTE: The ports marked with an asterisk (*) have property values that are blank by default in _nifi.properties_. The values shown in the table are the default values for these ports when <> is used to generate _nifi.properties_ for a secured NiFi instance. The default Certificate Authority Port used by TLS Toolkit is `8443`. + +=== Embedded Zookeeper +The following table lists the default ports used by an <> and the corresponding property in the _zookeeper.properties_ file. 
+ +[options="header,footer"] +|== +| Function | Property | Default Value +|Zookeeper Client Port | `clientPort` | `2181` +|Zookeeper Server Quorum and Leader Election Ports | `server.1` | `localhost:2888:3888` --- End diff -- In zookeeper.properties, the default values are commented: ```properties # server.1=nifi-node1-hostname:2888:3888 # server.2=nifi-node2-hostname:2888:3888 # server.3=nifi-node3-hostname:2888:3888 ``` Technically this means that the default values for `server.1` to `server.N` are empty. The note below references that examples are commented out, which is good, but I think the port list should indicate that the `server.N` properties are empty by default. ---
[GitHub] nifi pull request #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3089#discussion_r226037655 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -73,9 +73,38 @@ When NiFi first starts up, the following files and directories are created: See the <> section of this guide for more information about configuring NiFi repositories and configuration files. +== Port Configuration + +=== NiFi +The following table lists the default ports used by NiFi and the corresponding property in the _nifi.properties_ file. + +[options="header,footer"] +|== +| Function| Property | Default Value +|HTTP Port| `nifi.web.http.port` | `8080` +|HTTPS Port* | `nifi.web.https.port` | `9443` +|Remote Input Socket Port*| `nifi.remote.input.socket.port` | `10443` +|Cluster Node Protocol Port* | `nifi.cluster.node.protocol.port` | `11443` +|Cluster Node Load Balancing Port | `nifi.cluster.node.load.balance.port` | `6342` +|Web HTTP Forwarding Port | `nifi.web.http.port.forwarding` | blank +|== + +NOTE: The ports marked with an asterisk (*) have property values that are blank by default in _nifi.properties_. The values shown in the table are the default values for these ports when <> is used to generate _nifi.properties_ for a secured NiFi instance. The default Certificate Authority Port used by TLS Toolkit is `8443`. --- End diff -- ```suggestion NOTE: The ports marked with an asterisk (*) have property values that are empty by default in _nifi.properties_. The values shown in the table are the default values for these ports when <> is used to generate _nifi.properties_ for a secured NiFi instance. The default Certificate Authority Port used by TLS Toolkit is `8443`. ``` ---
[GitHub] nifi pull request #3089: NIFI-5653 Added default NiFi and Embedded Zookeeper...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3089#discussion_r226037414 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -73,9 +73,38 @@ When NiFi first starts up, the following files and directories are created: See the <> section of this guide for more information about configuring NiFi repositories and configuration files. +== Port Configuration + +=== NiFi +The following table lists the default ports used by NiFi and the corresponding property in the _nifi.properties_ file. + +[options="header,footer"] +|== +| Function| Property | Default Value +|HTTP Port| `nifi.web.http.port` | `8080` +|HTTPS Port* | `nifi.web.https.port` | `9443` +|Remote Input Socket Port*| `nifi.remote.input.socket.port` | `10443` +|Cluster Node Protocol Port* | `nifi.cluster.node.protocol.port` | `11443` +|Cluster Node Load Balancing Port | `nifi.cluster.node.load.balance.port` | `6342` +|Web HTTP Forwarding Port | `nifi.web.http.port.forwarding` | blank --- End diff -- Can we represent no default value here other than "blank"? ---
[GitHub] nifi pull request #3080: NIFI-5701 Add documentation for Load Balancing conn...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3080#discussion_r225686661 --- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc --- @@ -986,13 +991,11 @@ automatically be 'cloned', and a copy will be sent to each of those Connections. Settings -The Settings Tab provides the ability to configure the Connection's name, FlowFile expiration, Back Pressure thresholds, and -Prioritization: +The Settings Tab provides the ability to configure the Connection's Name, FlowFile Expiration, Back Pressure Thresholds, Load Balance Strategy and Prioritization: --- End diff -- I would prefer tab being lowercased, since it's not part of the title of the actual tab. ---
[GitHub] nifi pull request #3080: NIFI-5701 Add documentation for Load Balancing conn...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3080#discussion_r225654613 --- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc --- @@ -636,21 +637,24 @@ For example: For additional information and examples, see the link:http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/crontrigger.html[Chron Trigger Tutorial^] in the Quartz documentation. += Concurrent Tasks Next, the Scheduling Tab provides a configuration option named 'Concurrent Tasks'. This controls how many threads the Processor will use. Said a different way, this controls how many FlowFiles should be processed by this Processor at the same time. Increasing this value will typically allow the Processor to handle more data in the same amount of time. However, it does this by using system resources that then are not usable by other Processors. This essentially provides a relative weighting of Processors -- it controls how much of the system's resources should be allocated to this Processor instead of other Processors. This field is available for most Processors. There are, however, some types of Processors that can only be scheduled with a single Concurrent task. += Run Schedule The 'Run Schedule' dictates how often the Processor should be scheduled to run. The valid values for this field depend on the selected Scheduling Strategy (see above). If using the Event driven Scheduling Strategy, this field is not available. When using the Timer driven Scheduling Strategy, this value is a time duration specified by a number followed by a time unit. For example, `1 second` or `5 mins`. The default value of `0 sec` means that the Processor should run as often as possible as long as it has data to process. This is true for any time duration of 0, regardless of the time unit (i.e., `0 sec`, `0 mins`, `0 days`). 
For an explanation of values that are applicable for the CRON driven Scheduling Strategy, see the description of the CRON driven Scheduling Strategy itself. -When configured for clustering, an Execution setting will be available. This setting is used to determine which node(s) the Processor will be += Execution +The Execution setting is used to determine which node(s) the Processor will be --- End diff -- ```suggestion The Execution setting is used to determine on which node(s) the Processor will be ``` ---
[GitHub] nifi pull request #3080: NIFI-5701 Add documentation for Load Balancing conn...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3080#discussion_r225673660 --- Diff: nifi-docs/src/main/asciidoc/user-guide.adoc --- @@ -986,13 +991,11 @@ automatically be 'cloned', and a copy will be sent to each of those Connections. Settings -The Settings Tab provides the ability to configure the Connection's name, FlowFile expiration, Back Pressure thresholds, and -Prioritization: +The Settings Tab provides the ability to configure the Connection's Name, FlowFile Expiration, Back Pressure Thresholds, Load Balance Strategy and Prioritization: --- End diff -- ```suggestion The Settings tab provides the ability to configure the Connection's Name, FlowFile Expiration, Back Pressure Thresholds, Load Balance Strategy and Prioritization: ``` ---
[GitHub] nifi pull request #3071: NIFI-5696 Update references to default value for ni...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/3071 NIFI-5696 Update references to default value for nifi.cluster.node.lo… …ad.load.balance.port Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [n/a] Have you written or updated unit tests to verify your changes? - [n/a] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [n/a] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [n/a] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [n/a] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [n/a] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5696 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3071.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3071 commit 8c5329eb64fe14a55b12e0af08783bdcbc6a7c0b Author: Jeff Storck Date: 2018-10-12T20:57:15Z NIFI-5696 Update references to default value for nifi.cluster.node.load.load.balance.port ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r224164285 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). --- End diff -- +1 ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r224160990 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). --- End diff -- Maybe it would be better to be less explicit: NOTE: Not all nodes in a "Disconnected" state can be offloaded. If the node is disconnected and unreachable, the offload request cannot be received by the node to start the offloading. Additionally, offloading may be interrupted or prevented due to firewall rules. ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r224150506 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). --- End diff -- I misspoke on the load balance port being blocked. That would prevent other nodes from sending flowfiles through load balancing to that particular node. If the disconnected node is unreachable, it would not be able to receive the offload request. 
I agree with moving the error scenarios to the end of the section, or to a separate error handling/troubleshooting section. ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r223919361 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -3939,8 +3976,7 @@ to the cluster. It provides an additional layer of security. This value is blank |`nifi.cluster.flow.election.max.candidates`|Specifies the number of Nodes required in the cluster to cause early election of Flows. This allows the Nodes in the cluster to avoid having to wait a long time before starting processing if we reach at least this number of nodes in the cluster. |`nifi.cluster.load.balance.port`|Specifies the port to listen on for incoming connections for load balancing data across the cluster. The default value is `6342`. --- End diff -- @markap14 Given the difference of the two values that are used for the default `nifi.cluster.load.balance.port` property, should one value be used in both places, or was the difference intentional? ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r223919080 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). + + Offload Nodes + +Flowfiles that remain on a disconnected node can be rebalanced to other active nodes in the cluster via offloading. In the Cluster Management dialog, select the "Offload" icon (image:iconOffload.png["Offload Icon"]) for a Disconnected node. 
This will stop all processors, terminate all processors, stop transmitting on all remote process groups and rebalance flowfiles to the other connected nodes in the cluster. + +image::offloading-node-cluster-mgt.png["Offloading Node in Cluster Management UI"] + +Nodes that remain in "Offloading" state due to errors encountered (out of memory, no network connection, etc.) can be reconnected to the cluster by restarting NiFi on the node. Offloaded nodes can be either reconnected to the cluster (by selecting Connect or restarting NiFi on the node) or deleted from the cluster. + +image::offloaded-node-cluster-mgt.png["Offloaded Node in Cluster Management UI"] + + Delete Nodes + +There are cases where a DFM may wish to continue making changes to the flow, even though a node is not connected to the cluster. In this case, the DFM may elect to delete the node from the cluster entirely. In the Cluster Management dialog, select the "Delete" icon (image:iconDelete.png["Delete Icon"]) for a Disconnected or Offloaded node. Once deleted, the node cannot be rejoined to the cluster until it has been restarted. + + Decommission Nodes + +The steps to decommission a node and remove it from a cluster are as follows: + +1. Disconnect the node. +2. Once disconnect completes, offload the node. +3. Once offload completes, delete the node. +4. Once the delete request has finished, stop/remove the NiFi service on the host. + + NiFi Toolkit Node Commands -A DFM may manually disconnect a node from the cluster. But if a node becomes disconnected for any other reason (such as due to lack of heartbeat), -the Cluster Coordinator will show a bulletin on the User Interface. The DFM will not be able to make any changes to the dataflow until the issue -of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any -new changes may be made to the dataflow. 
However, it is worth noting that just because a node is disconnected does not mean that it is not working; -this may happen for a few reasons, including that the node is unable to communicate with the Cluster Coordinator due to network problems. +As an alternative to the UI, the following NiFi Toolkit CLI commands can be used for retrieving a single node, retrieving a list of nodes, and connecting/disconnecting/offloading/deleting nodes: -There are cases where a DFM may wish to continue making changes to the flow, even though a node is not connected to the cluster. -In this case, they DFM may elect to remove the node from the cluster entirely through the Cluster Management dialog. Once removed, -the node cannot be rejoined to the c
[GitHub] nifi issue #2884: NIFI-3993 Updated the ZooKeeper version to 3.4.10
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2884 @HorizonNet No objections. Feel free to close the PR. ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r223914705 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -3939,8 +3976,7 @@ to the cluster. It provides an additional layer of security. This value is blank |`nifi.cluster.flow.election.max.candidates`|Specifies the number of Nodes required in the cluster to cause early election of Flows. This allows the Nodes in the cluster to avoid having to wait a long time before starting processing if we reach at least this number of nodes in the cluster. |`nifi.cluster.load.balance.port`|Specifies the port to listen on for incoming connections for load balancing data across the cluster. The default value is `6342`. --- End diff -- This wasn't part of your PR, but I noticed that the documented default value of 6342 for nifi.cluster.load.balance.port is not technically correct. There's Maven filtering occurring in nifi-framework/nifi-resources/pom.xml that sets the property to 7430 at build time. The resulting nifi.properties in nifi-assembly/target/nifi-1.8.0-SNAPSHOT-bin/nifi-1.8.0-SNAPSHOT/conf has:

```properties
nifi.cluster.load.balance.port=7430
```

The following code (from nifi-properties/src/main/java/org/apache/nifi/util/NiFiProperties.java) that reads this property will only use the default of 6342 if the property is missing from nifi.properties:

```java
public InetSocketAddress getClusterLoadBalanceAddress() {
    try {
        String address = getProperty(LOAD_BALANCE_ADDRESS);
        if (StringUtils.isBlank(address)) {
            address = getProperty(CLUSTER_NODE_ADDRESS);
        }
        if (StringUtils.isBlank(address)) {
            address = "localhost";
        }
        final int port = getIntegerProperty(LOAD_BALANCE_PORT, DEFAULT_LOAD_BALANCE_PORT);
        return InetSocketAddress.createUnresolved(address, port);
    } catch (final Exception e) {
        throw new RuntimeException("Invalid load balance address/port due to: " + e, e);
    }
}
```

---
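The lookup behavior described in the comment above (build-time filtered value wins; the compiled-in default applies only when the key is absent) can be sketched in isolation. The helper and class names below are illustrative, not the actual NiFiProperties implementation:

```java
import java.util.Properties;

// Minimal sketch of the default-fallback property lookup pattern discussed
// above. Only the behavior (missing key -> compiled-in default) mirrors the
// comment; names are hypothetical.
public class PropertyFallbackSketch {
    static final int DEFAULT_LOAD_BALANCE_PORT = 6342;

    public static int getIntegerProperty(final Properties props, final String key, final int defaultValue) {
        final String value = props.getProperty(key);
        if (value == null || value.trim().isEmpty()) {
            return defaultValue; // key absent or blank: fall back to the code default
        }
        return Integer.parseInt(value.trim());
    }

    public static void main(String[] args) {
        final Properties filtered = new Properties();
        // Value written into nifi.properties by Maven filtering at build time:
        filtered.setProperty("nifi.cluster.load.balance.port", "7430");
        System.out.println(getIntegerProperty(filtered, "nifi.cluster.load.balance.port", DEFAULT_LOAD_BALANCE_PORT)); // 7430

        final Properties missing = new Properties();
        // Only when the property is missing does the compiled-in default apply:
        System.out.println(getIntegerProperty(missing, "nifi.cluster.load.balance.port", DEFAULT_LOAD_BALANCE_PORT)); // 6342
    }
}
```

This is why the documented default and the shipped nifi.properties can disagree without either being "wrong" at runtime: the filtered file always carries an explicit value.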
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r223912939 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). + + Offload Nodes + +Flowfiles that remain on a disconnected node can be rebalanced to other active nodes in the cluster via offloading. In the Cluster Management dialog, select the "Offload" icon (image:iconOffload.png["Offload Icon"]) for a Disconnected node. 
This will stop all processors, terminate all processors, stop transmitting on all remote process groups and rebalance flowfiles to the other connected nodes in the cluster. + +image::offloading-node-cluster-mgt.png["Offloading Node in Cluster Management UI"] + +Nodes that remain in "Offloading" state due to errors encountered (out of memory, no network connection, etc.) can be reconnected to the cluster by restarting NiFi on the node. Offloaded nodes can be either reconnected to the cluster (by selecting Connect or restarting NiFi on the node) or deleted from the cluster. + +image::offloaded-node-cluster-mgt.png["Offloaded Node in Cluster Management UI"] + + Delete Nodes + +There are cases where a DFM may wish to continue making changes to the flow, even though a node is not connected to the cluster. In this case, the DFM may elect to delete the node from the cluster entirely. In the Cluster Management dialog, select the "Delete" icon (image:iconDelete.png["Delete Icon"]) for a Disconnected or Offloaded node. Once deleted, the node cannot be rejoined to the cluster until it has been restarted. + + Decommission Nodes + +The steps to decommission a node and remove it from a cluster are as follows: + +1. Disconnect the node. +2. Once disconnect completes, offload the node. +3. Once offload completes, delete the node. +4. Once the delete request has finished, stop/remove the NiFi service on the host. + + NiFi Toolkit Node Commands -A DFM may manually disconnect a node from the cluster. But if a node becomes disconnected for any other reason (such as due to lack of heartbeat), -the Cluster Coordinator will show a bulletin on the User Interface. The DFM will not be able to make any changes to the dataflow until the issue -of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any -new changes may be made to the dataflow. 
However, it is worth noting that just because a node is disconnected does not mean that it is not working; -this may happen for a few reasons, including that the node is unable to communicate with the Cluster Coordinator due to network problems. +As an alternative to the UI, the following NiFi Toolkit CLI commands can be used for retrieving a single node, retrieving a list of nodes, and connecting/disconnecting/offloading/deleting nodes: -There are cases where a DFM may wish to continue making changes to the flow, even though a node is not connected to the cluster. -In this case, they DFM may elect to remove the node from the cluster entirely through the Cluster Management dialog. Once removed, -the node cannot be rejoined to the clust
[GitHub] nifi issue #3055: NIFI-5600: Fixing columns in queue listing and component s...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3055 Just tested this PR after cherry-picking the commit into a branch off of the offloading PR (PR 3010), and it looks good on the queue listing and in provenance; the node on which the files are queued is displayed. Opening the provenance UI from a queue listing for a specific flowfile, and opening the provenance UI from the hamburger menu also shows the node. ---
[GitHub] nifi pull request #3056: NIFI-5659 Add documentation for Offloading Nodes fu...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3056#discussion_r223855961 --- Diff: nifi-docs/src/main/asciidoc/administration-guide.adoc --- @@ -2393,19 +2395,53 @@ When the DFM makes changes to the dataflow, the node that receives the request t nodes and waits for each node to respond, indicating that it has made the change on its local flow. -*Dealing with Disconnected Nodes* + +=== Managing Nodes + + Disconnect Nodes + +A DFM may manually disconnect a node from the cluster. A node may also become disconnected for other reasons, such as due to a lack of heartbeat. The Cluster Coordinator will show a bulletin on the User Interface when a node is disconnected. The DFM will not be able to make any changes to the dataflow until the issue of the disconnected node is resolved. The DFM or the Administrator will need to troubleshoot the issue with the node and resolve it before any new changes can be made to the dataflow. However, it is worth noting that just because a node is disconnected does not mean that it is not working. This may happen for a few reasons, for example when the node is unable to communicate with the Cluster Coordinator due to network problems. + +To manually disconnect a node, select the "Disconnect" icon (image:iconDisconnect.png["Disconnect Icon"]) from the node's row. + +image::disconnected-node-cluster-mgt.png["Disconnected Node in Cluster Management UI"] + +A disconnected node can be connected (image:iconConnect.png["Connect Icon"]), offloaded (image:iconOffload.png["Offload Icon"]) or deleted (image:iconDelete.png["Delete Icon"]). --- End diff -- For a node that's disconnected due to lack of heartbeat, offloading isn't possible in all disconnected scenarios. If it gets disconnected because the node died, obviously it won't be able to be offloaded, but if the operation can't reach the node to start the offloading, it won't be able to start it and the UI should reflect that error. 
If it is disconnected due to firewall issues, that might affect offloading as well, if the load balancing port is also blocked by the firewall. ---
[GitHub] nifi issue #3055: NIFI-5600: Fixing columns in queue listing and component s...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3055 Reviewing... ---
[GitHub] nifi issue #2947: NIFI-5516: Implement Load-Balanced Connections
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2947 @markap14 Based on @ijokarumawak's +1, and my testing of this PR along with the changes in my PR for node offloading, I'm a +1 as well. Will merge this ASAP! Thanks for this awesome contribution! ---
[GitHub] nifi issue #2971: NIFI-5557: handling expired ticket by rollback and penaliz...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2971 +1, merged to master. Had to resolve some conflicts in my commit for PutHDFSTest after rebasing to master. Thanks for your contribution, @ekovacs! ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r220204503 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -389,16 +380,24 @@ public void process(InputStream in) throws IOException { session.transfer(putFlowFile, REL_SUCCESS); } catch (final Throwable t) { -if (tempDotCopyFile != null) { -try { -hdfs.delete(tempDotCopyFile, false); -} catch (Exception e) { -getLogger().error("Unable to remove temporary file {} due to {}", new Object[]{tempDotCopyFile, e}); -} + Optional causeOptional = findCause(t, GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor()); --- End diff -- My previous comment was a bit ambiguous, I apologize. Having this logic in this catch for all Throwables is fine, but you could move this bit into a separate catch(IOException e) block of this try/catch. ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r220206853 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -389,16 +380,24 @@ public void process(InputStream in) throws IOException { session.transfer(putFlowFile, REL_SUCCESS); } catch (final Throwable t) { -if (tempDotCopyFile != null) { -try { -hdfs.delete(tempDotCopyFile, false); -} catch (Exception e) { -getLogger().error("Unable to remove temporary file {} due to {}", new Object[]{tempDotCopyFile, e}); -} + Optional causeOptional = findCause(t, GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor()); +if (causeOptional.isPresent()) { + getLogger().warn(String.format("An error occured while connecting to HDFS. " --- End diff -- This could be changed to:

```java
getLogger().warn("An error occurred while connecting to HDFS. Rolling back session, and penalizing flow file {}",
        new Object[] {putFlowFile.getAttribute(CoreAttributes.UUID.key()), causeOptional.get()});
```

---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r220008088 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/NonLocalPartitionPartitioner.java --- @@ -0,0 +1,59 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.controller.queue.clustered.partition; + +import org.apache.nifi.controller.repository.FlowFileRecord; + +import java.util.concurrent.atomic.AtomicLong; + +/** + * PReturns remote partitions when queried for a partition; never returns the {@link LocalQueuePartition}. + */ +public class NonLocalPartitionPartitioner implements FlowFilePartitioner { +private final AtomicLong counter = new AtomicLong(0L); + +@Override +public QueuePartition getPartition(final FlowFileRecord flowFile, final QueuePartition[] partitions, final QueuePartition localPartition) { +QueuePartition remotePartition = null; +for (int i = 0, numPartitions = partitions.length; i < numPartitions; i++) { +final long count = counter.getAndIncrement(); --- End diff -- Very good catch! 
I've updated the partitioner to use a startIndex rather than the result of counter.getAndIncrement() each iteration. ---
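The fix described above (take the counter value once per call and scan from that start index, rather than calling getAndIncrement() on every loop iteration) can be sketched as follows. This is a hypothetical simplification with illustrative names, not the actual NonLocalPartitionPartitioner code:

```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of round-robin selection of a non-local partition. The shared
// counter advances exactly once per call; incrementing it inside the scan
// loop would skew the rotation whenever the local partition is skipped.
public class NonLocalPartitionerSketch {
    private final AtomicLong counter = new AtomicLong(0L);

    /** Returns the index of a partition other than localIndex, or -1 if none exists. */
    public int getPartition(final int numPartitions, final int localIndex) {
        final long startIndex = counter.getAndIncrement(); // advance once per call
        for (int i = 0; i < numPartitions; i++) {
            final int candidate = (int) ((startIndex + i) % numPartitions);
            if (candidate != localIndex) {
                return candidate;
            }
        }
        return -1; // only the local partition exists
    }

    public static void main(String[] args) {
        final NonLocalPartitionerSketch p = new NonLocalPartitionerSketch();
        final int[] picks = new int[4];
        for (int i = 0; i < picks.length; i++) {
            picks[i] = p.getPartition(3, 0); // partitions 0..2, local partition is 0
        }
        System.out.println(Arrays.toString(picks)); // [1, 1, 2, 1] -- local index 0 is never chosen
    }
}
```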
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r22790 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/NonLocalPartitionPartitioner.java --- @@ -0,0 +1,59 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ +package org.apache.nifi.controller.queue.clustered.partition; + +import org.apache.nifi.controller.repository.FlowFileRecord; + +import java.util.concurrent.atomic.AtomicLong; + +/** + * PReturns remote partitions when queried for a partition; never returns the {@link LocalQueuePartition}. --- End diff -- Fixed. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r22647 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/SocketLoadBalancedFlowFileQueue.java --- @@ -204,6 +206,19 @@ public synchronized void setLoadBalanceStrategy(final LoadBalanceStrategy strate setFlowFilePartitioner(partitioner); } +@Override +public void decommissionQueue() { +if (clusterCoordinator == null) { +// Not clustered, so don't change partitions +return; +} + +// TODO set decommissioned boolean --- End diff -- It can! ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r22525 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java --- @@ -662,6 +682,39 @@ private void handleReconnectionRequest(final ReconnectionRequestMessage request) } } +private void handleDecommissionRequest(final DecommissionMessage request) throws InterruptedException { +logger.info("Received decommission request message from manager with explanation: " + request.getExplanation()); +decommission(request.getExplanation()); +} + +private void decommission(final String explanation) throws InterruptedException { +writeLock.lock(); +try { + +logger.info("Decommissioning node due to " + explanation); + +// mark node as decommissioning +controller.setConnectionStatus(new NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONING, DecommissionCode.DECOMMISSIONED, explanation)); +// request to stop all processors on node +controller.stopAllProcessors(); --- End diff -- Done. Also, all RPGs will have stopTransmitting() called on them. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219612683 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/StandardFlowService.java --- @@ -662,6 +682,39 @@ private void handleReconnectionRequest(final ReconnectionRequestMessage request) } } +private void handleDecommissionRequest(final DecommissionMessage request) throws InterruptedException { +logger.info("Received decommission request message from manager with explanation: " + request.getExplanation()); --- End diff -- I replaced all occurrences of "from manager" with "from cluster coordinator". ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219607209 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/replication/ThreadPoolRequestReplicator.java --- @@ -180,6 +181,15 @@ public AsyncClusterResponse replicate(NiFiUser user, String method, URI uri, Obj } } +final List decommissioning = stateMap.get(NodeConnectionState.DECOMMISSIONING); --- End diff -- I agree. If requests were replicated to nodes other than decommissioned nodes, then the decommissioned node would be out of sync with the rest of the cluster and would not be able to rejoin the cluster. I added a check for the decommissioned state. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219602056 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/test/java/org/apache/nifi/cluster/coordination/heartbeat/TestAbstractHeartbeatMonitor.java --- @@ -244,11 +245,26 @@ public synchronized void finishNodeConnection(NodeIdentifier nodeId) { statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.CONNECTED)); } +@Override +public synchronized void finishNodeDecommission(NodeIdentifier nodeId) { +statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONED)); +} + +@Override +public synchronized void requestNodeDecommission(NodeIdentifier nodeId, DecommissionCode decommissionCode, String explanation) { +statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DECOMMISSIONED)); +} + @Override public synchronized void requestNodeDisconnect(NodeIdentifier nodeId, DisconnectionCode disconnectionCode, String explanation) { statuses.put(nodeId, new NodeConnectionStatus(nodeId, NodeConnectionState.DISCONNECTED)); } +//@Override --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219601868 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java --- @@ -841,6 +900,34 @@ void notifyOthersOfNodeStatusChange(final NodeConnectionStatus updatedStatus, fi senderListener.notifyNodeStatusChange(nodesToNotify, message); } +private void decommissionAsynchronously(final DecommissionMessage request, final int attempts, final int retrySeconds) { +final Thread decommissionThread = new Thread(new Runnable() { +@Override +public void run() { +final NodeIdentifier nodeId = request.getNodeId(); + +for (int i = 0; i < attempts; i++) { +try { +senderListener.decommission(request); +reportEvent(nodeId, Severity.INFO, "Node was decommissioned due to " + request.getExplanation()); +return; +} catch (final Exception e) { +logger.error("Failed to notify {} that it has been decommissioned from the cluster due to {}", request.getNodeId(), request.getExplanation()); --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599828 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java --- @@ -821,7 +878,9 @@ void notifyOthersOfNodeStatusChange(final NodeConnectionStatus updatedStatus, fi // Otherwise, get the active coordinator (or wait for one to become active) and then notify the coordinator. final Set nodesToNotify; if (notifyAllNodes) { -nodesToNotify = getNodeIdentifiers(NodeConnectionState.CONNECTED, NodeConnectionState.CONNECTING); +// TODO notify all nodes --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599522 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java --- @@ -526,6 +579,10 @@ public void removeNode(final NodeIdentifier nodeId, final String userDn) { storeState(); } +private void onNodeDecommissioned(final NodeIdentifier nodeId) { +eventListeners.stream().forEach(listener -> listener.onNodeDecommissioned(nodeId)); +} + private void onNodeRemoved(final NodeIdentifier nodeId) { eventListeners.stream().forEach(listener -> listener.onNodeRemoved(nodeId)); --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599484 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java --- @@ -526,6 +579,10 @@ public void removeNode(final NodeIdentifier nodeId, final String userDn) { storeState(); } +private void onNodeDecommissioned(final NodeIdentifier nodeId) { +eventListeners.stream().forEach(listener -> listener.onNodeDecommissioned(nodeId)); --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219599165 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/node/NodeClusterCoordinator.java --- @@ -494,6 +539,14 @@ public void requestNodeDisconnect(final NodeIdentifier nodeId, final Disconnecti disconnectAsynchronously(request, 10, 5); } +//@Override --- End diff -- Done. ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/3010#discussion_r219598024 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster-protocol/src/main/java/org/apache/nifi/cluster/coordination/ClusterCoordinator.java --- @@ -72,6 +91,16 @@ */ void requestNodeDisconnect(NodeIdentifier nodeId, DisconnectionCode disconnectionCode, String explanation); +///** --- End diff -- Done. ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r219380028 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -266,6 +271,16 @@ public Object run() { throw new IOException(configuredRootDirPath.toString() + " could not be created"); } changeOwner(context, hdfs, configuredRootDirPath, flowFile); +} catch (IOException e) { + boolean tgtExpired = hasCause(e, GSSException.class, gsse -> GSSException.NO_CRED == gsse.getMajor()); + if (tgtExpired) { +getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flow file %s", --- End diff -- The exception should be logged here, in addition to the flowfile UUID. It might be useful to have the stack trace and exception class available in the log, and we shouldn't suppress/omit the actual GSSException from the logging. It might also be a good idea to log this at the "warn" level, so that the user can choose not to have these show as bulletins on the processor in the UI. Since the flowfile is being rolled back, and hadoop-client will implicitly acquire a new ticket, I don't think this should show as an error. @mcgilman, @bbende, do either of you have a preference here? ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r219377632 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -266,6 +271,16 @@ public Object run() { throw new IOException(configuredRootDirPath.toString() + " could not be created"); } changeOwner(context, hdfs, configuredRootDirPath, flowFile); +} catch (IOException e) { --- End diff -- Thanks for changing this to use GSSException.getMajor(). I haven't tested a ticket expiration occurring during the execution of a call to ugi.doAs (as opposed to the ticket having expired before ugi.doAs is invoked), but I think it would be a good idea to move this catch block to the top-level try/catch block of the PrivilegedExceptionAction passed to ugi.doAs(). ---
[GitHub] nifi issue #3010: [WIP] NIFI-5585
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3010 While testing the reconnecting of a decommissioned node to the cluster, the issue detailed in [NIFI-5619](https://issues.apache.org/jira/browse/NIFI-5619) was encountered. ---
[GitHub] nifi issue #3010: [WIP] NIFI-5585
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3010 While testing the reconnection of a decommissioned node, with ~24,000 files split between the two nodes, an error occurred after reconnecting the decommissioned node and attempting to drop all the flowfiles in the queue, from both nodes: ``` 2018-09-20 15:25:22,594 ERROR [Drop FlowFiles for Connection cbbf2971-0165-1000--94b81269] o.a.n.c.q.c.SocketLoadBalancedFlowFileQueue Failed to drop FlowFiles for org.apache.nifi.controller.queue.clustered.SocketLoadBalancedFlowFileQueue@469caf69 java.lang.IllegalArgumentException: null at org.apache.nifi.controller.queue.QueueSize.<init>(QueueSize.java:31) at org.apache.nifi.controller.queue.QueueSize.add(QueueSize.java:67) at org.apache.nifi.controller.queue.clustered.SocketLoadBalancedFlowFileQueue.adjustSize(SocketLoadBalancedFlowFileQueue.java:514) at org.apache.nifi.controller.queue.clustered.SocketLoadBalancedFlowFileQueue.dropFlowFiles(SocketLoadBalancedFlowFileQueue.java:903) at org.apache.nifi.controller.queue.AbstractFlowFileQueue$2.run(AbstractFlowFileQueue.java:285) at java.lang.Thread.run(Thread.java:748) ``` The decommission operation was successful; all flowfiles were moved from the decommissioned node to the other node. After reconnecting the decommissioned node, I couldn't clear the flowfile queue. After restarting the cluster (both nodes), the queue showed as empty. ---
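The `IllegalArgumentException` originates in the `QueueSize` constructor (line 31 of the trace). The class itself isn't quoted in this thread, so the following is a minimal sketch, assuming an immutable size value object that rejects negative counts, of how a drop that over-decrements the tracked size can trigger exactly this failure:

```java
// Hypothetical sketch (the real org.apache.nifi.controller.queue.QueueSize is
// not quoted in this thread): an immutable size value object that rejects
// negative counts, so a removal that decrements the size below zero fails.
public class QueueSizeSketch {
    private final int objectCount;
    private final long totalSizeBytes;

    public QueueSizeSketch(final int objectCount, final long totalSizeBytes) {
        if (objectCount < 0 || totalSizeBytes < 0) {
            throw new IllegalArgumentException("Queue size values must be non-negative");
        }
        this.objectCount = objectCount;
        this.totalSizeBytes = totalSizeBytes;
    }

    // Negative deltas model removals; a double-counted drop can push past zero.
    public QueueSizeSketch add(final int countDelta, final long bytesDelta) {
        return new QueueSizeSketch(objectCount + countDelta, totalSizeBytes + bytesDelta);
    }

    public int getObjectCount() {
        return objectCount;
    }
}
```

Under that assumption, `dropFlowFiles` adjusting the size by more flowfiles than the queue still tracks (for example, counting the same flowfiles on both nodes) would produce a negative count and this exception.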
[GitHub] nifi issue #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PAT...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2988 @nalewis Can you run the specific NiFiGroovyTest via maven successfully? ---
[GitHub] nifi pull request #3010: [WIP] NIFI-5585
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/3010 [WIP] NIFI-5585 Please refer to https://issues.apache.org/jira/browse/NIFI-5585 for a description of use-cases for decommissioning nodes. This PR is based off of work that is being done via https://issues.apache.org/jira/browse/NIFI-5516. This is a work-in-progress PR. Nodes can be decommissioned, the flowfiles on the decommissioning node get moved to other nodes that are still connected to the cluster. A node can be decommissioned by first disconnecting it using the cluster node table's "Disconnect" icon, and then clicking on the "Decommission" icon. Some things that still need to be done in this PR: - Unit/integration tests need to be added - On the decommissioned node's UI, the status should represent its state. Currently, a node that is being decommissioned will show as "Disconnected" on the node's UI. - Upgrading FontAwesome from 4.6.1 to 4.7 to use an icon for the "Decommission" action other than fa-sun-o, most likely window-close-o - There are various TODO markers in the code, where further testing needs to be done. Also, some follow-on JIRAs will be created based off of some of the TODOs. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5585 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3010.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3010 commit fcc0fed53df4a65ce1e7275d67865d47b61c3bc1 Author: Mark Payne Date: 2018-06-14T15:57:21Z Refactoring StandardFlowFileQueue to have an AbstractFlowFileQueue Refactored more into AbstractFlowFileQueue Added documentation, cleaned up code some Refactored FlowFileQueue so that there is SwappablePriorityQueue Several unit tests written Added REST API Endpoint to allow PUT to update connection to use load balancing or not. When enabling load balancing, though, I saw the queue size go from 9 to 18. Then was only able to process 9 FlowFiles. Bug fixes Code refactoring Added integration tests, bug fixes Refactored clients to use NIO Bug fixes. Appears to finally be working with NIO Client! commit 85606cbc200c49a0590c6979aa438addc42f8266 Author: Mark Payne Date: 2018-07-27T16:40:14Z NIFI-5516: Refactored some code from NioAsyncLoadBalanceClient to LoadBalanceSession Bug fixes and allowed load balancing socket connections to be reused Implemented ability to compress Nothing, Attributes, or Content + Attributes when performing load-balancing Added flag to ConnectionDTO to indicate Load Balance Status Updated Diagnostics DTO for connections Store state about cluster topology in NodeClusterCoordinator so that the state is known upon restart Code cleanup Fixed checkstyle and unit tests commit d5d9a8ffedf080cfdfb6d8c0c83d498fd3c25022 Author: Mark Payne Date: 2018-09-06T13:09:08Z NIFI-5516: Updating logic for Cluster Node Firewall so that the node's identity comes from its certificate, not from whatever it says it is. 
commit d5a3286252410e8ed2085ba2521e8f0053290bc9 Author: Mark Payne Date: 2018-09-10T21:06:05Z NIFI-5516: FIxed missing License headers commit 5379a7cd2fe5620faf51cf8c95c8e6d78cc7a982 Author: Jeff Storck Date: 2018-09-18T21:09:13Z NIFI-5585 Added capability to decommission a node that is disconnected from the cluster. ---
[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2947#discussion_r218443074 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/RoundRobinPartitioner.java --- @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.nifi.controller.queue.clustered.partition; + +import org.apache.nifi.controller.repository.FlowFileRecord; + +import java.util.concurrent.atomic.AtomicLong; + +public class RoundRobinPartitioner implements FlowFilePartitioner { +private final AtomicLong counter = new AtomicLong(0L); + +@Override +public QueuePartition getPartition(final FlowFileRecord flowFile, final QueuePartition[] partitions, final QueuePartition localPartition) { +final long count = counter.getAndIncrement(); --- End diff -- I think it was Bill Gates that said we'd never need more than 640KB of RAM. :) ---
[GitHub] nifi pull request #2947: [WIP] NIFI-5516: Implement Load-Balanced Connection...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2947#discussion_r218223624 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/queue/clustered/partition/RoundRobinPartitioner.java --- @@ -0,0 +1,44 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.nifi.controller.queue.clustered.partition; + +import org.apache.nifi.controller.repository.FlowFileRecord; + +import java.util.concurrent.atomic.AtomicLong; + +public class RoundRobinPartitioner implements FlowFilePartitioner { +private final AtomicLong counter = new AtomicLong(0L); + +@Override +public QueuePartition getPartition(final FlowFileRecord flowFile, final QueuePartition[] partitions, final QueuePartition localPartition) { +final long count = counter.getAndIncrement(); --- End diff -- This counter should probably wrap back to 0 once it reaches max long? ---
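To make the overflow concern concrete: `AtomicLong.getAndIncrement()` wraps silently from `Long.MAX_VALUE` to `Long.MIN_VALUE`, and Java's `%` operator then yields a negative partition index. A sketch (not the PR's code) comparing the naive remainder with `Math.floorMod`:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustration of the wrap-around concern raised in the review (not the PR's
// actual code): getAndIncrement() overflows Long.MAX_VALUE to Long.MIN_VALUE,
// and the % operator then yields a negative index, while Math.floorMod stays
// in [0, partitionCount) for any long input.
public class RoundRobinIndexDemo {
    // Start near the overflow boundary to show the wrap quickly.
    private final AtomicLong counter = new AtomicLong(Long.MAX_VALUE - 1);

    public int nextIndex(final int partitionCount) {
        return safeIndex(counter.getAndIncrement(), partitionCount);
    }

    public static int naiveIndex(final long count, final int partitionCount) {
        return (int) (count % partitionCount); // negative once count wraps
    }

    public static int safeIndex(final long count, final int partitionCount) {
        return (int) Math.floorMod(count, (long) partitionCount); // always non-negative
    }
}
```

So an explicit wrap back to 0 isn't strictly required if the partition index is derived with `floorMod`; the counter itself can be allowed to overflow.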
[GitHub] nifi issue #2947: [WIP] NIFI-5516: Implement Load-Balanced Connections
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2947 @markap14 I created [NIFI-5600](https://issues.apache.org/jira/browse/NIFI-5600) to add node information to the flowfile queue display. ---
[GitHub] nifi issue #2947: [WIP] NIFI-5516: Implement Load-Balanced Connections
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2947 @markap14 When listing a queue that is load balanced between two nodes in a cluster, I'm seeing duplicate "Position" IDs. I gather they are unique per node and being represented in the UI as they are reported from each node. Unique Position IDs should be provided, right? I don't think we'd necessarily want to add a column for which node the flowfile is on, but that's an additional option. https://user-images.githubusercontent.com/19271493/45577899-5b161800-b84c-11e8-9928-b78e6ac7a8fa.png ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r216031690 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -269,13 +272,15 @@ public Object run() { } changeOwner(context, hdfs, configuredRootDirPath, flowFile); } catch (IOException e) { -if (!Strings.isNullOrEmpty(e.getMessage()) && e.getMessage().contains(String.format("Couldn't setup connection for %s", ugi.getUserName( { - getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flowfile %s", - flowFile.getAttribute(CoreAttributes.UUID.key(; - session.rollback(true); -} else { - throw e; -} + boolean tgtExpired = hasCause(e, GSSException.class, gsse -> "Failed to find any Kerberos tgt".equals(gsse.getMinorString())); + if (tgtExpired) { +getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flow file %s", + putFlowFile.getAttribute(CoreAttributes.UUID.key(; +session.rollback(true); + } else { +getLogger().error("Failed to access HDFS due to {}", new Object[]{e}); +session.transfer(session.penalize(putFlowFile), REL_FAILURE); --- End diff -- @ekovacs I don't think we need to penalize on the transfer to failure here. ---
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r216037639 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -269,13 +272,15 @@ public Object run() { } changeOwner(context, hdfs, configuredRootDirPath, flowFile); } catch (IOException e) { -if (!Strings.isNullOrEmpty(e.getMessage()) && e.getMessage().contains(String.format("Couldn't setup connection for %s", ugi.getUserName( { - getLogger().error(String.format("An error occured while connecting to HDFS. Rolling back session, and penalizing flowfile %s", - flowFile.getAttribute(CoreAttributes.UUID.key(; - session.rollback(true); -} else { - throw e; -} + boolean tgtExpired = hasCause(e, GSSException.class, gsse -> "Failed to find any Kerberos tgt".equals(gsse.getMinorString())); --- End diff -- @ekovacs After seeing the use of getMinorString here, I looked at GSSException, and it looks like there are some error codes that could be used to detect the actual cause, rather than string matching. Are getMajor and getMinor returning ints when these exceptions happen? ---
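For reference, `GSSException.getMajor()` and `getMinor()` both return `int`, and the major codes are `int` constants on `org.ietf.jgss.GSSException` in the JDK. A sketch of code-based detection while walking the cause chain; the assumption that an expired ticket surfaces as `NO_CRED` would need to be confirmed against a real expiration:

```java
import org.ietf.jgss.GSSException;

// Sketch of code-based detection instead of message-string matching. The
// mapping of an expired ticket to GSSException.NO_CRED is an assumption to
// verify against a real expiration; the cause-chain walk itself is generic.
public class GssCauseCheck {
    public static boolean hasGssMajorCause(final Throwable t, final int majorCode) {
        for (Throwable cause = t; cause != null; cause = cause.getCause()) {
            if (cause instanceof GSSException
                    && ((GSSException) cause).getMajor() == majorCode) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        final Throwable wrapped =
                new java.io.IOException(new GSSException(GSSException.NO_CRED));
        System.out.println(hasGssMajorCause(wrapped, GSSException.NO_CRED)); // true
    }
}
```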
[GitHub] nifi issue #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PAT...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2988 @nalewis We'd have to investigate if the cause for the TestMinimalLockingWriteAheadLog.testRecoverFileThatHasTrailingNULBytesAndTruncation test failing is the same as what's documented in [NIFI-5344](https://issues.apache.org/jira/browse/NIFI-5344) or if it's a new issue. ---
[GitHub] nifi issue #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PAT...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2988 Regarding this PR, on my Windows 7 desktop, the NiFiGroovyTest in nifi-runtime passes: ``` [INFO] --- [INFO] T E S T S [INFO] --- [INFO] Running org.apache.nifi.NiFiGroovyTest [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.103 s - in org.apache.nifi.NiFiGroovyTest [INFO] [INFO] Results: [INFO] [INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0 [INFO] [INFO] [INFO] BUILD SUCCESS [INFO] ``` Given that the changes made in this PR are only in the nifi-runtime module, **I think this PR can be merged**. The full build, skipping tests and no contrib-check (`mvn clean install -T C2 -DskipTests`) is successful. However, the full build with contrib-check on, and tests skipped (`mvn clean install -T C2 -Pcontrib-check`), or running Apache RAT exclusively (`mvn apache-rat:check`) fails the RAT check in nifi-poi-processors. ``` Files with unapproved licenses: C:/Users/Jeff/dev/git-repos/nifi/nifi-nar-bundles/nifi-poi-bundle/nifi-poi-processors/src/test/resources/Unsupported.xls ``` Also, some tests are failing when I'm running the full build. For instance, in the nifi-file-authorizer module, some FileAuthorizer tests fail, but they pass if I run those tests by invoking the tests in nifi-file-authorizer specifically (from the nifi-file-authorizer module, running `mvn test -Dtest=FileAuthorizerTest`). There are also Toolkit CLI tests that are failing: ``` [ERROR] testWhenAllDescriptionsAreEmpty(org.apache.nifi.toolkit.cli.impl.result.writer.TestDynamicTableWriter) Time elapsed: 0 s <<< FAILURE! 
org.junit.ComparisonFailure: expected:<[ # Name IdDescription - --- --- 1 Bucket 1 12345-12345-12345-12345-12345-12345 (empty) 2 Bucket 2 12345-12345-12345-12345-12345-12345 (empty) ] > but was:<[ # Name IdDescription - --- --- 1 Bucket 1 12345-12345-12345-12345-12345-12345 (empty) 2 Bucket 2 12345-12345-12345-12345-12345-12345 (empty) ] > ``` The output looks the same to the eye, but the actual value tested against the expected value fails because the EOL characters are different. In TestDynamicTableWriter.java, the expected values in the tests use "\n" for EOL, and if they are changed to "\r\n", the test passes on Windows. JIRAs will have to be filed to go through the tests and update them to use System.lineSeparator() instead of explicitly using "\n". @nalewis [PR 2819](https://github.com/apache/nifi/pull/2819) was merged to master on June 28, 2018, and this PR is branched off of master from yesterday. Those changes are already incorporated. What version of Windows are you running? ---
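An alternative to rewriting every expected string per platform is to normalize line endings on both sides before asserting. A small sketch of the idea (not the toolkit's test code); `System.lineSeparator()` is the portable way to produce the platform's EOL when building expected values:

```java
// Sketch of the EOL mismatch described above (not the toolkit's test code):
// "\n"-based expectations fail on Windows, where the writer emits "\r\n".
// Normalizing both sides before comparison makes the assertion portable.
public class EolDemo {
    public static String normalizeEol(final String s) {
        return s.replace("\r\n", "\n");
    }

    public static void main(String[] args) {
        final String expected = "header\nrow 1\n";
        final String windowsActual = "header\r\nrow 1\r\n";
        System.out.println(expected.equals(windowsActual));                             // false
        System.out.println(normalizeEol(expected).equals(normalizeEol(windowsActual))); // true
    }
}
```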
[GitHub] nifi issue #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PAT...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2988 I created a Windows 10 VM via VirtualBox on my Mac. With a fresh install of: - java ``` java version "1.8.0_181" Java(TM) SE Runtime Environment (build 1.8.0_181-b13) Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode) ``` - maven ``` Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T11:33:14-07:00) Maven home: C:\Users\Jeff\dev\apache-maven-3.5.4 Java version: 1.8.0_181, vendor: Oracle Corporation, runtime: C:\Program Files\Java\jre1.8.0_181 Default locale: en_US, platform encoding: Cp1252 OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows" ``` I'm running into a build issue, with some failing tests: ``` [ERROR] TestStandardProcessSession.testBatchQueuedHaveSameQueuedTime:1690 Queued times should not be equal.. Actual: 1536116092889 ``` and quite a few `TestFileSystemRepository` tests due to issues in setup/shutdown like the following: ``` [ERROR] org.apache.nifi.controller.repository.TestFileSystemRepository.testBogusFile(org.apache.nifi.controller.repository.TestFileSystemRepository) [ERROR] Run 1: TestFileSystemRepository.setup:84 Unable to delete target\content_repository\1\1536116075082-1 expected null, but was: [ERROR] Run 2: TestFileSystemRepository.shutdown:94 NullPointer ``` This is probably due to something related to running Windows in a VM and how I created it and/or the virtual disk. Tomorrow morning I'll try building on my Windows desktop to see if the same failures occur. ---
[GitHub] nifi issue #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PAT...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2988 I'll be testing this on my Windows machine tonight, but wanted to get the PR up ASAP for those that want to test this. ---
[GitHub] nifi pull request #2988: NIFI-5574 Removed usage of Paths.get() due to TEST_...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2988 NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PATH being resolved to a string from a URI, which results in platform-specific path information (C:\) when tests are run on Windows. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5574 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2988.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2988 commit c1a0ddca321a4ee7ae61ff853a9610d679ce28cc Author: Jeff Storck Date: 2018-09-04T21:36:20Z NIFI-5574 Removed usage of Paths.get() due to TEST_RES_PATH being resolved to a string from a URI, which results in platform-specific path information (C:\) when tests are run on Windows. ---
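The failure mode the commit addresses can be sketched as follows: stringifying a resource URI yields a form like `/C:/Users/...` on Windows, which `Paths.get(String)` can reject there, while `Paths.get(URI)` handles it on any platform. A minimal illustration with a hypothetical `file:` URI:

```java
import java.net.URI;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustration of the commit's rationale (hypothetical URI, not NiFi code):
// converting a resource URI to a String via getPath() carries platform-
// specific detail ("/C:/Users/..." on Windows) that Paths.get(String) may
// reject there, whereas building the Path directly from the URI is portable.
public class UriPathDemo {
    public static Path fromUri(final URI uri) {
        return Paths.get(uri); // no lossy URI -> String -> Path round trip
    }

    public static void main(String[] args) {
        final URI uri = URI.create("file:///tmp/test-resources/data.json");
        System.out.println(fromUri(uri)); // e.g. /tmp/test-resources/data.json on POSIX
    }
}
```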
[GitHub] nifi pull request #2971: NIFI-5557: handling expired ticket by rollback and ...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2971#discussion_r214136451 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/PutHDFS.java --- @@ -266,6 +268,13 @@ public Object run() { throw new IOException(configuredRootDirPath.toString() + " could not be created"); } changeOwner(context, hdfs, configuredRootDirPath, flowFile); +} catch (IOException e) { +if (!Strings.isNullOrEmpty(e.getMessage()) && e.getMessage().contains(String.format("Couldn't setup connection for %s", ugi.getUserName( { --- End diff -- @ekovacs I think we should be more selective in this check. I don't think there's a better way to detect this error scenario than string matching at this point, but the exception stack should be inspected to see if you can find the GSSException as the root cause: `Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt) ` If you iterate through the causes when PutHDFS encounters an IOException, and see that GSSException, we can do a penalize with a session rollback. Otherwise, we'd want to pass the flowfile to the failure relationship. ---
[GitHub] nifi issue #2971: NIFI-5557: handling expired ticket by rollback and penaliz...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2971 Reviewing... ---
[GitHub] nifi issue #2947: [WIP] NIFI-5516: Implement Load-Balanced Connections
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2947 Reviewing this WIP PR to help get it ready to merge to master when NiFi Registry is released. ---
[GitHub] nifi issue #2937: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2937 @ottobackwards @zenfenan I added additional details to give some examples on how to use the filters. Please let me know if you think more detail is needed. I appreciate the review! ---
[GitHub] nifi issue #2937: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2937 @ottobackwards @zenfenan I'll break out my rusty HTML skills and try to write up some extra documentation with examples/usecases. Hopefully in an hour or two I'll have an update for the PR. Thanks for the reviews! ---
[GitHub] nifi issue #2937: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2937 @ottobackwards Regarding documentation for the filter modes, descriptions have been created for the allowable values. Do these descriptions not seem adequate for the functionality of each mode? ---
[GitHub] nifi pull request #2937: NIFI-4434 Fixed recursive listing with a custom reg...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2937#discussion_r208072541 --- Diff: nifi-nar-bundles/nifi-hadoop-bundle/nifi-hdfs-processors/src/main/java/org/apache/nifi/processors/hadoop/ListHDFS.java --- @@ -462,11 +523,15 @@ private String getPerms(final FsAction action) { private PathFilter createPathFilter(final ProcessContext context) { final Pattern filePattern = Pattern.compile(context.getProperty(FILE_FILTER).getValue()); --- End diff -- @ottobackwards The FILE_FILTER property does not currently support expression language. The processor could be updated to enable EL for the property, but that is outside the scope of this PR. ---
[GitHub] nifi issue #2937: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2937 [PR 2930](https://github.com/apache/nifi/pull/2930) was closed due to the branch in my fork being removed before adding the new filter-mode-based changes. @bbende @ottobackwards, this PR implements the use cases discussed in the previous PR: - filename only - filename and directory name - full path ---
[GitHub] nifi pull request #2937: NIFI-4434 Fixed recursive listing with a custom reg...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2937 NIFI-4434 Fixed recursive listing with a custom regex filter. Filter modes are now supported to perform listings based on directory and file names, file-names only, and full path. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-4434 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2937.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2937 commit 6f525bf2b84603f10fd52141e7bff6af68c61f6f Author: Jeff Storck Date: 2018-08-01T17:13:40Z NIFI-4434 Fixed recursive listing with a custom regex filter. Filter modes are now supported to perform listings based on directory and file names, file-names only, and full path. ---
[GitHub] nifi issue #2930: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2930 Accidentally deleted my remote branch while updating this PR. It doesn't look like I can reopen this PR. I'll create another PR with the updated code. ---
[GitHub] nifi pull request #2930: NIFI-4434 Fixed recursive listing with a custom reg...
Github user jtstorck closed the pull request at: https://github.com/apache/nifi/pull/2930 ---
[GitHub] nifi issue #2930: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2930 I'll update the PR so that the three modes are available, and make sure the default mode keeps the current behavior. I'll add tests for the two new modes. Thanks for the input, @bbende and @ottobackwards! ---
[GitHub] nifi issue #2930: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2930 @ottobackwards That's a good combination of solutions! We need to decide if we want to put the full weight of complex regex on the user, or if it's simpler all-around by toggling between file-only or file-and-directory mode. I could see that some power-users may want the filter to be applied to the full path to be able to match only certain subdirectory trees. No reason we can't offer three modes: - filename only - filename and directory name - full path ---
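The three modes can be sketched as follows. This is only an illustration of the proposed semantics, not ListHDFS code, and the names here are hypothetical:

```java
import java.util.regex.Pattern;

// Hypothetical sketch of the three proposed filter modes (not ListHDFS code):
// the same user-supplied regex is applied to the filename only, to every
// path segment (directories and filename), or to the full path string.
public class FilterModeDemo {
    public enum Mode { FILENAME_ONLY, DIRECTORIES_AND_FILES, FULL_PATH }

    public static boolean accept(final String fullPath, final Pattern regex, final Mode mode) {
        final String[] segments = fullPath.split("/");
        final String filename = segments[segments.length - 1];
        switch (mode) {
            case FILENAME_ONLY:
                return regex.matcher(filename).matches();
            case DIRECTORIES_AND_FILES:
                // every directory segment and the filename must match
                for (final String segment : segments) {
                    if (!segment.isEmpty() && !regex.matcher(segment).matches()) {
                        return false;
                    }
                }
                return true;
            default: // FULL_PATH: lets power users target whole subdirectory trees
                return regex.matcher(fullPath).matches();
        }
    }
}
```

With a filter of `.*\.txt`, filename-only and full-path modes would both list `/data/in/report.txt`, while directories-and-files mode would reject it because `data` and `in` don't match the regex.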
[GitHub] nifi issue #2930: NIFI-4434 Fixed recursive listing with a custom regex filt...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2930 @bbende @joewitt This PR changes the default behavior of how the filter is applied during a listing, which might require manual migration efforts for some users. We could add a property to be able to toggle the application of the filter to directory and file names or filenames only, with the default being directory and file names. This solution would preserve the current behavior, and allow users to "opt-in" to having recursive listings retrieve all files regardless of directory names. There would not be an issue with migration for current users that depend on the current behavior. We could also go down the route of allowing the filter to be applied to the entire path. That gives the user maximum flexibility on how the filter would work, but requires more regex knowledge and is potentially harder for users to write the filter they want. This would also require manual migration, but it might be the best long-term solution. The tooltip on the filter property could be updated to have an example regex that would provide the default functionality that users could use as a starting point for custom filters. Any thoughts on either of these solutions? ---
[GitHub] nifi pull request #2930: NIFI-4434 Fixed recursive listing with a custom reg...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2930 NIFI-4434 Fixed recursive listing with a custom regex filter. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-4434 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2930.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2930 commit 26957eba6701d7d96e0a891031ddb05c4ff7598c Author: Jeff Storck Date: 2018-08-01T17:13:40Z NIFI-4434 Fixed recursive listing with a custom regex filter. ---
[GitHub] nifi issue #2884: NIFI-3993 Updated the ZooKeeper version to 3.4.10
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2884 @HorizonNet I'm not familiar with the issue described in NIFI-3993, or what scenarios cause it to happen. I did read [the referenced JIRA](https://issues.apache.org/jira/browse/ZOOKEEPER-2044), which states that it's a cosmetic error, so we'd need to investigate further to determine its impact on NiFi. ---
[GitHub] nifi issue #2884: NIFI-3993 Updated the ZooKeeper version to 3.4.10
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2884 @joewitt I brought up the upgrade a while ago, but many of the services/components we integrate with are using ZooKeeper 3.4.6. Originally I wanted to use a newer version that did not explicitly depend on log4j, because it was causing some issues with the NiFi Toolkit. Since we were able to work around the dependency issue, we didn't upgrade the version. ---
[GitHub] nifi issue #2821: NIFI-5341 Enabled groovy tests in nifi-runtime
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2821 @joewitt @alopresto This PR modifies code that was worked on by the two of you. I think there will be some discussion of the fixes, but I wanted to get a PR up ASAP to start the review process. Of interest, there are some exception-handling changes in NiFi.java. NiFiGroovyTest needs to be able to invoke the main method, and since it's a unit test, lib/bootstrap isn't available. Creating a temp directory for the test didn't work out very well, due to varying working directories between Surefire and the IDE, so I thought this would be an acceptable approach. ---
[GitHub] nifi pull request #2821: NIFI-5341 Enabled groovy tests in nifi-runtime
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2821 NIFI-5341 Enabled groovy tests in nifi-runtime Fixed tests in NiFiGroovyTest in the nifi-runtime module Updated NiFi.createBootstrapClassLoader to log a warning if lib/bootstrap does not exist rather than throwing a FileNotFoundException, since it already catches MalformedURLException if there's an issue adding one of the bootstrap JARs to the bootstrap classpath Explicitly handling InvocationTargetException in NiFi.initializeProperties to unwrap the cause and rewrap as an IllegalArgumentException to propagate the real cause of the underlying exception thrown by NiFiPropertiesLoader Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X] Is your initial contribution a single, squashed commit? ### For code changes: - [X] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [X] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? 
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5341 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2821.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2821 commit 174e213dc83cdbfa9e449c0b3b57f05673bc2167 Author: Jeff Storck Date: 2018-06-27T02:33:19Z NIFI-5341 Enabled groovy tests in nifi-runtime Fixed tests in NiFiGroovyTest in the nifi-runtime module Updated NiFi.createBootstrapClassLoader to log a warning if lib/bootstrap does not exist rather than throwing a FileNotFoundException, since it already catches MalformedURLException if there's an issue adding one of the bootstrap JARs to the bootstrap classpath Explicitly handling InvocationTargetException in NiFi.initializeProperties to unwrap the cause and rewrap as an IllegalArgumentException to propagate the real cause of the underlying exception thrown by NiFiPropertiesLoader ---
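The unwrap-and-rewrap pattern described in the commit message can be sketched as follows. This is a minimal, self-contained illustration of the technique, not NiFi's actual code; the class, method, and message strings here are hypothetical:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class UnwrapExample {

    // Stands in for the reflectively-invoked loader method that fails.
    public static void throwingTarget() {
        throw new IllegalStateException("real cause from the loader");
    }

    // Invokes the method reflectively. InvocationTargetException is only a
    // reflection wrapper, so we unwrap getCause() and rewrap it as an
    // IllegalArgumentException, preserving the underlying failure for callers.
    static void invokeAndUnwrap(Method method) {
        try {
            method.invoke(null);
        } catch (InvocationTargetException e) {
            throw new IllegalArgumentException("Invoked method failed", e.getCause());
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        Method m = UnwrapExample.class.getMethod("throwingTarget");
        try {
            invokeAndUnwrap(m);
        } catch (IllegalArgumentException e) {
            // The original exception is now the cause, not buried one level deeper.
            System.out.println("cause: " + e.getCause().getMessage()); // prints "cause: real cause from the loader"
        }
    }
}
```

Without the unwrap, callers would see only the generic InvocationTargetException and would have to dig for the real failure themselves.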
[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2747#discussion_r195304620 --- Diff: nifi-docker/dockermaven/Dockerfile --- @@ -26,23 +26,33 @@ ARG NIFI_BINARY ENV NIFI_BASE_DIR /opt/nifi ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION - -# Setup NiFi user -RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: -f1` \ -&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \ -&& mkdir -p $NIFI_HOME/conf/templates \ -&& chown -R nifi:nifi $NIFI_BASE_DIR +ENV NIFI_PID_DIR=${NIFI_HOME}/run +ENV NIFI_LOG_DIR=${NIFI_HOME}/logs ADD $NIFI_BINARY $NIFI_BASE_DIR -RUN chown -R nifi:nifi $NIFI_HOME +# Setup NiFi user and create necessary directories +RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \ +&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \ +&& mkdir -p ${NIFI_HOME}/conf/templates \ +&& mkdir -p $NIFI_BASE_DIR/data \ +&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \ +&& mkdir -p $NIFI_BASE_DIR/content_repository \ +&& mkdir -p $NIFI_BASE_DIR/provenance_repository \ +&& mkdir -p $NIFI_LOG_DIR \ +&& chown -R nifi:nifi ${NIFI_BASE_DIR} \ +&& apt-get update \ +&& apt-get install -y jq xmlstarlet procps USER nifi -# Web HTTP Port & Remote Site-to-Site Ports -EXPOSE 8080 8181 +# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile +RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh + +# Web HTTP(s) & Socket Site-to-Site Ports +EXPOSE 8080 8443 1 -WORKDIR $NIFI_HOME +WORKDIR ${NIFI_HOME} # Startup NiFi ENTRYPOINT ["bin/nifi.sh"] -CMD ["run"] +CMD ["run"] --- End diff -- I tried to use Ctrl-C after NiFi was successfully up and running to kill the container. I had to open a new shell and use docker kill to bring it down. Not a big deal, I still think that's due to docker and having started the container in a non-interactive and non-detached method. 
For the logging, I'm not sure if there's a reason why the two docker modules have different wrapper scripts. You could check with @apiri, but most likely it'd be good to bring them in line. It doesn't have to be done in this PR, though it'd be nice to get this into the NiFi 1.7.0 release if no one disagrees with it. ---
[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2747#discussion_r195240352 --- Diff: nifi-docker/dockermaven/integration-test.sh --- @@ -0,0 +1,35 @@ +#!/bin/bash --- End diff -- This script is missing the license, can you please add it? ---
[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2747#discussion_r195262129 --- Diff: nifi-docker/dockermaven/Dockerfile --- @@ -26,23 +26,33 @@ ARG NIFI_BINARY ENV NIFI_BASE_DIR /opt/nifi ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION - -# Setup NiFi user -RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: -f1` \ -&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \ -&& mkdir -p $NIFI_HOME/conf/templates \ -&& chown -R nifi:nifi $NIFI_BASE_DIR +ENV NIFI_PID_DIR=${NIFI_HOME}/run +ENV NIFI_LOG_DIR=${NIFI_HOME}/logs ADD $NIFI_BINARY $NIFI_BASE_DIR -RUN chown -R nifi:nifi $NIFI_HOME +# Setup NiFi user and create necessary directories +RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \ +&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \ +&& mkdir -p ${NIFI_HOME}/conf/templates \ +&& mkdir -p $NIFI_BASE_DIR/data \ +&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \ +&& mkdir -p $NIFI_BASE_DIR/content_repository \ +&& mkdir -p $NIFI_BASE_DIR/provenance_repository \ +&& mkdir -p $NIFI_LOG_DIR \ +&& chown -R nifi:nifi ${NIFI_BASE_DIR} \ +&& apt-get update \ +&& apt-get install -y jq xmlstarlet procps USER nifi -# Web HTTP Port & Remote Site-to-Site Ports -EXPOSE 8080 8181 +# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile +RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh + +# Web HTTP(s) & Socket Site-to-Site Ports +EXPOSE 8080 8443 1 -WORKDIR $NIFI_HOME +WORKDIR ${NIFI_HOME} # Startup NiFi ENTRYPOINT ["bin/nifi.sh"] -CMD ["run"] +CMD ["run"] --- End diff -- Creating a container with: `docker run -p 8080:8080 apache/nifi:1.7.0-SNAPSHOT-dockermaven` results in NiFi starting successfully, but I'm unable to control-c out of the container. I'm not a docker expert, but I would expect that hitting control-c would kill the container. 
Although, since I didn't run it in interactive mode, this is probably Docker behavior rather than something specific to this Dockerfile. Starting the container with: `docker run -d -p 8080:8080 apache/nifi:1.7.0-SNAPSHOT-dockermaven` and then issuing: `docker logs ` I see the nifi-bootstrap output, but not the nifi-app.log and nifi-user.log output. Would it be preferable to have this behavior? ---
[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2747#discussion_r192208160 --- Diff: nifi-docker/dockerhub/Dockerfile --- @@ -25,28 +25,37 @@ ARG GID=1000 ARG NIFI_VERSION=1.7.0 ARG MIRROR=https://archive.apache.org/dist -ENV NIFI_BASE_DIR /opt/nifi +ENV NIFI_BASE_DIR /opt/nifi ENV NIFI_HOME=${NIFI_BASE_DIR}/nifi-${NIFI_VERSION} \ NIFI_BINARY_URL=/nifi/${NIFI_VERSION}/nifi-${NIFI_VERSION}-bin.tar.gz +ENV NIFI_PID_DIR=${NIFI_HOME}/run +ENV NIFI_LOG_DIR=${NIFI_HOME}/logs ADD sh/ /opt/nifi/scripts/ -# Setup NiFi user +# Setup NiFi user and create necessary directories RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \ && useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \ && mkdir -p ${NIFI_HOME}/conf/templates \ +&& mkdir -p $NIFI_BASE_DIR/data \ +&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \ +&& mkdir -p $NIFI_BASE_DIR/content_repository \ +&& mkdir -p $NIFI_BASE_DIR/provenance_repository \ +&& mkdir -p $NIFI_LOG_DIR \ && chown -R nifi:nifi ${NIFI_BASE_DIR} \ && apt-get update \ -&& apt-get install -y jq xmlstarlet +&& apt-get install -y jq xmlstarlet procps USER nifi # Download, validate, and expand Apache NiFi binary. RUN curl -fSL ${MIRROR}/${NIFI_BINARY_URL} -o ${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz \ -&& echo "$(curl https://archive.apache.org/dist/${NIFI_BINARY_URL}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \ +&& echo "$(curl ${MIRROR}/${NIFI_BINARY_URL}.sha256) *${NIFI_BASE_DIR}/nifi-${NIFI_VERSION}-bin.tar.gz" | sha256sum -c - \ --- End diff -- @pepov I would suggest removing the MIRROR build arg from this line and reverting back to the apache archive, since from what @apiri has told me, only the Apache archive will host the SHA files to verify the archive. A mirror will not contain those. Also, there's a caveat with using a mirror. 
If the version being built has already been removed/rolled off from the mirror (a mirror typically hosts only the current and previous releases), the build will fail. ---
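The point about fetching the `.sha256` from archive.apache.org rather than a mirror is that the digest must come from a trusted source for the comparison to mean anything. The check itself, done by `sha256sum -c` in the Dockerfile, is equivalent to the following generic JDK sketch (a known-answer illustration with `MessageDigest`, not the Dockerfile's actual logic):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class Sha256Check {

    // Hex-encodes a SHA-256 digest, mirroring what sha256sum prints.
    static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Known-answer test vector: SHA-256("abc"). In the Dockerfile, the
        // expected value comes from the .sha256 file on archive.apache.org
        // and the data is the downloaded nifi-*-bin.tar.gz.
        String expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad";
        String actual = sha256Hex("abc".getBytes(StandardCharsets.UTF_8));
        System.out.println(expected.equals(actual) ? "OK" : "FAILED"); // prints "OK"
    }
}
```

If the mirror served both the tarball and its digest, a compromised or stale mirror could satisfy the check trivially, which is why only the Apache archive should supply the expected hash.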
[GitHub] nifi issue #2746: NIFI-5247 NiFi toolkit signal handling changes, Dockerfile...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2746 Contrib build passes, and I was able to reproduce your example usages of invoking the toolkit and observing the exit codes. +1, merging to master. Thanks for your contribution, @pepov! ---
[GitHub] nifi pull request #2746: NIFI-5247 NiFi toolkit signal handling changes, Doc...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2746#discussion_r192240524 --- Diff: nifi-toolkit/nifi-toolkit-assembly/docker/tests/exit-codes.sh --- @@ -0,0 +1,35 @@ +#!/bin/bash --- End diff -- This file needs to be added to the excludes for the apache-rat-plugin in nifi-toolkit-assembly. ---
[GitHub] nifi pull request #2746: NIFI-5247 NiFi toolkit signal handling changes, Doc...
Github user jtstorck commented on a diff in the pull request: https://github.com/apache/nifi/pull/2746#discussion_r192240688 --- Diff: nifi-toolkit/nifi-toolkit-assembly/docker/tests/tls-toolkit.sh --- @@ -0,0 +1,17 @@ +#!/bin/bash --- End diff -- This file needs to be added to the excludes for the apache-rat-plugin in nifi-toolkit-assembly. ---
[GitHub] nifi pull request #2708: NIFI-5175 Updated NiFi compiled on Java 1.8 to run ...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2708 NIFI-5175 Updated NiFi compiled on Java 1.8 to run on Java 9 The bootstrap process (RunNiFi) detects Java 9 and adds "--add-modules=java.xml.bind" to the command to start NiFi Updated OSUtils to detect Java 9 and reflectively invoke the Process.pid() method to get the PID of the NiFi process Added java debug variable to nifi.sh to allow debugging of the bootstrap process (RunNiFi) Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5175 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2708.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2708 commit 1c71a1cce7745ed476441230a7e2e18e2a78192a Author: Jeff Storck <jtswork@...> Date: 2018-02-12T20:58:35Z NIFI-5175 Updated NiFi compiled on Java 1.8 to run on Java 9 The bootstrap process (RunNiFi) detects Java 9 and adds "--add-modules=java.xml.bind" to the command to start NiFi Updated OSUtils to detect Java 9 and reflectively invoke the Process.pid() method to get the PID of the NiFi process Added java debug variable to nifi.sh to allow debugging of the bootstrap process (RunNiFi) ---
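The reflective `Process.pid()` lookup mentioned in the PR description can be sketched like this. It is a simplified, hypothetical stand-in for OSUtils, not the actual NiFi code: looking the method up via reflection lets the class compile and load on Java 1.8 (where `Process.pid()` does not exist) while still using it at runtime on Java 9+:

```java
import java.lang.reflect.Method;

public class PidSketch {

    // Returns the child's PID on Java 9+, or null when Process.pid() is
    // unavailable (i.e. running on a Java 8 JVM). A direct call to
    // process.pid() would fail to compile against the Java 8 API.
    static Long tryPid(Process process) {
        try {
            Method pidMethod = Process.class.getMethod("pid");
            return (Long) pidMethod.invoke(process);
        } catch (ReflectiveOperationException e) {
            return null; // pre-Java-9 runtime: no pid() method on Process
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "1").start();
        System.out.println("child pid: " + tryPid(p));
        p.waitFor();
    }
}
```

As the later comment on this PR notes, once the minimum runtime is Java 9+ the reflection can be dropped entirely in favor of calling `pid()` directly.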
[GitHub] nifi issue #2667: NIFI-5134 Explicitly requesting UGI to relogin before atte...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2667 @mattyb149, @markap14, would you please review this PR? ---
[GitHub] nifi pull request #2667: NIFI-5134 Explicitly requesting UGI to relogin befo...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2667 NIFI-5134 Explicitly requesting UGI to relogin before attempting to g… …et a DB connection in HiveConnectionPool Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X] Is your initial contribution a single, squashed commit? ### For code changes: - [X] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-5134 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2667.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2667 commit b0779545526205c475babc23931e92d177ad556d Author: Jeff Storck <jtswork@...> Date: 2018-04-30T14:39:12Z NIFI-5134 Explicitly requesting UGI to relogin before attempting to get a DB connection in HiveConnectionPool ---
[GitHub] nifi issue #2582: NIFI-4923 Updated nifi-hadoop-libraries-nar, nifi-hdfs-pro...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2582 @MikeThomsen I did run against HDP 2.6. Using client configs downloaded from a secure HDP 2.6 cluster, I was able to test Put/List/Fetch/DeleteHDFS, including TDE directories. ---
[GitHub] nifi pull request #2582: NIFI-4923 Updated nifi-hadoop-libraries-nar, nifi-h...
GitHub user jtstorck opened a pull request: https://github.com/apache/nifi/pull/2582 NIFI-4923 Updated nifi-hadoop-libraries-nar, nifi-hdfs-processors, an… …d nifi-hadoop-utils dependency on hadoop-client from 2.7.3 to 3.0.0 Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/jtstorck/nifi NIFI-4923 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2582.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2582 ---
[GitHub] nifi issue #2512: NIFI-4936 pushed down version declarations to lowest appro...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2512 +1 regarding the HDFS component POM changes. After building with this PR, I tested Put/List/DeleteHDFS processors with TDE paths successfully. ---
[GitHub] nifi issue #2475: NIFI-4872 Added annotation for specifying scenarios in whi...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2475 @markap14 I will add descriptions to the annotations for the processors you mentioned. Thanks for the extra info! We can do additional PRs to add more descriptions as needed, but I think the default descriptions are good to at least mark the current processors that might cause resource issues. As @joewitt mentioned, the annotation doesn't need to be used just to convey that a component might use a lot of a particular resource. It can also include descriptions on how to best utilize the resources, or indicate that the component uses very little of a type of resource and can be parallelized to a high degree without degrading system performance. I can agree that currently, there aren't many components that would use the DISK or NETWORK SystemResource type when referring to how a single flowfile would affect them, but there may be in the future. I think it's a good idea to keep all four types in the enumeration. ---
[GitHub] nifi issue #2482: NIFI-4894: Ensuring that any proxy paths are retained when...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/2482 @mcgilman @scottyaslan I was able to reproduce the bug in master while proxying with Knox by creating a HandleHttpRequest processor with a StandardHttpContextMap controller service. Attempting to enable the service through the UI resulted in the 404. After applying the PR to master and restarting, I was able to go through the same steps and successfully enable the StandardHttpContextMap controller service. +1 LGTM! ---