[GitHub] nifi issue #2537: fix printing indefinite log errors
Github user bdesert commented on the issue: https://github.com/apache/nifi/pull/2537 @mattyb149 , can you please review the changes? ---
[GitHub] nifi pull request #2537: fix printing indefinite log errors
GitHub user bdesert opened a pull request: https://github.com/apache/nifi/pull/2537

fix printing indefinite log errors

Added a fix to avoid printing indefinite errors to the logs in customValidate after the first failed validation, until any property is modified.

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:
- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/bdesert/nifi NIFI-4968-ISP-fix

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2537.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2537

commit e105705da30ac72551433ddf2027d90a590ae73a
Author: Ed
Date: 2018-03-13T05:49:04Z

    fix printing indefinite log errors

    After first failure in customValidate, stop printing logs until any property is changed

---
[jira] [Created] (NIFI-4968) InvokeScriptedProcessor Crashing NiFi cluster
Ed Berezitsky created NIFI-4968: --- Summary: InvokeScriptedProcessor Crashing NiFi cluster Key: NIFI-4968 URL: https://issues.apache.org/jira/browse/NIFI-4968 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.1.0 Reporter: Ed Berezitsky Assignee: Ed Berezitsky InvokeScriptedProcessor with the Groovy engine crashes a cluster when the Groovy script doesn't compile. It also prints errors to the log non-stop until the processor is deleted. The bug was found in NiFi 1.1, but is reproducible up to and including 1.5. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
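PR #2537 describes the fix as staying quiet after the first failed validation until a property changes. The guard pattern can be sketched as a small standalone class; all names here (ValidationGuard, doValidate, logCount) are hypothetical illustrations, not the actual InvokeScriptedProcessor code:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class ValidationGuard {
    private List<String> cachedErrors = null;  // non-null => failure already reported
    private int logCount = 0;                  // stands in for logger.error(...) calls

    // Pretend validation that always fails, e.g. a Groovy script that does not compile.
    private List<String> doValidate() {
        List<String> errors = new ArrayList<>();
        errors.add("script does not compile");
        return errors;
    }

    public Collection<String> customValidate() {
        if (cachedErrors != null) {
            return cachedErrors;   // already logged once; return cached results silently
        }
        List<String> errors = doValidate();
        if (!errors.isEmpty()) {
            logCount++;            // log only on the first failure
            cachedErrors = errors;
        }
        return errors;
    }

    // Any property change invalidates the cache, so the next validation logs again.
    public void onPropertyModified() {
        cachedErrors = null;
    }

    public int getLogCount() {
        return logCount;
    }

    public static void main(String[] args) {
        ValidationGuard g = new ValidationGuard();
        g.customValidate();
        g.customValidate();
        g.customValidate();
        System.out.println(g.getLogCount()); // 1: logged once despite three runs
        g.onPropertyModified();
        g.customValidate();
        System.out.println(g.getLogCount()); // 2: logs again after a property change
    }
}
```

Without a guard like this, a timer-driven framework re-validating the processor logs the same compile error on every scheduling pass, which is the runaway logging the ticket reports.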
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396509#comment-16396509 ] ASF GitHub Bot commented on NIFI-4325: -- Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 I'm seeing NPE and FlowFile handling exceptions. I didn't do anything special, just using the example query. I'll attach a template. Errors from the logs below. > 2018-03-12 23:58:11,386 ERROR [Timer-Driven Process Thread-10] o.a.n.p.e.JsonQueryElasticsearch JsonQueryElasticsearch[id=1d5d83d1-0162-1000-cdf2-92fa9e61ef42] Error processing flowfile.: java.lang.NullPointerException java.lang.NullPointerException: null at org.apache.nifi.processors.elasticsearch.JsonQueryElasticsearch.onTrigger(JsonQueryElasticsearch.java:248) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 2018-03-12 23:58:11,389 ERROR [Timer-Driven Process Thread-10] o.a.n.p.e.JsonQueryElasticsearch 
JsonQueryElasticsearch[id=1d5d83d1-0162-1000-cdf2-92fa9e61ef42] JsonQueryElasticsearch[id=1d5d83d1-0162-1000-cdf2-92fa9e61ef42] failed to process due to org.apache.nifi.processor.exception.FlowFileHandlingException: StandardFlowFileRecord[uuid=77ad5fff-d4b6-4e11-8d8c-b07ac1d0a6cb,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520913052434-1, container=default, section=1], offset=30, length=228],offset=226,name=46172028496405,size=2] transfer relationship not specified; rolling back session: {} org.apache.nifi.processor.exception.FlowFileHandlingException: StandardFlowFileRecord[uuid=77ad5fff-d4b6-4e11-8d8c-b07ac1d0a6cb,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520913052434-1, container=default, section=1], offset=30, length=228],offset=226,name=46172028496405,size=2] transfer relationship not specified at org.apache.nifi.controller.repository.StandardProcessSession.checkpoint(StandardProcessSession.java:251) at org.apache.nifi.controller.repository.StandardProcessSession.commit(StandardProcessSession.java:321) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:28) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) > Create a new ElasticSearch processor that supports the JSON DSL > --- > > Key: NIFI-4325 > URL: https://issues.apache.org/jira/browse/NIFI-4325 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Priority: Minor > > The existing
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396511#comment-16396511 ] ASF GitHub Bot commented on NIFI-4325: -- Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 Template for FlowFile handling exception. [FlowFile_Handling_Exception.txt](https://github.com/apache/nifi/files/1805420/FlowFile_Handling_Exception.txt) > Create a new ElasticSearch processor that supports the JSON DSL > --- > > Key: NIFI-4325 > URL: https://issues.apache.org/jira/browse/NIFI-4325 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Priority: Minor > > The existing ElasticSearch processors use the Lucene-style syntax for > querying, not the JSON DSL. A new processor is needed that can take a full > JSON query and execute it. It should also support aggregation queries in this > syntax. A user needs to be able to take a query as-is from Kibana and drop it > into NiFi and have it just run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
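The "transfer relationship not specified" FlowFileHandlingException quoted above reflects a session invariant: every flow file obtained in onTrigger must be transferred to a relationship (or removed) before the session commits. A toy model of that contract, with hypothetical names and no NiFi dependencies:

```java
import java.util.HashSet;
import java.util.Set;

public class SessionContract {
    private final Set<String> outstanding = new HashSet<>();

    // Obtaining a flow file makes the session track it.
    public String get(String flowFileId) {
        outstanding.add(flowFileId);
        return flowFileId;
    }

    // Transferring to a relationship settles the flow file.
    public void transfer(String flowFileId, String relationship) {
        outstanding.remove(flowFileId);
    }

    public void commit() {
        if (!outstanding.isEmpty()) {
            // NiFi raises FlowFileHandlingException and rolls back the session here.
            throw new IllegalStateException(
                "transfer relationship not specified: " + outstanding);
        }
    }

    // Returns true when commit fails because the flow file was never transferred.
    public static boolean commitFails(boolean transferFirst) {
        SessionContract session = new SessionContract();
        String ff = session.get("ff-1");
        if (transferFirst) {
            session.transfer(ff, "success");
        }
        try {
            session.commit();
            return false;
        } catch (IllegalStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(commitFails(false)); // true: forgot to transfer
        System.out.println(commitFails(true));  // false: transferred, commit is clean
    }
}
```

In the stack trace above the check fires inside StandardProcessSession.checkpoint during commit, which is why the error appears only after onTrigger returns: some code path in the processor reached the end of onTrigger without routing the flow file anywhere.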
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396417#comment-16396417 ] ASF subversion and git services commented on NIFI-4885: --- Commit d78d95ad6f31614745bef4c26ba8e9f4c3b27dd2 in nifi's branch refs/heads/master from [~joewitt] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d78d95a ] NIFI-4885 fixing checkstyle issue > More granular restricted component categories > - > > Key: NIFI-4885 > URL: https://issues.apache.org/jira/browse/NIFI-4885 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Core UI >Reporter: Matt Gilman >Assignee: Matt Gilman >Priority: Major > Fix For: 1.6.0 > > > Update the Restricted annotation to support more granular categories. > Available categories will map to new access policies. Example categories and > their corresponding access policies may be > * read-filesystem (/restricted-components/read-filesystem) > * write-filesystem (/restricted-components/write-filesystem) > * code-execution (/restricted-components/code-execution) > * keytab-access (/restricted-components/keytab-access) > The hierarchical nature of the access policies will support backward > compatibility with existing installations where the policy of > /restricted-components was used to enforce all subcategories. Any users with > /restricted-components permissions will be granted access to all > subcategories. In order to leverage the new granular categories, an > administrator will need to use NiFi to update their access policies (remove a > user from /restricted-components and place them into the desired subcategory) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
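The backward-compatible lookup described in NIFI-4885 can be sketched as a path walk: check the exact policy, then each ancestor, so a legacy grant on /restricted-components covers every subcategory. This is a hypothetical helper, not NiFi's actual authorizer API:

```java
import java.util.Set;

public class RestrictedPolicyCheck {
    // A user is authorized for a policy if they hold it directly or hold any ancestor.
    public static boolean isAuthorized(Set<String> granted, String policy) {
        for (String p = policy; !p.isEmpty(); ) {
            if (granted.contains(p)) {
                return true;
            }
            int slash = p.lastIndexOf('/');
            if (slash <= 0) {
                break; // reached the root of the policy hierarchy
            }
            p = p.substring(0, slash); // walk up to the parent policy
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> legacy = Set.of("/restricted-components");
        Set<String> granular = Set.of("/restricted-components/keytab-access");

        // Legacy grant covers all subcategories (backward compatibility).
        System.out.println(isAuthorized(legacy, "/restricted-components/read-filesystem"));   // true
        // Granular grant covers only its own subcategory.
        System.out.println(isAuthorized(granular, "/restricted-components/read-filesystem")); // false
        System.out.println(isAuthorized(granular, "/restricted-components/keytab-access"));   // true
    }
}
```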
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396380#comment-16396380 ] ASF GitHub Bot commented on NIFI-4325: -- Github user JPercivall commented on a diff in the pull request: https://github.com/apache/nifi/pull/2113#discussion_r173996495 --- Diff: nifi-assembly/pom.xml --- @@ -445,6 +445,24 @@ language governing permissions and limitations under the License. --> 1.6.0-SNAPSHOT nar + +org.apache.nifi + nifi-elasticsearch-client-service-api-nar --- End diff -- My build is failing due to this. I don't see this artifact created anywhere. > Create a new ElasticSearch processor that supports the JSON DSL > --- > > Key: NIFI-4325 > URL: https://issues.apache.org/jira/browse/NIFI-4325 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Priority: Minor > > The existing ElasticSearch processors use the Lucene-style syntax for > querying, not the JSON DSL. A new processor is needed that can take a full > JSON query and execute it. It should also support aggregation queries in this > syntax. A user needs to be able to take a query as-is from Kibana and drop it > into NiFi and have it just run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2150: NIFI-3402: Added etag support to InvokeHTTP
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2150 @pvillard31 @m-hogue Think we can close the loop on this one? ---
[jira] [Commented] (NIFI-3402) Add ETag Support to InvokeHTTP
[ https://issues.apache.org/jira/browse/NIFI-3402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396367#comment-16396367 ] ASF GitHub Bot commented on NIFI-3402: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2150 @pvillard31 @m-hogue Think we can close the loop on this one? > Add ETag Support to InvokeHTTP > -- > > Key: NIFI-3402 > URL: https://issues.apache.org/jira/browse/NIFI-3402 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Brandon DeVries >Assignee: Michael Hogue >Priority: Trivial > > Unlike GetHTTP, when running in "source" mode InvokeHTTP doesn't support > ETags. It will pull from a URL as often as it is scheduled to run. When > running with an input relationship, it would potentially make sense to not > use the ETag. But at least in "source" mode, it seems like it should at > least be an option. > To maintain backwards compatibility and support the non-"source" usage, I'd > suggest creating a new "Use ETag" property that defaults to false... -- This message was sent by Atlassian JIRA (v7.6.3#76005)
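The "Use ETag" behavior the ticket asks for amounts to caching the ETag header from each response and replaying it as If-None-Match on the next request, so the server can answer 304 Not Modified. A minimal sketch with hypothetical names, not InvokeHTTP's implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class EtagHeaders {
    private String cachedEtag = null;

    // Build the conditional-request headers for the next poll.
    public Map<String, String> requestHeaders(boolean useEtag) {
        Map<String, String> headers = new HashMap<>();
        if (useEtag && cachedEtag != null) {
            headers.put("If-None-Match", cachedEtag);
        }
        return headers;
    }

    // Call after each response to remember the server's validator.
    public void onResponse(String etagHeader) {
        if (etagHeader != null) {
            cachedEtag = etagHeader;
        }
    }

    public static void main(String[] args) {
        EtagHeaders h = new EtagHeaders();
        System.out.println(h.requestHeaders(true));  // empty: nothing cached yet
        h.onResponse("\"abc123\"");                  // server supplied an ETag
        System.out.println(h.requestHeaders(true));  // now carries If-None-Match
        System.out.println(h.requestHeaders(false)); // empty: property defaults to false
    }
}
```

With the property defaulting to false, existing non-"source" flows keep their current behavior, matching the backward-compatibility suggestion in the ticket.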
[jira] [Updated] (NIFI-4962) FlattenJson processor add unexpected backslash after flatten
[ https://issues.apache.org/jira/browse/NIFI-4962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deon Huang updated NIFI-4962:
-
Description: FlattenJson unexpectedly prefixes each slash with a backslash. This causes a data mismatch after flattening when a JSON value contains a URL or other slash characters. Detailed examples are in the attachments.

Input/output example:
{"col3":["http://localhost:8080/nifi31","http://localhost:8080/nifi32"]}
after flatten:
{"col3":["http:\/\/localhost:8080\/nifi31","http:\/\/localhost:8080\/nifi32"]}

was: FlattenJson unexpectedly prefixes each slash with a backslash. This causes a data mismatch after flattening when a JSON value contains a URL or other slash characters. Detailed examples are in the attachments. Input/output example: {"col3":["http://localhost:8080/nifi31","http://localhost:8080/nifi32"]} after flatten {"col3":["http:\/\/localhost:8080\/nifi31","http:\/\/localhost:8080\/nifi32"]}

> FlattenJson processor add unexpected backslash after flatten
>
> Key: NIFI-4962
> URL: https://issues.apache.org/jira/browse/NIFI-4962
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.5.0
> Reporter: Deon Huang
> Assignee: Deon Huang
> Priority: Minor
> Fix For: 1.6.0
>
> Attachments: 34BFD277-F8FA-4ACE-BE1F-536B0BD6D10A.png, 35E5F7FD-DD79-4556-8BF0-021EEF948520.png, CEECDC79-7CA2-468C-A057-7A3264EB46BB.png
>
> FlattenJson unexpectedly prefixes each slash with a backslash.
> This causes a data mismatch after flattening when a JSON value contains a URL or other slash characters.
> Detailed examples are in the attachments.
>
> Input/output example:
> {"col3":["http://localhost:8080/nifi31","http://localhost:8080/nifi32"]}
> after flatten:
> {"col3":["http:\/\/localhost:8080\/nifi31","http:\/\/localhost:8080\/nifi32"]}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
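The escaped output above is the JSON serializer emitting \/ for every forward slash; a solidus needs no escaping in JSON, so the escaping is legal but surprising to downstream consumers doing string comparison. One possible fix, sketched below, is to post-process the flattened string (the actual NIFI-4962 patch may instead configure the serializer, so treat this as an illustration of the transformation, not the committed change):

```java
public class SlashUnescape {
    // "\/" and "/" denote the same character in JSON, so this rewrite is lossless.
    public static String unescapeSolidus(String json) {
        return json.replace("\\/", "/");
    }

    public static void main(String[] args) {
        String flattened = "{\"col3\":[\"http:\\/\\/localhost:8080\\/nifi31\"]}";
        System.out.println(unescapeSolidus(flattened));
        // {"col3":["http://localhost:8080/nifi31"]}
    }
}
```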
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396358#comment-16396358 ] ASF GitHub Bot commented on NIFI-4325: -- Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 Looks like the master build is failing with some check style issues, hence why your builds failed. I'll work on fixing them. > Create a new ElasticSearch processor that supports the JSON DSL > --- > > Key: NIFI-4325 > URL: https://issues.apache.org/jira/browse/NIFI-4325 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Priority: Minor > > The existing ElasticSearch processors use the Lucene-style syntax for > querying, not the JSON DSL. A new processor is needed that can take a full > JSON query and execute it. It should also support aggregation queries in this > syntax. A user needs to be able to take a query as-is from Kibana and drop it > into NiFi and have it just run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4743) Suppress Nulls for PutElasticsearchHttpRecord
[ https://issues.apache.org/jira/browse/NIFI-4743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396333#comment-16396333 ] ASF GitHub Bot commented on NIFI-4743: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2501 @pvillard31 If you have a min this is a small one that'd be good to have in 1.6. @robertrbruno wrote 95% of it, but I took his patch from Jira and made it into a PR. We've both kicked the tires, and it seems solid. > Suppress Nulls for PutElasticsearchHttpRecord > - > > Key: NIFI-4743 > URL: https://issues.apache.org/jira/browse/NIFI-4743 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Robert Bruno >Assignee: Mike Thomsen >Priority: Minor > Attachments: NullSuppression.java, PutElasticsearchHttpRecord.java > > > Would be useful for PutElasticsearchHttpRecord to allow you to suppress NULL > values in the JSON that is inserted into ES much like the JsonRecordSetWriter > allows you to do. Perhaps PutElasticsearchHttpRecord could some how make use > of JsonRecordSetWriter so it would inherit this functionality. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
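Null suppression here boils down to dropping null-valued fields from each record before it is serialized into the bulk request body. A standalone sketch of the idea, with hypothetical names (the PR itself wires this through the processor's record writing, per the ticket's suggestion to mirror JsonRecordSetWriter):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NullSuppression {
    // Return a copy of the record with null-valued fields removed,
    // preserving the original field order.
    public static Map<String, Object> suppressNulls(Map<String, Object> record) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : record.entrySet()) {
            if (e.getValue() != null) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> record = new LinkedHashMap<>();
        record.put("id", 1);
        record.put("name", null);
        record.put("city", "Austin");
        System.out.println(suppressNulls(record)); // {id=1, city=Austin}
    }
}
```

Dropping the field entirely (rather than writing an explicit null) matters for Elasticsearch, where an indexed null and an absent field behave differently in queries.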
[jira] [Commented] (NIFI-4800) Expose the flattenMode as property in FlattenJSON processor
[ https://issues.apache.org/jira/browse/NIFI-4800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396328#comment-16396328 ] ASF GitHub Bot commented on NIFI-4800: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2530 +1 LGTM. > Expose the flattenMode as property in FlattenJSON processor > --- > > Key: NIFI-4800 > URL: https://issues.apache.org/jira/browse/NIFI-4800 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Alois Gruber >Assignee: Deon Huang >Priority: Trivial > > the flattening class supports 3 different modes, which cannot be selected in > the processor. Especially the flattening of arrays would be helpful -- This message was sent by Atlassian JIRA (v7.6.3#76005)
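Exposing flattenMode as a property means mapping the property's allowable values onto the flattening library's modes (the json-flattener library offers modes such as NORMAL, KEEP_ARRAYS, and MONGODB). A hypothetical sketch of that mapping, not the PR's actual property descriptor:

```java
public class FlattenModeProperty {
    // Stand-in for the library's mode enum; values assumed from json-flattener.
    public enum FlattenMode { NORMAL, KEEP_ARRAYS, MONGODB }

    // Normalize a user-facing property value ("keep arrays") to a mode constant.
    public static FlattenMode fromPropertyValue(String value) {
        return FlattenMode.valueOf(value.trim().toUpperCase().replace(' ', '_'));
    }

    public static void main(String[] args) {
        System.out.println(fromPropertyValue("keep arrays")); // KEEP_ARRAYS
        System.out.println(fromPropertyValue("Normal"));      // NORMAL
    }
}
```

KEEP_ARRAYS is the mode that addresses the ticket's note that "the flattening of arrays would be helpful" to control, since it leaves array values intact instead of exploding them into indexed keys.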
[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.
Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 Reviewing ---
[jira] [Updated] (NIFI-4967) CSVRecordReader does not read header with specific formats
[ https://issues.apache.org/jira/browse/NIFI-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4967: - Status: Patch Available (was: Open) > CSVRecordReader does not read header with specific formats > -- > > Key: NIFI-4967 > URL: https://issues.apache.org/jira/browse/NIFI-4967 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > When using a specific CSV format (example Microsoft Excel) in the CSV reader > (with schema defined from header, and Apache Commons CSV parser), the CSV > reader is not correctly initialized and the header is null leading to a NPE > which is not clearly exposed. Instead the following exception can be seen: > {noformat} > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.nifi.processors.standard.QueryRecord > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Processor > Administratively Yielded for 1 sec due to processing failure > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] due to uncaught > Exception: java.lang.IllegalStateException: > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421] already in use for an > active callback or an InputStream created by ProcessSession.read(FlowFile) > has not been closed > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.n.c.t.ContinuallyRunProcessorTask > java.lang.IllegalStateException: > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, 
section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421] already in use for an > active callback or an InputStream created by ProcessSession.read(FlowFile) > has not been closed > at > org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3060) > at > org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3055) > at > org.apache.nifi.controller.repository.StandardProcessSession.transfer(StandardProcessSession.java:1854) > at > org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:378) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
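The "schema from header" path that NIFI-4967 fixes reads the first record and uses its fields as column names; the failure mode is a null header that only surfaces later as an unclear NPE. A simplified pure-Java illustration of that step (not the Commons CSV-backed reader) which fails fast with a clear message instead:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class HeaderReader {
    // Derive column names from the first line; guard against a missing header
    // so a format mismatch surfaces as a clear error rather than a later NPE.
    public static List<String> readHeader(String firstLine, char delimiter) {
        if (firstLine == null) {
            throw new IllegalStateException(
                "CSV header could not be read; check the configured CSV format");
        }
        // Pattern.quote so delimiters like '|' are not treated as regex.
        return Arrays.asList(firstLine.split(Pattern.quote(String.valueOf(delimiter)), -1));
    }

    public static void main(String[] args) {
        System.out.println(readHeader("id,name,city", ',')); // [id, name, city]
        try {
            readHeader(null, ',');
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // clear error instead of an NPE
        }
    }
}
```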
[jira] [Commented] (NIFI-4967) CSVRecordReader does not read header with specific formats
[ https://issues.apache.org/jira/browse/NIFI-4967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396202#comment-16396202 ] ASF GitHub Bot commented on NIFI-4967: -- GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/2536 NIFI-4967 - CSVRecordReader does not read header with specific formats

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-4967 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2536.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2536 commit c567233bd7a76c741ae1b4bd50e3fcbfe9ebf766 Author: Pierre Villard Date: 2018-03-12T22:44:06Z NIFI-4967 - CSVRecordReader does not read header with specific formats > CSVRecordReader does not read header with specific formats > -- > > Key: NIFI-4967 > URL: https://issues.apache.org/jira/browse/NIFI-4967 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > When using a specific CSV format (example Microsoft Excel) in the CSV reader > (with schema defined from header, and Apache Commons CSV parser), the CSV > reader is not correctly initialized and the header is null leading to an NPE > which is not clearly exposed. 
Instead the following exception can be seen: > {noformat} > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.nifi.processors.standard.QueryRecord > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Processor > Administratively Yielded for 1 sec due to processing failure > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] due to uncaught > Exception: java.lang.IllegalStateException: > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421] already in use for an > active callback or an InputStream created by ProcessSession.read(FlowFile) > has not been closed > 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] > o.a.n.c.t.ContinuallyRunProcessorTask > java.lang.IllegalStateException: > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421] already in use for an > active callback or an InputStream created by ProcessSession.read(FlowFile) > has not been closed >
[GitHub] nifi pull request #2536: NIFI-4967 - CSVRecordReader does not read header wi...
GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/2536 NIFI-4967 - CSVRecordReader does not read header with specific formats 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-4967 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2536.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2536 commit c567233bd7a76c741ae1b4bd50e3fcbfe9ebf766 Author: Pierre Villard Date: 2018-03-12T22:44:06Z NIFI-4967 - CSVRecordReader does not read header with specific formats ---
[jira] [Created] (NIFI-4967) CSVRecordReader does not read header with specific formats
Pierre Villard created NIFI-4967: Summary: CSVRecordReader does not read header with specific formats Key: NIFI-4967 URL: https://issues.apache.org/jira/browse/NIFI-4967 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.5.0 Reporter: Pierre Villard Assignee: Pierre Villard When using a specific CSV format (example Microsoft Excel) in the CSV reader (with schema defined from header, and Apache Commons CSV parser), the CSV reader is not correctly initialized and the header is null leading to a NPE which is not clearly exposed. Instead the following exception can be seen: {noformat} 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] o.a.nifi.processors.standard.QueryRecord QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Processor Administratively Yielded for 1 sec due to processing failure 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] o.a.n.c.t.ContinuallyRunProcessorTask Administratively Yielding QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] due to uncaught Exception: java.lang.IllegalStateException: StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520889888555-181, container=default, section=181], offset=604078, length=37421],offset=0,name=865467214336135,size=37421] already in use for an active callback or an InputStream created by ProcessSession.read(FlowFile) has not been closed 2018-03-12 23:34:30,427 WARN [Timer-Driven Process Thread-5] o.a.n.c.t.ContinuallyRunProcessorTask java.lang.IllegalStateException: StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520889888555-181, container=default, section=181], offset=604078, length=37421],offset=0,name=865467214336135,size=37421] already in use for an active callback or an InputStream created by ProcessSession.read(FlowFile) has not been closed at 
org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3060) at org.apache.nifi.controller.repository.StandardProcessSession.validateRecordState(StandardProcessSession.java:3055) at org.apache.nifi.controller.repository.StandardProcessSession.transfer(StandardProcessSession.java:1854) at org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:378) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
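The NIFI-4967 report describes a reader whose header ends up null for non-default CSV formats, surfacing later as an unhelpful NPE. As a minimal illustration of the fail-fast alternative (plain-JDK sketch; the `HeaderGuard` class and its `readHeader` method are invented for this example and are not the actual NiFi `CSVRecordReader` code):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Hypothetical sketch, not the actual NiFi CSVRecordReader implementation:
// throw a descriptive IOException when the header line of a CSV stream is
// missing, instead of returning null and letting an NPE surface downstream.
public class HeaderGuard {
    public static String[] readHeader(BufferedReader reader) throws IOException {
        String line = reader.readLine();
        if (line == null || line.trim().isEmpty()) {
            throw new IOException("CSV header line is missing or empty; "
                    + "cannot derive a schema from the header");
        }
        return line.split(",");
    }

    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new StringReader("id,name\n1,foo\n"));
        String[] header = readHeader(in);
        System.out.println(String.join("|", header)); // prints "id|name"
    }
}
```

The point of the guard is that the failure names the real cause (a missing header) at the place where it is detectable, rather than far away in whatever code first dereferences the null.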
[jira] [Updated] (NIFI-4966) JacksonCSVRecordReader - NPE with some CSV formats
[ https://issues.apache.org/jira/browse/NIFI-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4966: - Status: Patch Available (was: Open) > JacksonCSVRecordReader - NPE with some CSV formats > -- > > Key: NIFI-4966 > URL: https://issues.apache.org/jira/browse/NIFI-4966 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > With some CSV formats (not the default one), we can get the following: > {noformat} > 2018-03-12 22:49:49,460 ERROR [Timer-Driven Process Thread-7] > o.a.nifi.processors.standard.QueryRecord > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Failed to determine > Record Schema from > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421]; routing to failure: > java.lang.NullPointerException > java.lang.NullPointerException: null > at > com.fasterxml.jackson.databind.DeserializationConfig.withFeatures(DeserializationConfig.java:520) > at > com.fasterxml.jackson.databind.ObjectReader.withFeatures(ObjectReader.java:501) > at > org.apache.nifi.csv.JacksonCSVRecordReader.(JacksonCSVRecordReader.java:117) > at > org.apache.nifi.csv.CSVReader.createRecordReader(CSVReader.java:136) > at sun.reflect.GeneratedMethodAccessor520.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:89) > at com.sun.proxy.$Proxy206.createRecordReader(Unknown Source) > at > org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:265) > at > 
org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4966) JacksonCSVRecordReader - NPE with some CSV formats
[ https://issues.apache.org/jira/browse/NIFI-4966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16396162#comment-16396162 ] ASF GitHub Bot commented on NIFI-4966: -- GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/2535 NIFI-4966 - JacksonCSVRecordReader - NPE with some CSV formats 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-4966 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2535.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2535 commit f12dcf9cdaf834487eb1051545d1e21a6fbb6cd1 Author: Pierre VillardDate: 2018-03-12T22:20:07Z NIFI-4966 - JacksonCSVRecordReader - NPE with some CSV formats > JacksonCSVRecordReader - NPE with some CSV formats > -- > > Key: NIFI-4966 > URL: https://issues.apache.org/jira/browse/NIFI-4966 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > > With some CSV formats (not the default one), we can get the following: > {noformat} > 2018-03-12 22:49:49,460 ERROR [Timer-Driven Process Thread-7] > o.a.nifi.processors.standard.QueryRecord > QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Failed to determine > Record Schema from > StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1520889888555-181, > container=default, section=181], offset=604078, > length=37421],offset=0,name=865467214336135,size=37421]; routing to failure: > java.lang.NullPointerException > java.lang.NullPointerException: null > at > com.fasterxml.jackson.databind.DeserializationConfig.withFeatures(DeserializationConfig.java:520) > at > com.fasterxml.jackson.databind.ObjectReader.withFeatures(ObjectReader.java:501) > at > org.apache.nifi.csv.JacksonCSVRecordReader.(JacksonCSVRecordReader.java:117) > at > org.apache.nifi.csv.CSVReader.createRecordReader(CSVReader.java:136) > at sun.reflect.GeneratedMethodAccessor520.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at 
java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:89) > at com.sun.proxy.$Proxy206.createRecordReader(Unknown Source) > at > org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:265) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at >
[GitHub] nifi pull request #2535: NIFI-4966 - JacksonCSVRecordReader - NPE with some ...
GitHub user pvillard31 opened a pull request: https://github.com/apache/nifi/pull/2535 NIFI-4966 - JacksonCSVRecordReader - NPE with some CSV formats 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/pvillard31/nifi NIFI-4966 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2535.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2535 commit f12dcf9cdaf834487eb1051545d1e21a6fbb6cd1 Author: Pierre Villard Date: 2018-03-12T22:20:07Z NIFI-4966 - JacksonCSVRecordReader - NPE with some CSV formats ---
[jira] [Created] (NIFI-4966) JacksonCSVRecordReader - NPE with some CSV formats
Pierre Villard created NIFI-4966: Summary: JacksonCSVRecordReader - NPE with some CSV formats Key: NIFI-4966 URL: https://issues.apache.org/jira/browse/NIFI-4966 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.5.0 Reporter: Pierre Villard Assignee: Pierre Villard With some CSV formats (not the default one), we can get the following: {noformat} 2018-03-12 22:49:49,460 ERROR [Timer-Driven Process Thread-7] o.a.nifi.processors.standard.QueryRecord QueryRecord[id=4428e3a1-cf73-377f-150d-98d404785786] Failed to determine Record Schema from StandardFlowFileRecord[uuid=c5f428f0-0fa8-4660-b0df-6974bbd82f47,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520889888555-181, container=default, section=181], offset=604078, length=37421],offset=0,name=865467214336135,size=37421]; routing to failure: java.lang.NullPointerException java.lang.NullPointerException: null at com.fasterxml.jackson.databind.DeserializationConfig.withFeatures(DeserializationConfig.java:520) at com.fasterxml.jackson.databind.ObjectReader.withFeatures(ObjectReader.java:501) at org.apache.nifi.csv.JacksonCSVRecordReader.(JacksonCSVRecordReader.java:117) at org.apache.nifi.csv.CSVReader.createRecordReader(CSVReader.java:136) at sun.reflect.GeneratedMethodAccessor520.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:89) at com.sun.proxy.$Proxy206.createRecordReader(Unknown Source) at org.apache.nifi.processors.standard.QueryRecord.onTrigger(QueryRecord.java:265) at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1123) at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
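The NIFI-4966 stack trace shows the NPE arising inside Jackson's `withFeatures` call, which is consistent with a null varargs array being forwarded from the reader's constructor. The caller-side defensive pattern can be sketched as follows (the `FeatureGuard` class and its methods are invented for this example; the real fix lives in `JacksonCSVRecordReader`, which this does not reproduce):

```java
import java.util.Arrays;

// Hypothetical sketch (class and method names invented): a varargs callee
// dereferences its array argument, so a caller holding a possibly-null
// array must substitute an empty one before delegating.
public class FeatureGuard {
    // Stands in for a library call such as ObjectReader.withFeatures(...),
    // which iterates its array and throws NPE when handed a null argument.
    static int enable(String... features) {
        return (int) Arrays.stream(features).distinct().count();
    }

    // Caller-side guard: never forward a null array to a varargs method.
    static int enableSafely(String[] features) {
        return enable(features == null ? new String[0] : features);
    }

    public static void main(String[] args) {
        System.out.println(enableSafely(null));                    // prints 0
        System.out.println(enableSafely(new String[] {"a", "b"})); // prints 2
    }
}
```

Note that `enable(null)` would compile (the null binds to the array, not to a single element) and then fail at runtime inside the callee, which is exactly the failure mode the stack trace exhibits.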
[jira] [Created] (MINIFICPP-425) Fix travis linter errors
marco polo created MINIFICPP-425: Summary: Fix travis linter errors Key: MINIFICPP-425 URL: https://issues.apache.org/jira/browse/MINIFICPP-425 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: marco polo Assignee: marco polo Fix travis linter errors -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395889#comment-16395889 ] ASF GitHub Bot commented on NIFI-4325: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2113 Rebased and building. @mattyb149 I changed the functionality to match the GetMongo functionality you reviewed recently. Had to spend a while getting the build to work again. > Create a new ElasticSearch processor that supports the JSON DSL > --- > > Key: NIFI-4325 > URL: https://issues.apache.org/jira/browse/NIFI-4325 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Mike Thomsen >Priority: Minor > > The existing ElasticSearch processors use the Lucene-style syntax for > querying, not the JSON DSL. A new processor is needed that can take a full > JSON query and execute it. It should also support aggregation queries in this > syntax. A user needs to be able to take a query as-is from Kibana and drop it > into NiFi and have it just run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2113 Rebased and building. @mattyb149 I changed the functionality to match the GetMongo functionality you reviewed recently. Had to spend a while getting the build to work again. ---
[jira] [Commented] (MINIFICPP-424) Update CAInfo manually so user does not have to update the cert bundle on their machine
[ https://issues.apache.org/jira/browse/MINIFICPP-424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395882#comment-16395882 ] ASF GitHub Bot commented on MINIFICPP-424: -- GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/276 MINIFICPP-424: Manually specify CAFile so users do not need to update… … the cert bundle on their local machine Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file? - [ ] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-424 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/276.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #276 commit 8f9d8c58c7203aa62f7b7e2795f2b459af231ba5 Author: Marc Parisi Date: 2018-03-12T20:55:39Z MINIFICPP-424: Manually specify CAFile so users do not need to update the cert bundle on their local machine > Update CAInfo manually so user does not have to update the cert bundle on > their machine > --- > > Key: MINIFICPP-424 > URL: https://issues.apache.org/jira/browse/MINIFICPP-424 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > > Update CAInfo manually so user does not have to update the cert bundle on > their machine. We should do this for the user's convenience. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #276: MINIFICPP-424: Manually specify CAFile so...
GitHub user phrocker opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/276 MINIFICPP-424: Manually specify CAFile so users do not need to update… … the cert bundle on their local machine 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-424 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/276.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #276 commit 8f9d8c58c7203aa62f7b7e2795f2b459af231ba5 Author: Marc Parisi Date: 2018-03-12T20:55:39Z MINIFICPP-424: Manually specify CAFile so users do not need to update the cert bundle on their local machine ---
[jira] [Created] (MINIFICPP-424) Update CAInfo manually so user does not have to update the cert bundle on their machine
marco polo created MINIFICPP-424: Summary: Update CAInfo manually so user does not have to update the cert bundle on their machine Key: MINIFICPP-424 URL: https://issues.apache.org/jira/browse/MINIFICPP-424 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: marco polo Assignee: marco polo Update CAInfo manually so user does not have to update the cert bundle on their machine. We should do this for the user's convenience. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4958) Travis job will be successful when Maven build fails + add atlas profile
[ https://issues.apache.org/jira/browse/NIFI-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395831#comment-16395831 ] ASF GitHub Bot commented on NIFI-4958: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2529 > Travis job will be successful when Maven build fails + add atlas profile > > > Key: NIFI-4958 > URL: https://issues.apache.org/jira/browse/NIFI-4958 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.6.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.6.0 > > > With > [NIFI-4936|https://github.com/apache/nifi/commit/c71409fb5d0a3aef95b05fca9538258d2e2fb907], > the output of the build has been reduced but we lose the output code of the > Maven build command. The profile to build atlas bundle is also missing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4958) Travis job will be successful when Maven build fails + add atlas profile
[ https://issues.apache.org/jira/browse/NIFI-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joseph Witt updated NIFI-4958: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Travis job will be successful when Maven build fails + add atlas profile > > > Key: NIFI-4958 > URL: https://issues.apache.org/jira/browse/NIFI-4958 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.6.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.6.0 > > > With > [NIFI-4936|https://github.com/apache/nifi/commit/c71409fb5d0a3aef95b05fca9538258d2e2fb907], > the output of the build has been reduced but we lose the output code of the > Maven build command. The profile to build atlas bundle is also missing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2529: NIFI-4958 - Fix Travis job status + atlas profile
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2529 ---
[jira] [Commented] (NIFI-4958) Travis job will be successful when Maven build fails + add atlas profile
[ https://issues.apache.org/jira/browse/NIFI-4958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395829#comment-16395829 ] ASF subversion and git services commented on NIFI-4958: --- Commit 9158c19123f29130a62d3d1ea059377cf2ad1e6f in nifi's branch refs/heads/master from [~pvillard] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=9158c19 ] NIFI-4958 - This closes #2529. Fix Travis job status + atlas profile Signed-off-by: joewitt > Travis job will be successful when Maven build fails + add atlas profile > > > Key: NIFI-4958 > URL: https://issues.apache.org/jira/browse/NIFI-4958 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.6.0 >Reporter: Pierre Villard >Assignee: Pierre Villard >Priority: Major > Fix For: 1.6.0 > > > With > [NIFI-4936|https://github.com/apache/nifi/commit/c71409fb5d0a3aef95b05fca9538258d2e2fb907], > the output of the build has been reduced but we lose the output code of the > Maven build command. The profile to build atlas bundle is also missing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFIREG-151) Add tutorial videos to Registry home page
Andrew Lim created NIFIREG-151: -- Summary: Add tutorial videos to Registry home page Key: NIFIREG-151 URL: https://issues.apache.org/jira/browse/NIFIREG-151 Project: NiFi Registry Issue Type: Improvement Reporter: Andrew Lim I created three NiFi Registry video tutorials that I felt could be of help if posted to the Registry home page. The three videos: * Getting Started with Apache NiFi Registry * Setting Up a Secure Apache NiFi Registry * Setting Up a Secure NiFi to Integrate with a Secure NiFi Registry -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFIREG-151) Add tutorial videos to Registry home page
[ https://issues.apache.org/jira/browse/NIFIREG-151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Lim reassigned NIFIREG-151: -- Assignee: Andrew Lim > Add tutorial videos to Registry home page > - > > Key: NIFIREG-151 > URL: https://issues.apache.org/jira/browse/NIFIREG-151 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Andrew Lim >Assignee: Andrew Lim >Priority: Minor > > I created three NiFi Registry video tutorials that I felt could be of help if > posted to the Registry home page. The three videos: > * Getting Started with Apache NiFi Registry > * Setting Up a Secure Apache NiFi Registry > * Setting Up a Secure NiFi to Integrate with a Secure NiFi Registry -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4965) UI - Bulletin Board - Unable to find the specified component
Pierre Villard created NIFI-4965: Summary: UI - Bulletin Board - Unable to find the specified component Key: NIFI-4965 URL: https://issues.apache.org/jira/browse/NIFI-4965 Project: Apache NiFi Issue Type: Bug Components: Core UI Affects Versions: 1.5.0 Reporter: Pierre Villard When defining a CS at a process group level, if bulletins are generated by this CS, the link (on the UUID) from the bulletin board won't work: it takes the user to the corresponding parent process group of the CS but displays an "Unable to find the specified component" error message. Also (could be a separate JIRA): when defining CS/Reporting Tasks in the controller settings menu, if bulletins are generated, there is no link on the UUID. It could be worth having a link opening the controller settings view on the corresponding tab (CS or Reporting Task). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395804#comment-16395804 ] ASF GitHub Bot commented on NIFI-4849: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2468 @mcgilman I have pushed a new commit that addresses above conversation. > Add REST Endpoint for gathering Processor Diagnostics information > - > > Key: NIFI-4849 > URL: https://issues.apache.org/jira/browse/NIFI-4849 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > > > We need to add a REST endpoint that will use the appropriate resources to > gather the Processor Diagnostics information. Information to return should > include things like: > * Processor config > * Processor status > * Garbage Collection info > * Repo Sizes > * Connection info for connections whose source or destination is the > processor > * Controller Services that the processor is referencing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2468: NIFI-4849: Implemented REST Endpoint and associated backen...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2468 @mcgilman I have pushed a new commit that addresses above conversation. ---
[jira] [Created] (NIFI-4964) Add bulk lookup feature in LookupRecord
Pierre Villard created NIFI-4964: Summary: Add bulk lookup feature in LookupRecord Key: NIFI-4964 URL: https://issues.apache.org/jira/browse/NIFI-4964 Project: Apache NiFi Issue Type: Improvement Components: Extensions Reporter: Pierre Villard When handling a flow file with a large number of records, it would be much more efficient to parse the whole flow file once to list all the coordinates to look for, then call a new method (lookupAll?) in the lookup service to get all the results, and then parse the file one more time to update the records. It should be noted in the CS description/annotations that this approach could hold a large number of objects in memory, but could result in better performance for lookup services accessing external systems (Mongo, HBase, etc). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
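The bulk-lookup idea above can be sketched as an interface with a default per-key fallback. This is a hypothetical illustration: NiFi's LookupService does not define a lookupAll method, and the names here (BulkLookup, lookupAll) are assumptions made for the sketch.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch of the proposed bulk lookup. A service backed by an
// external system (Mongo, HBase, ...) would override lookupAll with a single
// batched request; the default simply falls back to one lookup per coordinate.
interface BulkLookup<K, V> {
    Optional<V> lookup(K coordinates);

    default Map<K, V> lookupAll(List<K> coordinatesList) {
        Map<K, V> results = new HashMap<>();
        for (K key : coordinatesList) {
            // Per-key fallback; coordinates with no result are simply absent.
            lookup(key).ifPresent(value -> results.put(key, value));
        }
        return results;
    }
}
```

The processor would then collect all record coordinates in a first pass, call lookupAll once, and apply the results in a second pass, trading memory for fewer round trips, as the description suggests.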
[jira] [Commented] (NIFI-4864) Additional Resources property pointing at a directory won't find new JARs
[ https://issues.apache.org/jira/browse/NIFI-4864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395794#comment-16395794 ] ASF GitHub Bot commented on NIFI-4864: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2470 @zenfenan this seems to be working well, I had a few minor changes I posted here: https://github.com/bbende/nifi/commits/NIFI-4864 If you are good with that last commit I made then I will go ahead and merge this. To summarize my changes... - Changed to using StringUtils.equals(oldFingerprint, newFingerprint) because it's possible the old fingerprint is null or empty and we would still want to replace it with the new one if we have a new one - Made the reload method synchronized - Removed the has/get/set fingerprint methods from the interface to try to keep all the fingerprint logic inside of AbstractConfiguredComponent > Additional Resources property pointing at a directory won't find new JARs > - > > Key: NIFI-4864 > URL: https://issues.apache.org/jira/browse/NIFI-4864 > Project: Apache NiFi > Issue Type: Bug > Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0 > Reporter: Bryan Bende > Assignee: Sivaprasanna Sethuraman > Priority: Minor > > If you have a Processor/Controller Service/Reporting Task that has a property > with dynamicallyModifiesClasspath(true) and you set the value to a directory, > the resources in that directory will only be calculated when that property > changes. This means if you added JARs to the directory later, and stopped and > started your processor, those new JARs still won't be available. You would > have to change the property to a new directory, or back and forth to some > other directory, to force a recalculation. > The setProperties method in AbstractConfiguredComponent is where it looks at > incoming property changes and determines if any were for classpath related > properties and then calls reload accordingly.
> We would need to consider the case where setProperties is never even being > called, someone just stops and starts the processor and would want to pick up > any new JARs added. > A possible solution might be to compute some kind of hash/fingerprint of the > URLs each time reload is called, and then when starting the processor we > could recompute the fingerprint and compare it to the previous one. If they > are different then we call reload before starting the component. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
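The fingerprint idea described above can be sketched minimally, assuming the classpath URLs are available as strings. The class and method names are illustrative, not NiFi's actual reload API.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: hash the sorted classpath URLs so that any JAR added to
// or removed from the directory changes the fingerprint.
public class ClasspathFingerprint {
    public static String fingerprintOf(List<String> urls) {
        // Sort first so the fingerprint is independent of listing order.
        String joined = urls.stream().sorted().collect(Collectors.joining("\n"));
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(joined.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always present
        }
    }

    public static void main(String[] args) {
        String before = fingerprintOf(List.of("file:/libs/a.jar"));
        String after = fingerprintOf(List.of("file:/libs/a.jar", "file:/libs/b.jar"));
        // A changed fingerprint at start time would trigger a reload first.
        System.out.println(before.equals(after) ? "unchanged" : "reload needed");
    }
}
```

The fingerprint would be recorded on each reload and recomputed when the component starts; a mismatch means a reload is needed before starting, which covers the stop/start case where setProperties is never called.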
[GitHub] nifi issue #2470: NIFI-4864 Fixing additional resources property pointing to...
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2470 @zenfenan this seems to be working well, I had a few minor changes I posted here: https://github.com/bbende/nifi/commits/NIFI-4864 If you are good with that last commit I made then I will go ahead and merge this. To summarize my changes... - Changed to using StringUtils.equals(oldFingerprint, newFingerprint) because it's possible the old fingerprint is null or empty and we would still want to replace it with the new one if we have a new one - Made the reload method synchronized - Removed the has/get/set fingerprint methods from the interface to try to keep all the fingerprint logic inside of AbstractConfiguredComponent ---
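The null-safety the comment relies on — commons-lang's StringUtils.equals tolerating a null old fingerprint — behaves like the JDK's Objects.equals for Strings. A dependency-free sketch with the JDK variant (variable names are illustrative):

```java
import java.util.Objects;

public class NullSafeCompare {
    public static void main(String[] args) {
        String oldFingerprint = null;      // first run: nothing recorded yet
        String newFingerprint = "abc123";  // freshly computed

        // A plain oldFingerprint.equals(newFingerprint) would throw a
        // NullPointerException here; the null-safe comparison simply reports
        // "different", so the new fingerprint replaces the missing old one.
        boolean same = Objects.equals(oldFingerprint, newFingerprint);
        System.out.println(same); // false
    }
}
```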
[jira] [Commented] (NIFI-4961) Allow data to be set on MockFlowFile
[ https://issues.apache.org/jira/browse/NIFI-4961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395773#comment-16395773 ] ASF GitHub Bot commented on NIFI-4961: -- Github user kai5263499 commented on the issue: https://github.com/apache/nifi/pull/2533 Yes, I have a grpc processor that allows binary data to be passed back and forth and I want to be able to write a unit test to verify the processor returns valid protobufs. I didn't see a way to do that with the current MockFlowFile class where setData is protected > Allow data to be set on MockFlowFile > > > Key: NIFI-4961 > URL: https://issues.apache.org/jira/browse/NIFI-4961 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Glenn Widner >Priority: Minor > > While working on tests for a custom processor I noticed that the setData > method is private which makes it hard to test that my processor is handling > FlowFile content properly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2533: NIFI-4961 Allow data to be set on MockFlowFile
Github user kai5263499 commented on the issue: https://github.com/apache/nifi/pull/2533 Yes, I have a grpc processor that allows binary data to be passed back and forth and I want to be able to write a unit test to verify the processor returns valid protobufs. I didn't see a way to do that with the current MockFlowFile class where setData is protected ---
[jira] [Commented] (NIFI-4961) Allow data to be set on MockFlowFile
[ https://issues.apache.org/jira/browse/NIFI-4961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395769#comment-16395769 ] ASF GitHub Bot commented on NIFI-4961: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2533 Hi @kai5263499, I'm curious, do you have an example of unit test you want to implement that requires access to this method? > Allow data to be set on MockFlowFile > > > Key: NIFI-4961 > URL: https://issues.apache.org/jira/browse/NIFI-4961 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Glenn Widner >Priority: Minor > > While working on tests for a custom processor I noticed that the setData > method is private which makes it hard to test that my processor is handling > FlowFile content properly. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2533: NIFI-4961 Allow data to be set on MockFlowFile
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2533 Hi @kai5263499, I'm curious, do you have an example of unit test you want to implement that requires access to this method? ---
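As a dependency-free illustration of the problem discussed in this thread — these are toy classes, not NiFi's actual MockFlowFile — a protected setData cannot be called from test code in another package, so the usual workaround is a subclass that widens access:

```java
// Toy stand-in for a mock flow file whose setData is protected.
class FlowFileMock {
    private byte[] data = new byte[0];

    protected void setData(byte[] data) {
        this.data = data;
    }

    public byte[] getData() {
        return data;
    }
}

// Workaround: widen setData to public so a test can seed binary content
// (e.g. a serialized protobuf) directly.
class TestableFlowFile extends FlowFileMock {
    @Override
    public void setData(byte[] data) {
        super.setData(data);
    }
}
```

Exposing setData on MockFlowFile itself, as this PR proposes, would make the widening subclass unnecessary.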
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395757#comment-16395757 ] ASF GitHub Bot commented on NIFI-4885: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2515 > More granular restricted component categories > - > > Key: NIFI-4885 > URL: https://issues.apache.org/jira/browse/NIFI-4885 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework, Core UI >Reporter: Matt Gilman >Assignee: Matt Gilman >Priority: Major > Fix For: 1.6.0 > > > Update the Restricted annotation to support more granular categories. > Available categories will map to new access policies. Example categories and > their corresponding access policies may be > * read-filesystem (/restricted-components/read-filesystem) > * write-filesystem (/restricted-components/write-filesystem) > * code-execution (/restricted-components/code-execution) > * keytab-access (/restricted-components/keytab-access) > The hierarchical nature of the access policies will support backward > compatibility with existing installations where the policy of > /restricted-components was used to enforce all subcategories. Any users with > /restricted-components permissions will be granted access to all > subcategories. In order to leverage the new granular categories, an > administrator will need to use NiFi to update their access policies (remove a > user from /restricted-components and place them into the desired subcategory) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4953) FetchHBaseRow filling logs with unnecessary error messages
[ https://issues.apache.org/jira/browse/NIFI-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-4953. -- Resolution: Fixed Fix Version/s: 1.6.0 > FetchHBaseRow filling logs with unnecessary error messages > -- > > Key: NIFI-4953 > URL: https://issues.apache.org/jira/browse/NIFI-4953 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions > Affects Versions: 1.5.0 > Reporter: Ed Berezitsky > Assignee: Ed Berezitsky > Priority: Major > Fix For: 1.6.0 > > > FetchHBaseRow prints error messages into the logs when a rowkey is not found. > Such messages are unnecessary, generate a lot of log volume, and affect > log-based monitoring. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2515: NIFI-4885: Granular component restrictions
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2515 ---
[GitHub] nifi pull request #2527: FetchHBaseRow - log level and displayName
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2527 ---
[jira] [Commented] (NIFI-4953) FetchHBaseRow filling logs with unnecessary error messages
[ https://issues.apache.org/jira/browse/NIFI-4953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395756#comment-16395756 ] ASF subversion and git services commented on NIFI-4953: --- Commit 373cf090a46e03bca49335b9df7d5de0bd94a086 in nifi's branch refs/heads/master from [~Berezitsky] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=373cf09 ] NIFI-4953 - FetchHBaseRow - update log level for "not found" to DEBUG instead of ERROR Signed-off-by: Pierre Villard. This closes #2527. > FetchHBaseRow filling logs with unnecessary error messages > -- > > Key: NIFI-4953 > URL: https://issues.apache.org/jira/browse/NIFI-4953 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions > Affects Versions: 1.5.0 > Reporter: Ed Berezitsky > Assignee: Ed Berezitsky > Priority: Major > > FetchHBaseRow prints error messages into the logs when a rowkey is not found. > Such messages are unnecessary, generate a lot of log volume, and affect > log-based monitoring. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395695#comment-16395695 ] ASF GitHub Bot commented on NIFI-4849: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2468 I think at this point, after looking at the data model for JVMDiagnosticsSnapshotDTO we can probably just break this into 3 distinct fields: JVMSystemDiagnosticsSnapshotDTO, JVMControllerDiagnosticsDTO, JVMFlowDiagnosticsDTO. Then, it makes the filtering and the permissions (and likely the merging) a lot cleaner. Will head down that path and see if that cleans things up. Thanks for the review so far @mcgilman! > Add REST Endpoint for gathering Processor Diagnostics information > - > > Key: NIFI-4849 > URL: https://issues.apache.org/jira/browse/NIFI-4849 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > > > We need to add a REST endpoint that will use the appropriate resources to > gather the Processor Diagnostics information. Information to return should > include things like: > * Processor config > * Processor status > * Garbage Collection info > * Repo Sizes > * Connection info for connections whose source or destination is the > processor > * Controller Services that the processor is referencing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2468: NIFI-4849: Implemented REST Endpoint and associated backen...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2468 I think at this point, after looking at the data model for JVMDiagnosticsSnapshotDTO we can probably just break this into 3 distinct fields: JVMSystemDiagnosticsSnapshotDTO, JVMControllerDiagnosticsDTO, JVMFlowDiagnosticsDTO. Then, it makes the filtering and the permissions (and likely the merging) a lot cleaner. Will head down that path and see if that cleans things up. Thanks for the review so far @mcgilman! ---
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395691#comment-16395691 ] ASF GitHub Bot commented on NIFI-4849: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173906446 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java --- @@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) { return history; } +private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) { +final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId); +return createControllerServiceEntity(serviceNode, Collections.emptySet(), user); +} + +@Override +public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) { +final ProcessorNode processor = processorDAO.getProcessor(id); +final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id); + +// Generate Processor Diagnostics +final NiFiUser user = NiFiUserUtils.getNiFiUser(); +final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user)); + +// Filter anything out of diagnostics that the user is not authorized to see. 
+final List<JVMDiagnosticsSnapshotDTO> jvmDiagnosticsSnapshots = new ArrayList<>(); +final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics(); +jvmDiagnosticsSnapshots.add(jvmDiagnostics.getAggregateSnapshot()); + +// filter controller-related information +final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadController) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnapshots) { +snapshot.setMaxEventDrivenThreads(null); +snapshot.setMaxTimerDrivenThreads(null); +snapshot.setBundlesLoaded(null); +} +} + +// filter system diagnostics information +final boolean canReadSystem = authorizableLookup.getSystem().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadSystem) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnapshots) { +snapshot.setContentRepositoryStorageUsage(null); +snapshot.setCpuCores(null); +snapshot.setCpuLoadAverage(null); +snapshot.setFlowFileRepositoryStorageUsage(null); +snapshot.setMaxHeap(null); +snapshot.setMaxHeapBytes(null); +snapshot.setProvenanceRepositoryStorageUsage(null); +snapshot.setPhysicalMemory(null); +snapshot.setPhysicalMemoryBytes(null); +snapshot.setGarbageCollectionDiagnostics(null); +} +} + +// filter connections +final Predicate connectionAuthorized = connectionDiagnostics -> { +final String connectionId = connectionDiagnostics.getConnection().getId(); +return authorizableLookup.getConnection(connectionId).getAuthorizable().isAuthorized(authorizer, RequestAction.READ, user); +}; + +// Function that can be used to remove the Source or Destination of a ConnectionDTO, if the user is not authorized. +final Function filterSourceDestination = connectionDiagnostics -> { --- End diff -- Good call. That would mean that the second Function there is not really needed. Will address.
> Add REST Endpoint for gathering Processor Diagnostics information > - > > Key: NIFI-4849 > URL: https://issues.apache.org/jira/browse/NIFI-4849 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework > Reporter: Mark Payne > Assignee: Mark Payne > Priority: Major > Fix For: 1.6.0 > > > We need to add a REST endpoint that will use the appropriate resources to > gather the Processor Diagnostics information. Information to return should > include things like: > * Processor config > * Processor status > * Garbage Collection info > * Repo Sizes > * Connection info for connections whose source or destination is the > processor > * Controller Services that the processor is referencing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2468: NIFI-4849: Implemented REST Endpoint and associated...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173906446 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java --- @@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) { return history; } +private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) { +final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId); +return createControllerServiceEntity(serviceNode, Collections.emptySet(), user); +} + +@Override +public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) { +final ProcessorNode processor = processorDAO.getProcessor(id); +final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id); + +// Generate Processor Diagnostics +final NiFiUser user = NiFiUserUtils.getNiFiUser(); +final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user)); + +// Filter anything out of diagnostics that the user is not authorized to see. 
+final List<JVMDiagnosticsSnapshotDTO> jvmDiagnosticsSnapshots = new ArrayList<>(); +final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics(); +jvmDiagnosticsSnapshots.add(jvmDiagnostics.getAggregateSnapshot()); + +// filter controller-related information +final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadController) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnapshots) { +snapshot.setMaxEventDrivenThreads(null); +snapshot.setMaxTimerDrivenThreads(null); +snapshot.setBundlesLoaded(null); +} +} + +// filter system diagnostics information +final boolean canReadSystem = authorizableLookup.getSystem().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadSystem) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnapshots) { +snapshot.setContentRepositoryStorageUsage(null); +snapshot.setCpuCores(null); +snapshot.setCpuLoadAverage(null); +snapshot.setFlowFileRepositoryStorageUsage(null); +snapshot.setMaxHeap(null); +snapshot.setMaxHeapBytes(null); +snapshot.setProvenanceRepositoryStorageUsage(null); +snapshot.setPhysicalMemory(null); +snapshot.setPhysicalMemoryBytes(null); +snapshot.setGarbageCollectionDiagnostics(null); +} +} + +// filter connections +final Predicate connectionAuthorized = connectionDiagnostics -> { +final String connectionId = connectionDiagnostics.getConnection().getId(); +return authorizableLookup.getConnection(connectionId).getAuthorizable().isAuthorized(authorizer, RequestAction.READ, user); +}; + +// Function that can be used to remove the Source or Destination of a ConnectionDTO, if the user is not authorized. +final Function filterSourceDestination = connectionDiagnostics -> { --- End diff -- Good call. That would mean that the second Function there is not really needed. Will address. ---
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395671#comment-16395671 ] ASF GitHub Bot commented on NIFI-4849: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173902301 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java --- @@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) { return history; } +private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) { +final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId); +return createControllerServiceEntity(serviceNode, Collections.emptySet(), user); +} + +@Override +public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) { +final ProcessorNode processor = processorDAO.getProcessor(id); +final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id); + +// Generate Processor Diagnostics +final NiFiUser user = NiFiUserUtils.getNiFiUser(); +final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user)); + +// Filter anything out of diagnostics that the user is not authorized to see. 
+final List jvmDiagnosticsSnaphots = new ArrayList<>(); +final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics(); +jvmDiagnosticsSnaphots.add(jvmDiagnostics.getAggregateSnapshot()); + +// filter controller-related information +final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadController) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) { --- End diff -- I don't think that active event & timer driven threads need to be filtered because they are available to anyone with access to /flow. I also feel like uptime should be available. I could see an argument for the elected primary & coordinator not being included, but I went back and forth on that a bit personally, because currently that info is not exposed anywhere except if you have that permissions. It seemed quite benign to me to include this, but If you think they should be removed I am okay with it. > Add REST Endpoint for gathering Processor Diagnostics information > - > > Key: NIFI-4849 > URL: https://issues.apache.org/jira/browse/NIFI-4849 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > > > We need to add a REST endpoint that will use the appropriate resources to > gather the Processor Diagnostics information. Information to return should > include things like: > * Processor config > * Processor status > * Garbage Collection info > * Repo Sizes > * Connection info for connections whose source or destination is the > processor > * Controller Services that the processor is referencing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395672#comment-16395672 ] ASF GitHub Bot commented on NIFI-4849: -- Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173902565 --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java --- @@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) { return history; } +private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) { +final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId); +return createControllerServiceEntity(serviceNode, Collections.emptySet(), user); +} + +@Override +public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) { +final ProcessorNode processor = processorDAO.getProcessor(id); +final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id); + +// Generate Processor Diagnostics +final NiFiUser user = NiFiUserUtils.getNiFiUser(); +final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user)); + +// Filter anything out of diagnostics that the user is not authorized to see. 
+final List jvmDiagnosticsSnaphots = new ArrayList<>(); +final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics(); +jvmDiagnosticsSnaphots.add(jvmDiagnostics.getAggregateSnapshot()); + +// filter controller-related information +final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadController) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) { +snapshot.setMaxEventDrivenThreads(null); +snapshot.setMaxTimerDrivenThreads(null); +snapshot.setBundlesLoaded(null); +} +} + +// filter system diagnostics information +final boolean canReadSystem = authorizableLookup.getSystem().isAuthorized(authorizer, RequestAction.READ, user); +if (!canReadSystem) { +for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) { --- End diff -- That's a good call. Will update that. > Add REST Endpoint for gathering Processor Diagnostics information > - > > Key: NIFI-4849 > URL: https://issues.apache.org/jira/browse/NIFI-4849 > Project: Apache NiFi > Issue Type: Sub-task > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > > > We need to add a REST endpoint that will use the appropriate resources to > gather the Processor Diagnostics information. Information to return should > include things like: > * Processor config > * Processor status > * Garbage Collection info > * Repo Sizes > * Connection info for connections whose source or destination is the > processor > * Controller Services that the processor is referencing -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2468: NIFI-4849: Implemented REST Endpoint and associated...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173902565

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java ---
@@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) {
         return history;
     }

+    private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) {
+        final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId);
+        return createControllerServiceEntity(serviceNode, Collections.emptySet(), user);
+    }
+
+    @Override
+    public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) {
+        final ProcessorNode processor = processorDAO.getProcessor(id);
+        final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id);
+
+        // Generate Processor Diagnostics
+        final NiFiUser user = NiFiUserUtils.getNiFiUser();
+        final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user));
+
+        // Filter anything out of diagnostics that the user is not authorized to see.
+        final List<JVMDiagnosticsSnapshotDTO> jvmDiagnosticsSnaphots = new ArrayList<>();
+        final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics();
+        jvmDiagnosticsSnaphots.add(jvmDiagnostics.getAggregateSnapshot());
+
+        // filter controller-related information
+        final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user);
+        if (!canReadController) {
+            for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) {
+                snapshot.setMaxEventDrivenThreads(null);
+                snapshot.setMaxTimerDrivenThreads(null);
+                snapshot.setBundlesLoaded(null);
+            }
+        }
+
+        // filter system diagnostics information
+        final boolean canReadSystem = authorizableLookup.getSystem().isAuthorized(authorizer, RequestAction.READ, user);
+        if (!canReadSystem) {
+            for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) {
--- End diff --

That's a good call. Will update that.

---
[GitHub] nifi pull request #2468: NIFI-4849: Implemented REST Endpoint and associated...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173902301

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/StandardNiFiServiceFacade.java ---
@@ -4506,6 +4515,123 @@ public ComponentHistoryDTO getComponentHistory(final String componentId) {
         return history;
     }

+    private ControllerServiceEntity createControllerServiceEntity(final String serviceId, final NiFiUser user) {
+        final ControllerServiceNode serviceNode = controllerServiceDAO.getControllerService(serviceId);
+        return createControllerServiceEntity(serviceNode, Collections.emptySet(), user);
+    }
+
+    @Override
+    public ProcessorDiagnosticsEntity getProcessorDiagnostics(final String id) {
+        final ProcessorNode processor = processorDAO.getProcessor(id);
+        final ProcessorStatus processorStatus = controllerFacade.getProcessorStatus(id);
+
+        // Generate Processor Diagnostics
+        final NiFiUser user = NiFiUserUtils.getNiFiUser();
+        final ProcessorDiagnosticsDTO dto = controllerFacade.getProcessorDiagnostics(processor, processorStatus, bulletinRepository, serviceId -> createControllerServiceEntity(serviceId, user));
+
+        // Filter anything out of diagnostics that the user is not authorized to see.
+        final List<JVMDiagnosticsSnapshotDTO> jvmDiagnosticsSnaphots = new ArrayList<>();
+        final JVMDiagnosticsDTO jvmDiagnostics = dto.getJvmDiagnostics();
+        jvmDiagnosticsSnaphots.add(jvmDiagnostics.getAggregateSnapshot());
+
+        // filter controller-related information
+        final boolean canReadController = authorizableLookup.getController().isAuthorized(authorizer, RequestAction.READ, user);
+        if (!canReadController) {
+            for (final JVMDiagnosticsSnapshotDTO snapshot : jvmDiagnosticsSnaphots) {
--- End diff --

I don't think that active event & timer driven threads need to be filtered, because they are available to anyone with access to /flow. I also feel like uptime should be available. I could see an argument for the elected primary & coordinator not being included, but I went back and forth on that a bit personally, because currently that info is not exposed anywhere except if you have those permissions. It seemed quite benign to me to include this, but if you think they should be removed, I am okay with it.

---
[jira] [Commented] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
[ https://issues.apache.org/jira/browse/NIFI-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395666#comment-16395666 ]

ASF GitHub Bot commented on NIFI-4849:
--

Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173900631

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/dto/EntityFactory.java ---
@@ -77,6 +79,27 @@ public final class EntityFactory {
+    public ProcessorDiagnosticsEntity createProcessorDiagnosticsEntity(final ProcessorDiagnosticsDTO dto, final RevisionDTO revision, final PermissionsDTO processorPermissions,
+            final ProcessorStatusDTO status, final List bulletins) {
+        final ProcessorDiagnosticsEntity entity = new ProcessorDiagnosticsEntity();
+        entity.setRevision(revision);
+        if (dto != null) {
+            entity.setPermissions(processorPermissions);
+            entity.setId(dto.getProcessor().getId());
+            if (processorPermissions != null && processorPermissions.getCanRead()) {
+                entity.setComponent(dto);
+                entity.setBulletins(bulletins);
+            }
+        }
+
+        entity.setBulletins(bulletins);
+        return entity;
+    }
+
+    private void pairDownDiagnostics(final ProcessorDiagnosticsDTO dto, final PermissionsDTO controllerPermissions) {
--- End diff --

Whoops, yes, good call.

> Add REST Endpoint for gathering Processor Diagnostics information
> -
>
> Key: NIFI-4849
> URL: https://issues.apache.org/jira/browse/NIFI-4849
> Project: Apache NiFi
> Issue Type: Sub-task
> Components: Core Framework
> Reporter: Mark Payne
> Assignee: Mark Payne
> Priority: Major
> Fix For: 1.6.0
>
> We need to add a REST endpoint that will use the appropriate resources to gather the Processor Diagnostics information. Information to return should include things like:
> * Processor config
> * Processor status
> * Garbage Collection info
> * Repo Sizes
> * Connection info for connections whose source or destination is the processor
> * Controller Services that the processor is referencing

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2468: NIFI-4849: Implemented REST Endpoint and associated...
Github user markap14 commented on a diff in the pull request: https://github.com/apache/nifi/pull/2468#discussion_r173900631

--- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-web/nifi-web-api/src/main/java/org/apache/nifi/web/api/dto/EntityFactory.java ---
@@ -77,6 +79,27 @@ public final class EntityFactory {
+    public ProcessorDiagnosticsEntity createProcessorDiagnosticsEntity(final ProcessorDiagnosticsDTO dto, final RevisionDTO revision, final PermissionsDTO processorPermissions,
+            final ProcessorStatusDTO status, final List bulletins) {
+        final ProcessorDiagnosticsEntity entity = new ProcessorDiagnosticsEntity();
+        entity.setRevision(revision);
+        if (dto != null) {
+            entity.setPermissions(processorPermissions);
+            entity.setId(dto.getProcessor().getId());
+            if (processorPermissions != null && processorPermissions.getCanRead()) {
+                entity.setComponent(dto);
+                entity.setBulletins(bulletins);
+            }
+        }
+
+        entity.setBulletins(bulletins);
+        return entity;
+    }
+
+    private void pairDownDiagnostics(final ProcessorDiagnosticsDTO dto, final PermissionsDTO controllerPermissions) {
--- End diff --

Whoops, yes, good call.

---
[jira] [Resolved] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mark Payne resolved NIFI-4885.
--

Resolution: Fixed
Fix Version/s: 1.6.0

> More granular restricted component categories
> -
>
> Key: NIFI-4885
> URL: https://issues.apache.org/jira/browse/NIFI-4885
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework, Core UI
> Reporter: Matt Gilman
> Assignee: Matt Gilman
> Priority: Major
> Fix For: 1.6.0
>
> Update the Restricted annotation to support more granular categories. Available categories will map to new access policies. Example categories and their corresponding access policies may be
> * read-filesystem (/restricted-components/read-filesystem)
> * write-filesystem (/restricted-components/write-filesystem)
> * code-execution (/restricted-components/code-execution)
> * keytab-access (/restricted-components/keytab-access)
> The hierarchical nature of the access policies will support backward compatibility with existing installations where the policy of /restricted-components was used to enforce all subcategories. Any users with /restricted-components permissions will be granted access to all subcategories. In order to leverage the new granular categories, an administrator will need to use NiFi to update their access policies (remove a user from /restricted-components and place them into the desired subcategory)
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395653#comment-16395653 ]

ASF GitHub Bot commented on NIFI-4885:
--

Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2515 @mcgilman this all looks good to me as well! Given positive review feedback from @andrewmlim and a +1 from @scottyaslan I am happy with the changes and think this is a great improvement on our security model. +1 merged to master. Thanks!
[GitHub] nifi issue #2515: NIFI-4885: Granular component restrictions
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2515 @mcgilman this all looks good to me as well! Given positive review feedback from @andrewmlim and a +1 from @scottyaslan I am happy with the changes and think this is a great improvement on our security model. +1 merged to master. Thanks! ---
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395648#comment-16395648 ]

ASF subversion and git services commented on NIFI-4885:
---

Commit b1217f529bfc5ea9296d1d55c6b0fe92a881a485 in nifi's branch refs/heads/master from [~mcgilman]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=b1217f5 ]

NIFI-4885:
- Introducing more granular restricted component access policies.

This closes #2515.

Signed-off-by: Mark Payne
[jira] [Commented] (NIFI-3599) Add nifi.properties value to globally set the default backpressure size threshold for each connection
[ https://issues.apache.org/jira/browse/NIFI-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395645#comment-16395645 ]

ASF GitHub Bot commented on NIFI-3599:
--

Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2497 @mosermw @markap14 It's kind of a gray area. I suggested an endpoint like /nifi-api/flow/about because that seemed like the best fit currently, and I was obviously trying to avoid requiring another request to load the UI. I'm a little hesitant to go the route of a /nifi-api/flow/properties endpoint because I'm not sure that's a concept we want to advertise/expose. I don't mind doing something more generic, but what if the concept were more related to default values or config? This is something that any client using the API may want to know ahead of time for this reason exactly. Would adding a DefaultsDTO or ConfigDTO which is set on the AboutDTO fit a little better?

> Add nifi.properties value to globally set the default backpressure size threshold for each connection
> -
>
> Key: NIFI-3599
> URL: https://issues.apache.org/jira/browse/NIFI-3599
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Configuration
> Reporter: Jeremy Dyer
> Assignee: Michael Moser
> Priority: Major
>
> By default each new connection added to the workflow canvas will have a default backpressure size threshold of 10,000 objects. While the threshold can be changed on a connection level it would be convenient to have a global mechanism for setting that value to something other than 10,000. This enhancement would add a property to nifi.properties that would allow for this threshold to be set globally unless otherwise overridden at the connection level.
[GitHub] nifi issue #2497: NIFI-3599 Allowed back pressure object count and data size...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2497 @mosermw @markap14 It's kind of a gray area. I suggested an endpoint like /nifi-api/flow/about because that seemed like the best fit currently, and I was obviously trying to avoid requiring another request to load the UI. I'm a little hesitant to go the route of a /nifi-api/flow/properties endpoint because I'm not sure that's a concept we want to advertise/expose. I don't mind doing something more generic, but what if the concept were more related to default values or config? This is something that any client using the API may want to know ahead of time for this reason exactly. Would adding a DefaultsDTO or ConfigDTO which is set on the AboutDTO fit a little better? ---
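To make the idea concrete, a DTO along the lines mcgilman floats above might look roughly like the following. This is a purely hypothetical sketch: neither `DefaultsDTO` nor these field names exist in NiFi's API; it only illustrates how a client could learn the connection defaults from a single up-front request alongside the existing about endpoint.

```java
// Hypothetical sketch only — not a real NiFi class. Illustrates the
// "DefaultsDTO set on the AboutDTO" idea discussed in the thread above.
public class DefaultsDTO {

    // NiFi's current per-connection back pressure defaults: 10,000 objects / 1 GB.
    private long defaultBackPressureObjectThreshold = 10_000L;
    private String defaultBackPressureDataSizeThreshold = "1 GB";

    public long getDefaultBackPressureObjectThreshold() {
        return defaultBackPressureObjectThreshold;
    }

    public void setDefaultBackPressureObjectThreshold(final long threshold) {
        this.defaultBackPressureObjectThreshold = threshold;
    }

    public String getDefaultBackPressureDataSizeThreshold() {
        return defaultBackPressureDataSizeThreshold;
    }

    public void setDefaultBackPressureDataSizeThreshold(final String threshold) {
        this.defaultBackPressureDataSizeThreshold = threshold;
    }
}
```

A client could then read these values once at startup instead of hard-coding the 10,000-object assumption.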
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395609#comment-16395609 ]

ASF GitHub Bot commented on NIFI-4325:
--

Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2113 I'm not sure I'll have time to close the loop before 1.6.0, so if you'd like to finish the review/merge after Mike's rebase that would be very cool, thanks!

> Create a new ElasticSearch processor that supports the JSON DSL
> ---
>
> Key: NIFI-4325
> URL: https://issues.apache.org/jira/browse/NIFI-4325
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Mike Thomsen
> Priority: Minor
>
> The existing ElasticSearch processors use the Lucene-style syntax for querying, not the JSON DSL. A new processor is needed that can take a full JSON query and execute it. It should also support aggregation queries in this syntax. A user needs to be able to take a query as-is from Kibana and drop it into NiFi and have it just run.
[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2113 I'm not sure I'll have time to close the loop before 1.6.0, so if you'd like to finish the review/merge after Mike's rebase that would be very cool, thanks! ---
[jira] [Commented] (NIFI-4325) Create a new ElasticSearch processor that supports the JSON DSL
[ https://issues.apache.org/jira/browse/NIFI-4325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395597#comment-16395597 ]

ASF GitHub Bot commented on NIFI-4325:
--

Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 Hey @mattyb149, what's your status for this review? From a cursory look, it appears like it just needs an updated pom from @MikeThomsen and the final approval. With talks of 1.6.0 happening it would be nice to get this in so those using ES 6 aren't limited to the HTTP processor. If help is needed to finalize things just let me know where I can help.
[GitHub] nifi issue #2113: NIFI-4325 Added new processor that uses the JSON DSL.
Github user JPercivall commented on the issue: https://github.com/apache/nifi/pull/2113 Hey @mattyb149, what's your status for this review? From a cursory look, it appears like it just needs an updated pom from @MikeThomsen and the final approval. With talks of 1.6.0 happening it would be nice to get this in so those using ES 6 aren't limited to the HTTP processor. If help is needed to finalize things just let me know where I can help. ---
[jira] [Commented] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395575#comment-16395575 ]

ASF GitHub Bot commented on NIFI-4944:
--

Github user moonkev commented on the issue: https://github.com/apache/nifi/pull/2519 I was aware that PutHiveStreaming used snappy-java vs native snappy, but was unaware of the measures snappy-java takes to protect against loading in multiple class loaders. Many thanks for the detailed explanation @mcgilman!

> PutHiveStreaming multiple instances with Snappy fail intermittently
> ---
>
> Key: NIFI-4944
> URL: https://issues.apache.org/jira/browse/NIFI-4944
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Reporter: Matt Burgess
> Assignee: Matt Burgess
> Priority: Major
> Fix For: 1.6.0
>
> When data coming into PutHiveStreaming is compressed with Snappy, then multiple instances of PutHiveStreaming in a flow can cause a failure; the log often shows the following:
> {{org.apache.nifi.processors.hive.PutHiveStreaming$$Lambda$510/1467586448@68a5884d failed to process due to org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null; rolling back session: {}}}
> This is due to a race condition in Snappy 1.0.5 (the version used by the Hive NAR) where two classloaders can try to define the native loader class, thus the second one would fail, giving the error above.
> The proposed solution is to guarantee that Snappy is loaded before this situation is encountered (i.e. before the InstanceClassLoaders are created).
[GitHub] nifi issue #2519: NIFI-4944: Guard against race condition in Snappy for PutH...
Github user moonkev commented on the issue: https://github.com/apache/nifi/pull/2519 I was aware that PutHiveStreaming used snappy-java vs native snappy, but was unaware of the measures snappy-java takes to protect against loading in multiple class loaders. Many thanks for the detailed explanation @mcgilman! ---
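For context on the fix being discussed: the NIFI-4944 change guarantees Snappy is initialized once, on a single classloader, before any per-instance classloaders exist, so only one classloader ever defines the native-loader class. The sketch below shows that eager-initialization pattern in isolation; `NativeBackedCodec` is a hypothetical stand-in for snappy-java's loader, not NiFi or snappy-java code.

```java
import java.util.concurrent.CountDownLatch;

public class EagerPreloadSketch {

    // Hypothetical stand-in for a class whose static initializer loads a
    // native library (snappy-java's loader plays this role in NiFi). If two
    // classloaders each define such a class concurrently, the second native
    // load can fail — the race described in NIFI-4944.
    static final class NativeBackedCodec {
        static {
            // System.loadLibrary("codec") would happen here, exactly once
            // per classloader that defines this class.
        }

        static byte[] compress(final byte[] in) {
            return in; // placeholder for the real native call
        }
    }

    public static void main(final String[] args) throws Exception {
        // The fix pattern: force class initialization up front, before any
        // concurrent users (or child classloaders) are created.
        Class.forName(NativeBackedCodec.class.getName());

        // Later, concurrent callers only ever see the already-initialized class.
        final CountDownLatch done = new CountDownLatch(2);
        final Runnable user = () -> {
            NativeBackedCodec.compress(new byte[0]);
            done.countDown();
        };
        new Thread(user).start();
        new Thread(user).start();
        done.await();
        System.out.println("preloaded once, used concurrently");
    }
}
```

The actual NiFi fix applies this idea at the framework level, touching Snappy before InstanceClassLoaders are built.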
[jira] [Created] (MINIFICPP-423) Implement encode/decode EL functions
Andrew Christianson created MINIFICPP-423:
-

Summary: Implement encode/decode EL functions
Key: MINIFICPP-423
URL: https://issues.apache.org/jira/browse/MINIFICPP-423
Project: NiFi MiNiFi C++
Issue Type: Improvement
Reporter: Andrew Christianson
Assignee: Andrew Christianson

[Encode/Decode Functions|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#encode]
* [escapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapejson]
* [escapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapexml]
* [escapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapecsv]
* [escapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml3]
* [escapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#escapehtml4]
* [unescapeJson|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapejson]
* [unescapeXml|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapexml]
* [unescapeCsv|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapecsv]
* [unescapeHtml3|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml3]
* [unescapeHtml4|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#unescapehtml4]
* [urlEncode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urlencode]
* [urlDecode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#urldecode]
* [base64Encode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64encode]
* [base64Decode|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#base64decode]
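As a rough guide to the semantics these functions are expected to match, a few of them have direct Java standard-library equivalents. This is illustrative only — the MINIFICPP-423 work is a C++ implementation of the NiFi Expression Language functions linked above, not this code.

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ElEncodeSketch {
    public static void main(final String[] args) {
        // ${attr:urlEncode()} / ${attr:urlDecode()} — form-style percent-encoding
        final String encoded = URLEncoder.encode("some value?", StandardCharsets.UTF_8);
        final String decoded = URLDecoder.decode(encoded, StandardCharsets.UTF_8);

        // ${attr:base64Encode()} / ${attr:base64Decode()}
        final String b64 = Base64.getEncoder().encodeToString("nifi".getBytes(StandardCharsets.UTF_8));
        final String back = new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);

        System.out.println(encoded); // some+value%3F
        System.out.println(b64);     // bmlmaQ==
        System.out.println(decoded + " / " + back);
    }
}
```

The escape/unescape family (JSON, XML, CSV, HTML) has no single-call JDK equivalent; NiFi's Java side uses Commons Text-style escaping for those.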
[jira] [Commented] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395549#comment-16395549 ]

ASF GitHub Bot commented on NIFI-4944:
--

Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2519 Thanks @mattyb149! This has been merged to master.
[jira] [Commented] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395550#comment-16395550 ]

ASF GitHub Bot commented on NIFI-4944:
--

Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2519
[jira] [Updated] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Gilman updated NIFI-4944:
--

Resolution: Fixed
Fix Version/s: 1.6.0
Status: Resolved (was: Patch Available)
[GitHub] nifi issue #2519: NIFI-4944: Guard against race condition in Snappy for PutH...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2519 Thanks @mattyb149! This has been merged to master. ---
[jira] [Commented] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395548#comment-16395548 ]

ASF subversion and git services commented on NIFI-4944:
---

Commit d4632bdd5dce85cc7adb8c70bafda44d6a333da9 in nifi's branch refs/heads/master from [~ca9mbu]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d4632bd ]

NIFI-4944: Guard against race condition in Snappy for PutHiveStreaming
NIFI-4944: Removed unnecessary synchronized block, added more comments

This closes #2519
[GitHub] nifi pull request #2519: NIFI-4944: Guard against race condition in Snappy f...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2519 ---
[jira] [Commented] (NIFI-4944) PutHiveStreaming multiple instances with Snappy fail intermittently
[ https://issues.apache.org/jira/browse/NIFI-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16395547#comment-16395547 ]

ASF subversion and git services commented on NIFI-4944:
---

Commit d4632bdd5dce85cc7adb8c70bafda44d6a333da9 in nifi's branch refs/heads/master from [~ca9mbu]
[ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=d4632bd ]

NIFI-4944: Guard against race condition in Snappy for PutHiveStreaming
NIFI-4944: Removed unnecessary synchronized block, added more comments

This closes #2519
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395501#comment-16395501 ]

ASF GitHub Bot commented on NIFI-4885:
--------------------------------------

Github user mcgilman commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2515#discussion_r173869295

    --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/CurrentUserEndpointMerger.java ---
    @@ -53,6 +54,23 @@ protected void mergeResponses(final CurrentUserEntity clientEntity, final Map
    +        final Set<ComponentRestrictionPermissionDTO> clientEntityComponentRestrictionsPermissions = clientEntity.getComponentRestrictionPermissions();
    +        final Set<ComponentRestrictionPermissionDTO> entityComponentRestrictionsPermissions = entity.getComponentRestrictionPermissions();
    +
    +        // only retain the component restriction permissions in common
    +        clientEntityComponentRestrictionsPermissions.retainAll(entityComponentRestrictionsPermissions);
    +
    +        // merge the component restriction permissions
    +        clientEntityComponentRestrictionsPermissions.forEach(clientEntityPermission -> {
    +            final ComponentRestrictionPermissionDTO entityPermission = entityComponentRestrictionsPermissions.stream().filter(entityComponentRestrictionsPermission -> {
    +                return entityComponentRestrictionsPermission.getRequiredPermission().getId().equals(clientEntityPermission.getRequiredPermission().getId());
    +            }).findFirst().orElse(null);
    --- End diff --

    Because we're doing a retainAll right before this, we know that both collections will each have an entry for the current clientEntityPermission. I will update to use get() instead.

> More granular restricted component categories
> ---------------------------------------------
>
>                 Key: NIFI-4885
>                 URL: https://issues.apache.org/jira/browse/NIFI-4885
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Core Framework, Core UI
>            Reporter: Matt Gilman
>            Assignee: Matt Gilman
>            Priority: Major
>
> Update the Restricted annotation to support more granular categories.
> Available categories will map to new access policies. Example categories and
> their corresponding access policies may be
> * read-filesystem (/restricted-components/read-filesystem)
> * write-filesystem (/restricted-components/write-filesystem)
> * code-execution (/restricted-components/code-execution)
> * keytab-access (/restricted-components/keytab-access)
> The hierarchical nature of the access policies will support backward
> compatibility with existing installations where the policy of
> /restricted-components was used to enforce all subcategories. Any users with
> /restricted-components permissions will be granted access to all
> subcategories. In order to leverage the new granular categories, an
> administrator will need to use NiFi to update their access policies (remove a
> user from /restricted-components and place them into the desired subcategory)

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
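The backward-compatibility scheme described in the ticket (a grant on the parent policy /restricted-components implicitly covers every subcategory beneath it) amounts to walking the policy path upward until a granted ancestor is found. A minimal sketch of that lookup, using illustrative names rather than NiFi's actual authorizer API:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the hierarchical policy resolution described in
// NIFI-4885: a user granted /restricted-components is implicitly granted
// every subcategory (e.g. /restricted-components/read-filesystem).
// Class and method names are illustrative, not NiFi's real API.
public class HierarchicalPolicyCheck {

    public static boolean isAuthorized(Set<String> grantedPolicies, String requestedPolicy) {
        // Walk from the requested policy up through its parent paths; a
        // grant at any ancestor level authorizes the request.
        String policy = requestedPolicy;
        while (policy != null && !policy.isEmpty()) {
            if (grantedPolicies.contains(policy)) {
                return true;
            }
            int slash = policy.lastIndexOf('/');
            policy = slash > 0 ? policy.substring(0, slash) : null;
        }
        return false;
    }

    public static void main(String[] args) {
        // A legacy grant on the parent policy covers the new subcategories.
        Set<String> granted = new HashSet<>(Collections.singleton("/restricted-components"));
        System.out.println(isAuthorized(granted, "/restricted-components/read-filesystem")); // true
        System.out.println(isAuthorized(granted, "/some-other-policy")); // false
    }
}
```

This is why existing installations keep working unchanged: only administrators who want the finer-grained control need to move users from the parent policy into a subcategory.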
[jira] [Commented] (NIFI-4885) More granular restricted component categories
[ https://issues.apache.org/jira/browse/NIFI-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395485#comment-16395485 ]

ASF GitHub Bot commented on NIFI-4885:
--------------------------------------

Github user markap14 commented on a diff in the pull request:

    https://github.com/apache/nifi/pull/2515#discussion_r173858341

    --- Diff: nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-cluster/src/main/java/org/apache/nifi/cluster/coordination/http/endpoints/CurrentUserEndpointMerger.java ---
    @@ -53,6 +54,23 @@ protected void mergeResponses(final CurrentUserEntity clientEntity, final Map
    +        final Set<ComponentRestrictionPermissionDTO> clientEntityComponentRestrictionsPermissions = clientEntity.getComponentRestrictionPermissions();
    +        final Set<ComponentRestrictionPermissionDTO> entityComponentRestrictionsPermissions = entity.getComponentRestrictionPermissions();
    +
    +        // only retain the component restriction permissions in common
    +        clientEntityComponentRestrictionsPermissions.retainAll(entityComponentRestrictionsPermissions);
    +
    +        // merge the component restriction permissions
    +        clientEntityComponentRestrictionsPermissions.forEach(clientEntityPermission -> {
    +            final ComponentRestrictionPermissionDTO entityPermission = entityComponentRestrictionsPermissions.stream().filter(entityComponentRestrictionsPermission -> {
    +                return entityComponentRestrictionsPermission.getRequiredPermission().getId().equals(clientEntityPermission.getRequiredPermission().getId());
    +            }).findFirst().orElse(null);
    --- End diff --

    Are we guaranteed at this point that there will be at least one entry? If so, then we should probably just use findFirst().get() because it makes this more clear. If not, then we could end up with a null value here, and the next line would then throw an NPE.
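The exchange above hinges on a Set invariant: after `retainAll`, every element remaining in the client's set has a matching element in the other set, so the stream lookup can never come up empty and `findFirst().get()` is both safe and clearer than `orElse(null)`. A self-contained sketch of the pattern, using plain String ids in place of ComponentRestrictionPermissionDTO (names here are illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the merge pattern discussed in the review, reduced to String
// ids. After retainAll() removes every client permission without a
// counterpart, the subsequent stream lookup is guaranteed to find a match,
// so findFirst().get() cannot throw NoSuchElementException.
public class RetainAllMerge {

    public static Set<String> mergeCommon(Set<String> clientPermissions, Set<String> nodePermissions) {
        Set<String> merged = new HashSet<>(clientPermissions);
        // only retain the permissions in common
        merged.retainAll(nodePermissions);
        // every remaining entry now has a matching entry in nodePermissions
        merged.forEach(clientPermission -> {
            String nodePermission = nodePermissions.stream()
                    .filter(p -> p.equals(clientPermission))
                    .findFirst()
                    .get(); // safe: retainAll guarantees a match exists
            // ... per-permission merge logic would use nodePermission here ...
        });
        return merged;
    }

    public static void main(String[] args) {
        Set<String> client = new HashSet<>(Arrays.asList("read-filesystem", "write-filesystem"));
        Set<String> node = new HashSet<>(Arrays.asList("read-filesystem", "keytab-access"));
        System.out.println(mergeCommon(client, node)); // [read-filesystem]
    }
}
```

One caveat the thread leaves implicit: `retainAll` relies on `equals`/`hashCode` of the set elements, so in the real code the DTO's equality must align with the `getRequiredPermission().getId()` comparison used in the stream filter for the guarantee to hold.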
[jira] [Commented] (MINIFICPP-414) Implement Expression Language boolean logic operations
[ https://issues.apache.org/jira/browse/MINIFICPP-414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395448#comment-16395448 ]

ASF GitHub Bot commented on MINIFICPP-414:
------------------------------------------

Github user achristianson commented on the issue:

    https://github.com/apache/nifi-minifi-cpp/pull/275

    This one depends wholly on MINIFICPP-422 and so cannot be separated from that commit.

> Implement Expression Language boolean logic operations
> ------------------------------------------------------
>
>                 Key: MINIFICPP-414
>                 URL: https://issues.apache.org/jira/browse/MINIFICPP-414
>             Project: NiFi MiNiFi C++
>          Issue Type: Improvement
>            Reporter: Andrew Christianson
>            Assignee: Andrew Christianson
>            Priority: Major
>
> [Boolean Logic|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#boolean]
> * [isNull|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#isnull]
> * [notNull|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#notnull]
> * [isEmpty|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#isempty]
> * [equals|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#equals]
> * [equalsIgnoreCase|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#equalsignorecase]
> * [gt|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#gt]
> * [ge|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#ge]
> * [lt|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#lt]
> * [le|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#le]
> * [and|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#and]
> * [or|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#or]
> * [not|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#not]
> * [ifElse|https://nifi.apache.org/docs/nifi-docs/html/expression-language-guide.html#ifelse]
[GitHub] nifi-minifi-cpp pull request #275: Minificpp 414
GitHub user achristianson opened a pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/275

    Minificpp 414

    Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

    ### For all changes:
    - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
    - [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
    - [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
    - [x] Is your initial contribution a single, squashed commit?

    ### For code changes:
    - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
    - [x] If applicable, have you updated the LICENSE file?
    - [x] If applicable, have you updated the NOTICE file?

    ### For documentation related changes:
    - [x] Have you ensured that format looks appropriate for the output in which it is rendered?

    ### Note:
    Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-414

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi-cpp/pull/275.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #275

commit 1b851e4668aeb3b3e6a3588ba22b60485052cf25
Author: Andrew I. Christianson
Date:   2018-03-09T17:56:37Z

    MINIFICPP-422 Refactored type system in EL to preserve type information between operations

commit 11addb3114baae88b9b02c8c44d8b5cda6610554
Author: Andrew I. Christianson
Date:   2018-03-12T15:29:41Z

    MINIFICPP-414 Added boolean Expression Language functions

---
[jira] [Resolved] (NIFI-4896) Add option to UI for terminating a Processor when stopped but still has threads
[ https://issues.apache.org/jira/browse/NIFI-4896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Gilman resolved NIFI-4896.
-------------------------------
    Resolution: Duplicate

> Add option to UI for terminating a Processor when stopped but still has
> threads
> -----------------------------------------------------------------------
>
>                 Key: NIFI-4896
>                 URL: https://issues.apache.org/jira/browse/NIFI-4896
>             Project: Apache NiFi
>          Issue Type: Sub-task
>          Components: Core UI
>            Reporter: Mark Payne
>            Assignee: Matt Gilman
>            Priority: Major
[jira] [Updated] (NIFI-1295) Add UI option to interrupt a running processor
[ https://issues.apache.org/jira/browse/NIFI-1295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Gilman updated NIFI-1295:
------------------------------
    Issue Type: Sub-task  (was: Improvement)
        Parent: NIFI-78

> Add UI option to interrupt a running processor
> ----------------------------------------------
>
>                 Key: NIFI-1295
>                 URL: https://issues.apache.org/jira/browse/NIFI-1295
>             Project: Apache NiFi
>          Issue Type: Sub-task
>          Components: Core UI
>    Affects Versions: 0.4.0
>            Reporter: Oleg Zhurakousky
>            Assignee: Matt Gilman
>            Priority: Major
>
> Basically, we need to expose an option for a user to kill Processors that
> can't be shut down the usual way (see NIFI-78 for more details).
[jira] [Created] (NIFI-4963) Add support for Hive 3.0 processors
Matt Burgess created NIFI-4963:
------------------------------

             Summary: Add support for Hive 3.0 processors
                 Key: NIFI-4963
                 URL: https://issues.apache.org/jira/browse/NIFI-4963
             Project: Apache NiFi
          Issue Type: New Feature
          Components: Extensions
            Reporter: Matt Burgess

Apache Hive is working on Hive 3.0; this Jira is to add a bundle of components (much like the current Hive bundle) that supports Hive 3.0 (and Apache ORC, if necessary).
[jira] [Commented] (NIFIREG-134) Enable Spring Boot Actuator REST API endpoints
[ https://issues.apache.org/jira/browse/NIFIREG-134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16395388#comment-16395388 ]

ASF GitHub Bot commented on NIFIREG-134:
----------------------------------------

Github user kevdoran commented on the issue:

    https://github.com/apache/nifi-registry/pull/97

    Thanks for reviewing, @bbende! I'll open JIRAs for the follow-on work that builds upon this capability to make it more useful for NiFi Registry users.

> Enable Spring Boot Actuator REST API endpoints
> ----------------------------------------------
>
>                 Key: NIFIREG-134
>                 URL: https://issues.apache.org/jira/browse/NIFIREG-134
>             Project: NiFi Registry
>          Issue Type: New Feature
>            Reporter: Kevin Doran
>            Assignee: Kevin Doran
>            Priority: Minor
>             Fix For: 0.2.0
>
> Spring Boot comes with an optional module known as Actuator which enables
> remote administration, management, and monitoring of the application through
> the REST API:
> https://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/#production-ready-endpoints
> This task is to enable the Actuator module and expose it in such a way that
> access is gated by the standard NiFi Registry Authorization framework.
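The thread does not show how NiFi Registry actually wires Actuator in (the PR gates it behind the Registry's own Authorization framework). Purely as an illustration of the Spring Boot side, here is what selectively exposing Actuator endpoints looks like with stock Spring Boot 2.x properties; these keys are standard Spring Boot, not NiFi Registry configuration:

```properties
# Illustrative stock Spring Boot 2.x settings (not NiFi Registry's config):
# expose only a chosen subset of Actuator endpoints over HTTP
management.endpoints.web.exposure.include=health,info,metrics
# serve the exposed endpoints under a dedicated base path
management.endpoints.web.base-path=/actuator
```

In a setup like NIFIREG-134 describes, such exposure settings would be paired with an authorization layer in front of the Actuator paths, so that enabling an endpoint never bypasses the application's access control.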
[jira] [Resolved] (NIFIREG-134) Enable Spring Boot Actuator REST API endpoints
[ https://issues.apache.org/jira/browse/NIFIREG-134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bryan Bende resolved NIFIREG-134.
---------------------------------
    Resolution: Fixed