[jira] [Created] (NIFIREG-142) First start problem: Error creating bean with name 'flywayInitializer'
Daniel Oakley created NIFIREG-142: - Summary: First start problem: Error creating bean with name 'flywayInitializer' Key: NIFIREG-142 URL: https://issues.apache.org/jira/browse/NIFIREG-142 Project: NiFi Registry Issue Type: Bug Affects Versions: 0.1.0 Environment: RHEL7 openjdk version "1.8.0_161" Reporter: Daniel Oakley Downloaded the 0.1.0 tarball and tried to run it on RHEL7. No changes to any config files. The error in the log file was: java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'flywayInitializer' defined in class path resource [org/springframework/boot/autoconfigure/flyway/FlywayAutoConfiguration$FlywayConfiguration.class]: Invocation of init method failed; nested exception is org.flywaydb.core.api.FlywayException: Validate failed: Detected failed migration to version 1.3 (DropBucketItemNameUniqueness) I could not find anything about "flyway" in the docs or config files... any hints on how to get around this problem? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2226: NIFI-4080: Added EL support to fields in ValidateCSV
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2226 Are you comfortable giving a +1 without the other test failures? If so we can get a committer to merge. Thanks to both @mgaido91 and @patricker for improvements. ---
[jira] [Commented] (NIFI-4080) ValidateCSV - Add support for Expression Language
[ https://issues.apache.org/jira/browse/NIFI-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356293#comment-16356293 ] ASF GitHub Bot commented on NIFI-4080: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2226 Are you comfortable giving a +1 without the other test failures? If so we can get a committer to merge. Thanks to both @mgaido91 and @patricker for improvements. > ValidateCSV - Add support for Expression Language > -- > > Key: NIFI-4080 > URL: https://issues.apache.org/jira/browse/NIFI-4080 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > The ValidateCSV processor could benefit if the following fields supported > Expression Language evaluation: > - Schema > - Quote character > - Delimiter character > - End of line symbols
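For readers unfamiliar with Expression Language, the improvement above means these properties would be resolved per FlowFile at runtime rather than fixed at configuration time. A toy sketch of that idea (this is not NiFi's actual EL evaluator; the class and method names are made up for illustration):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy illustration only, NOT NiFi's Expression Language implementation.
// A property value such as "${csv.delimiter}" is resolved against the
// incoming FlowFile's attributes each time the processor runs.
public class ElSketch {
    private static final Pattern EL = Pattern.compile("\\$\\{([^}]+)\\}");

    static String evaluate(String propertyValue, Map<String, String> attributes) {
        Matcher m = EL.matcher(propertyValue);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            // Replace ${name} with the attribute's value, or "" if absent.
            m.appendReplacement(out, Matcher.quoteReplacement(
                    attributes.getOrDefault(m.group(1), "")));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```

With a FlowFile attribute csv.delimiter=";", a Delimiter character property of "${csv.delimiter}" would evaluate to ";" for that FlowFile.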
[jira] [Updated] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4846: --- Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > AvroTypeUtil to support more input types for logical decimal conversion > --- > > Key: NIFI-4846 > URL: https://issues.apache.org/jira/browse/NIFI-4846 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > Fix For: 1.6.0 > > > Currently, only double and BigDecimal can be mapped to a logical decimal Avro > field. AvroTypeUtil should support String, Integer and Long as well.
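The change described above can be pictured with a small standalone sketch (this is not the actual AvroTypeUtil code; names are illustrative): inputs of several Java types are normalized to BigDecimal before being written as an Avro logical decimal.

```java
import java.math.BigDecimal;

// Illustrative sketch only, NOT the actual AvroTypeUtil change: widen the
// set of raw input types accepted for an Avro logical-decimal field by
// normalizing String, Integer and Long to BigDecimal alongside the
// previously supported Double and BigDecimal.
public class DecimalConversionSketch {
    static BigDecimal toBigDecimal(Object raw) {
        if (raw instanceof BigDecimal) {
            return (BigDecimal) raw;
        }
        if (raw instanceof Double) {
            return BigDecimal.valueOf((Double) raw);
        }
        if (raw instanceof Integer || raw instanceof Long) {
            return BigDecimal.valueOf(((Number) raw).longValue());
        }
        if (raw instanceof String) {
            return new BigDecimal((String) raw);
        }
        throw new IllegalArgumentException(
                "Cannot convert " + raw.getClass() + " to a logical decimal");
    }
}
```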
[jira] [Commented] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356288#comment-16356288 ] ASF GitHub Bot commented on NIFI-4846: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 Thank you sir, merging to master
[jira] [Commented] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356291#comment-16356291 ] ASF GitHub Bot commented on NIFI-4846: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2451
[GitHub] nifi pull request #2451: NIFI-4846: AvroTypeUtil to support more input types...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2451 ---
[GitHub] nifi issue #2451: NIFI-4846: AvroTypeUtil to support more input types for lo...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 Thank you sir, merging to master ---
[jira] [Commented] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356276#comment-16356276 ] ASF GitHub Bot commented on NIFI-4846: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2451 @mattyb149 Rebased with the latest master. Thanks!
[GitHub] nifi issue #2451: NIFI-4846: AvroTypeUtil to support more input types for lo...
Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2451 @mattyb149 Rebased with the latest master. Thanks! ---
[jira] [Updated] (NIFI-4853) PutMongoRecord doesn't handle nested records
[ https://issues.apache.org/jira/browse/NIFI-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4853: --- Status: Patch Available (was: In Progress) > PutMongoRecord doesn't handle nested records > > > Key: NIFI-4853 > URL: https://issues.apache.org/jira/browse/NIFI-4853 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Matt Burgess >Assignee: Matt Burgess >Priority: Major > > PutMongoRecord works well with flat records, but when attempting to put in > nested records (records whose fields are arrays or records themselves), the > Mongo serializer doesn't know how to handle the complex NiFi record field > types (such as MapRecord). > The fix is to traverse all fields in the record, transforming the fields (if > necessary) to Java objects for use by the Mongo serializer(s). Something very > similar was done for PutDruidRecord, and in fact there is a utility method > DataTypeUtils.convertRecordFieldtoObject() that can perform this task.
[jira] [Commented] (NIFI-4853) PutMongoRecord doesn't handle nested records
[ https://issues.apache.org/jira/browse/NIFI-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356222#comment-16356222 ] ASF GitHub Bot commented on NIFI-4853: -- GitHub user mattyb149 opened a pull request: https://github.com/apache/nifi/pull/2457 NIFI-4853: Fixed PutMongoRecord handling of nested records Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/mattyb149/nifi NIFI-4853 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2457.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2457 commit 2df44b51512da9cb910ab9281fc53bb952116ad3 Author: Matthew Burgess Date: 2018-02-07T23:15:35Z NIFI-4853: Fixed PutMongoRecord handling of nested records
[GitHub] nifi pull request #2457: NIFI-4853: Fixed PutMongoRecord handling of nested ...
GitHub user mattyb149 opened a pull request: https://github.com/apache/nifi/pull/2457 NIFI-4853: Fixed PutMongoRecord handling of nested records ---
[jira] [Assigned] (NIFI-4853) PutMongoRecord doesn't handle nested records
[ https://issues.apache.org/jira/browse/NIFI-4853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess reassigned NIFI-4853: -- Assignee: Matt Burgess
[jira] [Created] (NIFI-4853) PutMongoRecord doesn't handle nested records
Matt Burgess created NIFI-4853: -- Summary: PutMongoRecord doesn't handle nested records Key: NIFI-4853 URL: https://issues.apache.org/jira/browse/NIFI-4853 Project: Apache NiFi Issue Type: Bug Components: Extensions Reporter: Matt Burgess PutMongoRecord works well with flat records, but when attempting to put in nested records (records whose fields are arrays or records themselves), the Mongo serializer doesn't know how to handle the complex NiFi record field types (such as MapRecord). The fix is to traverse all fields in the record, transforming the fields (if necessary) to Java objects for use by the Mongo serializer(s). Something very similar was done for PutDruidRecord, and in fact there is a utility method DataTypeUtils.convertRecordFieldtoObject() that can perform this task.
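The proposed fix can be sketched in isolation (illustrative only; the real patch delegates to NiFi's DataTypeUtils.convertRecordFieldtoObject(), and the class name here is made up): complex field values are recursively converted to plain Java maps and lists that a downstream serializer can handle.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only, not the actual NiFi code. A nested "record" is modeled
// here as a Map whose values may themselves be Maps or Lists; each complex
// value is recursively converted to plain Java objects that a downstream
// serializer (such as Mongo's) knows how to handle.
public class RecordFlattenSketch {
    static Object toPlainObject(Object field) {
        if (field instanceof Map) {
            Map<String, Object> converted = new LinkedHashMap<>();
            for (Map.Entry<?, ?> e : ((Map<?, ?>) field).entrySet()) {
                converted.put(String.valueOf(e.getKey()), toPlainObject(e.getValue()));
            }
            return converted;
        }
        if (field instanceof List) {
            List<Object> converted = new ArrayList<>();
            for (Object item : (List<?>) field) {
                converted.add(toPlainObject(item));
            }
            return converted;
        }
        return field; // scalars pass through unchanged
    }
}
```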
[jira] [Updated] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-4837: -- Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > Thread leak on HandleHTTPRequest processor > -- > > Key: NIFI-4837 > URL: https://issues.apache.org/jira/browse/NIFI-4837 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0 > Environment: CENTOS 7 >Reporter: Matthew Clarke >Assignee: Matt Gilman >Priority: Blocker > Fix For: 1.6.0 > > Attachments: image-2018-02-02-11-14-51-964.png, > image-2018-02-02-11-16-52-389.png > > > When you have multiple HandleHTTPRequest processors trying to listen on the > same port, for every Listen attempt NiFi builds a new thread and never > recycles the old thread, which eventually leads to NiFi shutting down when > reaching the OS limit on the number of threads (default is 10,000). > The following error can be seen in nifi-app.log: > Caused by: java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > This has happened before with version 1.2 and probably even with older > versions, but I could also replicate the issue with the latest 1.5 version. > Steps to replicate the issue: > 1) Build a simple flow with 2 HandleHTTPRequest processors listening on the > same port. > !image-2018-02-02-11-14-51-964.png! > 2) Start the processors. 
> — The second HandleHTTPRequest processor starts logging the following, as > expected: > 2018-02-02 16:18:29,518 ERROR [Timer-Driven Process Thread-3] > o.a.n.p.standard.HandleHttpRequest > HandleHttpRequest[id=af013c62-b26f-1eeb-ae81-8423c70bdc7f] Failed to process > session due to org.apache.nifi.processor.exception.ProcessException: Failed > to initialize the server: {} > org.apache.nifi.processor.exception.ProcessException: Failed to initialize > the server > > Caused by: java.net.BindException: Address already in use > ... > ... 12 common frames omitted > > 3) Go to the Summary section in NiFi and watch the number of threads going up > to 9959. > !image-2018-02-02-11-16-52-389.png! > > With the above, I had processors scheduled on the primary node only so as to not affect > every node. > If you stop the second HandleHTTPRequest processor the threads stop climbing, > but are not released. > > After this, NiFi will soon stop. > > A restart of NiFi is required to release these threads. >
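The fix that resolved this ticket follows a general pattern: if server start-up fails (for example because the port is already bound), explicitly run the shutdown sequence so resources allocated before the failure are released instead of leaking on every retry. A minimal standalone sketch of that pattern, using a plain ServerSocket rather than NiFi's actual Jetty wiring (class and method names are illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Minimal sketch of the cleanup pattern (illustrative names, not NiFi's
// actual code): if start-up fails, close what was already allocated
// instead of leaking it on each scheduled retry.
public class SafeStartSketch {
    static ServerSocket startOrCleanUp(int port) {
        ServerSocket socket;
        try {
            socket = new ServerSocket(); // allocate, still unbound
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        try {
            socket.bind(new InetSocketAddress(port));
            return socket;
        } catch (IOException e) {
            closeQuietly(socket); // shutdown sequence: release the allocation
            throw new UncheckedIOException(e);
        }
    }

    static void closeQuietly(ServerSocket socket) {
        try {
            socket.close();
        } catch (IOException ignored) {
        }
    }
}
```

Without the close in the catch block, every failed bind attempt would leave an allocated resource behind, which is the shape of the leak described in this ticket.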
[GitHub] nifi pull request #2455: NIFI-4837: Addressing thread leak in HandleHTTPRequ...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2455 ---
[jira] [Commented] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356114#comment-16356114 ] ASF subversion and git services commented on NIFI-4837: --- Commit f3013d0764202dfb936231eceecc64f40540e0c3 in nifi's branch refs/heads/master from [~mcgilman] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f3013d0 ] NIFI-4837: - When Jetty initialization fails, perform a shutdown sequence to ensure all allocated resources are released. This closes #2455. Signed-off-by: Bryan Bende
[jira] [Commented] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356113#comment-16356113 ] ASF GitHub Bot commented on NIFI-4837: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2455 +1 Looks good, tested the scenario in the JIRA and verified the thread count no longer continues to increase, will merge
[GitHub] nifi issue #2455: NIFI-4837: Addressing thread leak in HandleHTTPRequest
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2455 +1 Looks good, tested the scenario in the JIRA and verified the thread count no longer continues to increase, will merge ---
[jira] [Updated] (NIFI-4828) MergeContent only processes one bin even if there are multiple ready bins
[ https://issues.apache.org/jira/browse/NIFI-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-4828: - Resolution: Fixed Fix Version/s: 1.6.0 Status: Resolved (was: Patch Available) > MergeContent only processes one bin even if there are multiple ready bins > - > > Key: NIFI-4828 > URL: https://issues.apache.org/jira/browse/NIFI-4828 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.6.0 > > Attachments: mergecontent-multi-bins.xml > > > [BinFiles.processBins|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-processor-utils/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java#L219] > is expected to loop through all ready bins, but it only processes the first > bin. This incurs larger latency for FlowFiles to be merged. > For example, if there are two FlowFiles FF1 and FF2 queued for a MergeContent > processor, each has an attribute named 'group'. FF1.group = 'a', and > FF2.group = 'b'. MergeContent is configured to use 'Correlation Attribute > Name' as 'group'. > MergeContent takes FF1 and FF2 from its input queue, then correctly creates > two bins for groups a and b, each holding FF1 and FF2 respectively. > But BinFiles.processBins only processes the first bin, which can be either > the bin for group a or b. The other bin is left unprocessed. > The attached flow template has a flow to reproduce this. > Expected behavior is for MergeContent to process all queued FlowFiles in a single > onTrigger run.
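The bug and fix can be illustrated with a self-contained sketch (not the actual BinFiles code; names are made up): the buggy version handled only the first ready bin per onTrigger, while the fix drains every ready bin before returning.

```java
import java.util.List;
import java.util.Queue;

// Illustrative sketch only, not the actual BinFiles.processBins code.
// A "bin" is modeled as a List of FlowFile ids waiting on a queue of
// ready bins; the method must drain them all in one onTrigger run.
public class ProcessBinsSketch {
    static int processBins(Queue<List<String>> readyBins) {
        int processed = 0;
        List<String> bin;
        // The buggy version effectively did a single poll and returned;
        // the fix loops until no ready bins remain.
        while ((bin = readyBins.poll()) != null) {
            // ... merge the FlowFiles held in `bin` here ...
            processed++;
        }
        return processed;
    }
}
```

With two ready bins queued (group a and group b), one call now processes both, rather than leaving the second bin for a later onTrigger and adding latency.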
[jira] [Commented] (NIFI-4828) MergeContent only processes one bin even if there are multiple ready bins
[ https://issues.apache.org/jira/browse/NIFI-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356049#comment-16356049 ] ASF GitHub Bot commented on NIFI-4828: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2444
[jira] [Commented] (NIFI-4828) MergeContent only processes one bin even if there are multiple ready bins
[ https://issues.apache.org/jira/browse/NIFI-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356050#comment-16356050 ] ASF GitHub Bot commented on NIFI-4828: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2444 @ijokarumawak thanks for addressing! Code review looks good. All unit tests pass (including the ones that were @ignored previously!) and some local testing all worked out exactly as expected. +1 merged to master.
[GitHub] nifi issue #2444: NIFI-4828: Fix MergeContent to process all ready bins
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2444 @ijokarumawak thanks for addressing! Code review looks good. All unit tests pass (including the ones that were @ignored previously!) and some local testing all worked out exactly as expected. +1 merged to master. ---
[GitHub] nifi pull request #2444: NIFI-4828: Fix MergeContent to process all ready bi...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2444 ---
[jira] [Commented] (NIFI-4828) MergeContent only processes one bin even if there are multiple ready bins
[ https://issues.apache.org/jira/browse/NIFI-4828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356046#comment-16356046 ] ASF subversion and git services commented on NIFI-4828: --- Commit e9af6c6ad85bb7eafbd8d0703e783032120ea577 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=e9af6c6 ] NIFI-4828: Fix MergeContent to process all ready bins Before this fix, MergeContent only processed the first bin even if there were multiple bins. There were two unit tests marked with Ignore those had been failing because of this. This closes #2444. Signed-off-by: Mark Payne> MergeContent only processes one bin even if there are multiple ready bins > - > > Key: NIFI-4828 > URL: https://issues.apache.org/jira/browse/NIFI-4828 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.0.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Attachments: mergecontent-multi-bins.xml > > > [BinFiles.processBins|https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-extension-utils/nifi-processor-utils/src/main/java/org/apache/nifi/processor/util/bin/BinFiles.java#L219] > is expected to loop through all ready bins, but it only process the first > bin. This incurs larger latency for FlowFiles to be merged. > For example, if there are two FlowFiles FF1 and FF2 queued for a MergeContent > processor, each has an attribute named 'group'. FF1.group = 'a', and > FF2.group = 'b'. MergeContent is configured to use 'Correlation Attribute > Name' as 'group'. > MergeContent takes FF1 and FF2 from its input queue, then correctly creates > two bins for group a and b, each having FF1 and FF2 respectively. > Bug BinFiles.processBins only processes the first bin, which can be either > the bin for group a or b. The other bin is left unprocessed. > The attached flow template has a flow to reproduce this. 
> Expected behavior is for MergeContent to process all queued FlowFiles in a single > onTrigger run. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
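The commit message above describes the fix at a high level; the core change is that processBins must drain every ready bin in one onTrigger invocation instead of returning after the first. A hedged, language-neutral sketch in Python (class and method names are illustrative, not the actual NiFi Java code):

```python
from collections import defaultdict

class BinFiles:
    """Toy analogue of NiFi's bin processing (illustrative only)."""

    def __init__(self):
        # Bins keyed by the correlation attribute value, e.g. 'group'.
        self.bins = defaultdict(list)
        self.merged = []

    def add(self, group, flowfile):
        self.bins[group].append(flowfile)

    def process_bins(self):
        """Process ALL ready bins in one trigger (the NIFI-4828 behavior).

        The pre-fix bug was equivalent to processing only the first key
        and returning, leaving the other bins to wait for a later trigger.
        """
        processed = 0
        for group in list(self.bins):      # snapshot the keys; we mutate below
            self.merged.append((group, self.bins.pop(group)))
            processed += 1
        return processed

b = BinFiles()
b.add('a', 'FF1')                          # FF1.group = 'a'
b.add('b', 'FF2')                          # FF2.group = 'b'
assert b.process_bins() == 2               # both bins handled in one run
```

With the pre-fix behavior, the second bin would only be merged on a later trigger, which is exactly the added latency the issue describes.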
[jira] [Commented] (NIFI-4538) Add Process Group information to Search results
[ https://issues.apache.org/jira/browse/NIFI-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356043#comment-16356043 ] ASF GitHub Bot commented on NIFI-4538: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2364 @mcgilman 's suggestion makes sense to me and should prove useful in the search results. Make it so! :) > Add Process Group information to Search results > --- > > Key: NIFI-4538 > URL: https://issues.apache.org/jira/browse/NIFI-4538 > Project: Apache NiFi > Issue Type: Improvement > Components: Core UI >Reporter: Matt Burgess >Assignee: Yuri >Priority: Major > Attachments: Screenshot from 2017-12-23 21-08-45.png, Screenshot from > 2017-12-23 21-42-24.png > > > When querying for components in the Search bar, no Process Group (PG) > information is displayed. When copies of PGs are made on the canvas, the > search results can be hard to navigate, as you may jump into a different PG > than what you're looking for. > I propose adding (conditionally, based on user permissions) the immediate > parent PG name and/or ID, as well as the top-level PG. In this case I mean > top-level being the highest parent PG except root, unless the component's > immediate parent PG is root, in which case it wouldn't need to be displayed > (or could be displayed as the root PG, albeit a duplicate of the immediate). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2364: NIFI-4538 - Add Process Group information to...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2364 @mcgilman 's suggestion makes sense to me and should prove useful in the search results. Make it so! :) ---
[jira] [Commented] (NIFIREG-126) Entering an invalid bucket id in a deep link causes JS error
[ https://issues.apache.org/jira/browse/NIFIREG-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16356020#comment-16356020 ] ASF GitHub Bot commented on NIFIREG-126: Github user scottyaslan commented on the issue: https://github.com/apache/nifi-registry/pull/99 @kevdoran I have updated this PR with @moranr suggested messaging. > Entering an invalid bucket id in a deep link causes JS error > > > Key: NIFIREG-126 > URL: https://issues.apache.org/jira/browse/NIFIREG-126 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.1.0 >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Major > > As a user when I enter an invalid deep link I want to be routed to view all > the items in all the buckets that I am authorized to view and to be notified > that the requested bucket/item id is invalid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry issue #99: [NIFIREG-126] adding some polish and testing around...
Github user scottyaslan commented on the issue: https://github.com/apache/nifi-registry/pull/99 @kevdoran I have updated this PR with @moranr suggested messaging. ---
[jira] [Commented] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355999#comment-16355999 ] ASF GitHub Bot commented on NIFI-4837: -- Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2455 Reviewing... > Thread leak on HandleHTTPRequest processor > -- > > Key: NIFI-4837 > URL: https://issues.apache.org/jira/browse/NIFI-4837 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0 > Environment: CENTOS 7 >Reporter: Matthew Clarke >Assignee: Matt Gilman >Priority: Blocker > Attachments: image-2018-02-02-11-14-51-964.png, > image-2018-02-02-11-16-52-389.png > > > When you have multiple HandleHTTPRequest processors trying to listen on the > same port, for every Listen attempt NiFi builds a new thread and never > recycles the old thread, which eventually leads to NiFi shutting down when > reaching the OS limit of the number of threads (default is 10,000). > The following error can be seen in nifi-app.log: > Caused by: java.lang.OutOfMemoryError: unable to create new native thread > at java.lang.Thread.start0(Native Method) > at java.lang.Thread.start(Thread.java:714) > This has happened before with version 1.2 and probably even with older > versions, but I could also replicate the issue with the latest 1.5 version. > Steps to replicate the issue: > 1) Build a simple flow with 2 HandleHTTPRequest processors listening on the > same port. > !image-2018-02-02-11-14-51-964.png! > 2) Start the processors. 
> — The second HandleHTTPRequest processor starts logging the following, as > expected: > 2018-02-02 16:18:29,518 ERROR [Timer-Driven Process Thread-3] > o.a.n.p.standard.HandleHttpRequest > HandleHttpRequest[id=af013c62-b26f-1eeb-ae81-8423c70bdc7f] Failed to process > session due to org.apache.nifi.processor.exception.ProcessException: Failed > to initialize the server: {} > org.apache.nifi.processor.exception.ProcessException: Failed to initialize > the server > > Caused by: java.net.BindException: Address already in use > ... > ... 12 common frames omitted > > 3) Go to the Summary section in NiFi and watch the number of threads going up > to 9959. > !image-2018-02-02-11-16-52-389.png! > > With the above, I had processors scheduled on the primary node only so as to not affect > every node. > If you stop the second HandleHTTPRequest processor the threads stop climbing, > but are not released. > > After this, NiFi will soon stop. > > A restart of NiFi is required to release these threads. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
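The eventual fix (PR #2455) follows a standard pattern: when server initialization fails partway, run the shutdown sequence so the partially allocated resources are released instead of leaking on every retry. A minimal Python sketch of that pattern, with a raw socket standing in for Jetty (illustrative only, not the NiFi code):

```python
import socket

def start_server(port):
    """Bind a listener; on failure, release what was allocated.

    Mirrors the shape of the NIFI-4837 fix: if initialization fails
    (e.g. the port is already bound), run the cleanup path instead of
    leaking the allocated resource and retrying forever.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
        sock.listen(1)
        return sock
    except OSError:
        sock.close()                 # the crucial cleanup on failed init
        raise

first = start_server(0)              # port 0: let the OS pick a free port
taken = first.getsockname()[1]
try:
    start_server(taken)              # second listener on the same port fails...
except OSError:
    pass                             # ...but releases its socket before raising
first.close()
```

Without the close() in the failure path, each scheduled retry would leak one resource, which is the per-trigger thread leak the reporter observed.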
[GitHub] nifi issue #2455: NIFI-4837: Addressing thread leak in HandleHTTPRequest
Github user bbende commented on the issue: https://github.com/apache/nifi/pull/2455 Reviewing... ---
[GitHub] nifi issue #2456: Fix for unit tests that are causing build failures in cert...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2456 @mosermw thanks for confirming that you were able to replicate and see the correct results. Was able to confirm that Travis now builds properly as well. Pushed fix to master. ---
[GitHub] nifi pull request #2456: Fix for unit tests that are causing build failures ...
Github user markap14 closed the pull request at: https://github.com/apache/nifi/pull/2456 ---
[GitHub] nifi issue #2456: Fix for unit tests that are causing build failures in cert...
Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2456 Nevermind my last comment. It looks like this is going to resolve both unit test failures. ---
[GitHub] nifi issue #2456: Fix for unit tests that are causing build failures in cert...
Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2456 @markap14 I tried your change locally and it does fix one of the two test failures. The validateConsumeWithCustomHeadersAndProperties() is fixed but validateFailOnUnsupportedMessageType() still fails. I haven't been able to figure out why, yet. ---
[GitHub] nifi issue #2456: Fix for unit tests that are causing build failures in cert...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2456 This PR exists solely to check that it addresses a build failure that occurs on Travis CI. I have been unable to replicate locally but believe that I understand the issue. If this addresses the build failure, I will handle merging to master. ---
[GitHub] nifi pull request #2456: Fix for unit tests that are causing build failures ...
GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/2456 Fix for unit tests that are causing build failures in certain environments Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi jms-processors-unit-test-failures Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2456.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2456 commit 68d6cadfc17fe7d4b381462179862f23f35a4b38 Author: Mark Payne Date: 2018-02-07T19:13:06Z Fix for unit tests that are causing build failures in certain environments ---
[jira] [Updated] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-4837: -- Status: Patch Available (was: In Progress) > Thread leak on HandleHTTPRequest processor > -- > > Key: NIFI-4837 > URL: https://issues.apache.org/jira/browse/NIFI-4837 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0, 1.4.0, 1.3.0, 1.2.0 > Environment: CENTOS 7 >Reporter: Matthew Clarke >Assignee: Matt Gilman >Priority: Blocker -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355894#comment-16355894 ] ASF GitHub Bot commented on NIFI-4837: -- GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/2455 NIFI-4837: Addressing thread leak in HandleHTTPRequest NIFI-4837: - When Jetty initialization fails, perform a shutdown sequence to ensure all allocated resources are released. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-4837 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2455.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2455 commit 535fadab4830de0634617eb66698fc9204987b69 Author: Matt Gilman Date: 2018-02-07T18:52:02Z NIFI-4837: - When Jetty initialization fails, perform a shutdown sequence to ensure all allocated resources are released. > Thread leak on HandleHTTPRequest processor > -- > > Key: NIFI-4837 > URL: https://issues.apache.org/jira/browse/NIFI-4837 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0 > Environment: CENTOS 7 >Reporter: Matthew Clarke >Assignee: Matt Gilman >Priority: Blocker -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2455: NIFI-4837: Addressing thread leak in HandleHTTPRequ...
GitHub user mcgilman opened a pull request: https://github.com/apache/nifi/pull/2455 NIFI-4837: Addressing thread leak in HandleHTTPRequest NIFI-4837: - When Jetty initialization fails, perform a shutdown sequence to ensure all allocated resources are released. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mcgilman/nifi NIFI-4837 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2455.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2455 commit 535fadab4830de0634617eb66698fc9204987b69 Author: Matt Gilman Date: 2018-02-07T18:52:02Z NIFI-4837: - When Jetty initialization fails, perform a shutdown sequence to ensure all allocated resources are released. ---
[jira] [Assigned] (MINIFICPP-31) Support UpdateAttribute for nifi-minifi-cpp
[ https://issues.apache.org/jira/browse/MINIFICPP-31?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Christianson reassigned MINIFICPP-31: Assignee: Andrew Christianson > Support UpdateAttribute for nifi-minifi-cpp > --- > > Key: MINIFICPP-31 > URL: https://issues.apache.org/jira/browse/MINIFICPP-31 > Project: NiFi MiNiFi C++ > Issue Type: New Feature >Reporter: Randy Gelhausen >Assignee: Andrew Christianson >Priority: Major > > nifi-minifi-cpp agents can generate multiple "streams" of flowfiles. > For instance, to monitor a host, nifi-minifi-cpp runs nmon, ps, netstat, and > gathers logfiles from applications. > But, for a given flowfile, any downstream NiFi collectors won't have > visibility into the originating hostname, nor metadata about which "stream" > (ExecuteProcess(nmon), ExecuteProcess(ps), TailFile(app1), TailFile(app2)) > generated it. > One solution is to use a separate InputPort for each stream. This works, but > burdens both the team working on agent flows and the team managing the > collector: they have to be in concert. > A simpler (better?) approach is to allow agent teams to tag flowfiles with > differentiating metadata via use of UpdateAttribute. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
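The proposal above amounts to an UpdateAttribute step on the agent that stamps each flowfile with its origin before sending, so one input port can serve every stream. A toy sketch, with a dict standing in for a flowfile (attribute names such as 'source_host' and 'stream' are hypothetical, not part of any MiNiFi API):

```python
def update_attributes(flowfile, **attrs):
    """Toy UpdateAttribute: return a copy of the flowfile with the extra
    attributes merged in, leaving the original untouched."""
    return {**flowfile,
            "attributes": {**flowfile.get("attributes", {}), **attrs}}

ff = {"content": b"cpu 42%", "attributes": {}}
tagged = update_attributes(ff, source_host="edge-01", stream="nmon")
assert tagged["attributes"]["stream"] == "nmon"
assert ff["attributes"] == {}        # original flowfile is not mutated
```

A downstream collector can then route on these attributes instead of needing a dedicated input port per stream.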
[jira] [Commented] (MINIFICPP-118) Dynamic Properties support for processors
[ https://issues.apache.org/jira/browse/MINIFICPP-118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355859#comment-16355859 ] ASF GitHub Bot commented on MINIFICPP-118: -- Github user phrocker commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/261#discussion_r166711961
--- Diff: libminifi/src/core/ConfigurableComponent.cpp ---
@@ -160,6 +161,78 @@ bool ConfigurableComponent::setSupportedProperties(std::set<Property> properties
   return true;
 }
+
+bool ConfigurableComponent::getDynamicProperty(const std::string name, std::string &value) {
+  std::lock_guard<std::mutex> lock(configuration_mutex_);
+
+  auto it = dynamic_properties_.find(name);
+  if (it != dynamic_properties_.end()) {
+    Property item = it->second;
+    value = item.getValue();
+    logger_->log_debug("Component %s dynamic property name %s value %s", name, item.getName(), value);
+    return true;
+  } else {
+    return false;
+  }
+}
+
+bool ConfigurableComponent::createDynamicProperty(const std::string &name, const std::string &value) {
+  if (!supportsDynamicProperties()) {
+    logger_->log_debug("Attempted to create dynamic property %s, but this component does not support creation "
+                       "of dynamic properties.", name);
+    return false;
+  }
+
+  Property dyn(name, DEFAULT_DYNAMIC_PROPERTY_DESC, value);
+  logger_->log_info("Processor %s dynamic property '%s' value '%s'",
+                    name.c_str(),
--- End diff --
nit: not a big deal but you used c_str() here and then name above. if you happen to make other changes and remember this feel free to change it, but otherwise it's not important enough to change on its own imo. 
> Dynamic Properties support for processors > - > > Key: MINIFICPP-118 > URL: https://issues.apache.org/jira/browse/MINIFICPP-118 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Jeremy Dyer >Assignee: Andrew Christianson >Priority: Major > > Currently any Property read from the config.yml file that is not explicitly > defined in the processor's implementation will be ignored by Processor.cpp > when reading the configurations. This prevents any dynamic property from > being defined in the config.yml and passed to the processor at runtime. > Certain processors rely heavily on the concept of dynamic properties that are > passed to them at runtime to handle things like setting dynamic properties > based on properties that are declared. All of these possibilities cannot be > handled upfront, so there should be a mechanism, most likely in Processor.cpp, > that allows for a list of dynamicProperties that are parsed from the > config.yml file to be stored and accessed by the underlying processor > implementation at runtime and used as the processor desires. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #261: MINIFICPP-118 Added dynamic properties su...
Github user phrocker commented on a diff in the pull request: https://github.com/apache/nifi-minifi-cpp/pull/261#discussion_r166711961
--- Diff: libminifi/src/core/ConfigurableComponent.cpp ---
@@ -160,6 +161,78 @@ bool ConfigurableComponent::setSupportedProperties(std::set<Property> properties
   return true;
 }
+
+bool ConfigurableComponent::getDynamicProperty(const std::string name, std::string &value) {
+  std::lock_guard<std::mutex> lock(configuration_mutex_);
+
+  auto it = dynamic_properties_.find(name);
+  if (it != dynamic_properties_.end()) {
+    Property item = it->second;
+    value = item.getValue();
+    logger_->log_debug("Component %s dynamic property name %s value %s", name, item.getName(), value);
+    return true;
+  } else {
+    return false;
+  }
+}
+
+bool ConfigurableComponent::createDynamicProperty(const std::string &name, const std::string &value) {
+  if (!supportsDynamicProperties()) {
+    logger_->log_debug("Attempted to create dynamic property %s, but this component does not support creation "
+                       "of dynamic properties.", name);
+    return false;
+  }
+
+  Property dyn(name, DEFAULT_DYNAMIC_PROPERTY_DESC, value);
+  logger_->log_info("Processor %s dynamic property '%s' value '%s'",
+                    name.c_str(),
--- End diff --
nit: not a big deal but you used c_str() here and then name above. if you happen to make other changes and remember this feel free to change it, but otherwise it's not important enough to change on its own imo. ---
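The semantics under review — a lock-guarded map of properties that were not declared as supported, readable by the processor at runtime — can be sketched compactly in Python (method names mirror the diff, but this is not the MiNiFi C++ API):

```python
import threading

class ConfigurableComponent:
    """Toy model of MINIFICPP-118: keep properties that were not declared
    as supported, so the processor can still read them at runtime."""

    def __init__(self, supports_dynamic=True):
        self._lock = threading.Lock()   # plays the role of configuration_mutex_
        self._dynamic = {}
        self._supports_dynamic = supports_dynamic

    def create_dynamic_property(self, name, value):
        if not self._supports_dynamic:
            return False                # component opted out, as in the diff
        with self._lock:
            self._dynamic[name] = value
        return True

    def get_dynamic_property(self, name):
        with self._lock:
            return self._dynamic.get(name)

c = ConfigurableComponent()
assert c.create_dynamic_property("group", "nmon")
assert c.get_dynamic_property("group") == "nmon"
assert not ConfigurableComponent(supports_dynamic=False).create_dynamic_property("x", "y")
```

The lock matters because configuration can be read by the scheduler while a flow update writes new dynamic properties.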
[jira] [Commented] (MINIFICPP-118) Dynamic Properties support for processors
[ https://issues.apache.org/jira/browse/MINIFICPP-118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355841#comment-16355841 ] ASF GitHub Bot commented on MINIFICPP-118: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/261 MINIFICPP-118 Added dynamic properties support Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-118 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/261.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #261 commit c862ec3f33f7a0157ec0ee7289e7f4ca4d91fe0b Author: Andrew I. Christianson Date: 2018-02-05T22:01:36Z MINIFICPP-118 Added dynamic properties support > Dynamic Properties support for processors > - > > Key: MINIFICPP-118 > URL: https://issues.apache.org/jira/browse/MINIFICPP-118 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Affects Versions: 0.1.0 >Reporter: Jeremy Dyer >Assignee: Andrew Christianson >Priority: Major -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #261: MINIFICPP-118 Added dynamic properties su...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/261 MINIFICPP-118 Added dynamic properties support Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-118 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/261.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #261 commit c862ec3f33f7a0157ec0ee7289e7f4ca4d91fe0b Author: Andrew I. Christianson Date: 2018-02-05T22:01:36Z MINIFICPP-118 Added dynamic properties support ---
[jira] [Assigned] (NIFI-4837) Thread leak on HandleHTTPRequest processor
[ https://issues.apache.org/jira/browse/NIFI-4837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman reassigned NIFI-4837: - Assignee: Matt Gilman > Thread leak on HandleHTTPRequest processor > -- > > Key: NIFI-4837 > URL: https://issues.apache.org/jira/browse/NIFI-4837 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.2.0, 1.3.0, 1.4.0, 1.5.0 > Environment: CENTOS 7 >Reporter: Matthew Clarke >Assignee: Matt Gilman >Priority: Blocker -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-385) RPG destruction can lead to EOFException in NiFi when sockets are not closed.
[ https://issues.apache.org/jira/browse/MINIFICPP-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355780#comment-16355780 ] ASF GitHub Bot commented on MINIFICPP-385: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/258 > RPG destruction can lead to EOFException in NiFi when sockets are not closed. > -- > > Key: MINIFICPP-385 > URL: https://issues.apache.org/jira/browse/MINIFICPP-385 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > > Current solution has not caused the issue in hours: > > Setup a countdown latch using RAII to control closure while there are any > open sockets in the ontrigger function in RPG. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #258: MINIFICPP-385: Add countdown latch and no...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/258 ---
[jira] [Updated] (NIFIREG-141) Bucket descriptions in the Registry UI
[ https://issues.apache.org/jira/browse/NIFIREG-141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kevin Doran updated NIFIREG-141: Description: On the backend, a bucket in NiFi Registry has an optional "description" field that can be used to convey the intended purpose of a bucket, or a more meaningful description than just what is used in the name. This field should be settable (at bucket creation time and/or in the edit bucket side panel) and rendered in the table and/or detail view of a bucket in the NiFi Registry UI. If NiFi is rendering this field, we should make sure that works correctly in collaboration with this ticket. If NiFi is not displaying bucket descriptions (for instance, when choosing where to initially save a flow), we should consider if adding that information is useful in the NiFi UI. was: On the backend, a bucket in NiFi Registry has an optional "description" field that can be used to convey the intended purpose of a bucket, or a more meaningful description than just what is used in the name. This field should be settable (at bucket creation time and/or in the edit bucket side panel) and rendered in the table and/or detail view of a bucket in the NiFi Registry UI. If NiFi is rendering this field if set, we should make sure that works correctly in collaboration with this ticket. If NiFi is not displaying bucket descriptions (for instance, when choosing where to initially save a flow), we should consider if adding that information is useful in the NiFi UI. > Bucket descriptions in the Registry UI > -- > > Key: NIFIREG-141 > URL: https://issues.apache.org/jira/browse/NIFIREG-141 > Project: NiFi Registry > Issue Type: Improvement >Reporter: Kevin Doran >Priority: Minor > > On the backend, a bucket in NiFi Registry has an optional "description" field > that can be used to convey the intended purpose of a bucket, or a more > meaningful description than just what is used in the name. 
> This field should be settable (at bucket creation time and/or in the edit > bucket side panel) and rendered in the table and/or detail view of a bucket > in the NiFi Registry UI. > If NiFi is rendering this field, we should make sure that works correctly in > collaboration with this ticket. If NiFi is not displaying bucket descriptions > (for instance, when choosing where to initially save a flow), we should > consider if adding that information is useful in the NiFi UI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFIREG-141) Bucket descriptions in the Registry UI
Kevin Doran created NIFIREG-141: --- Summary: Bucket descriptions in the Registry UI Key: NIFIREG-141 URL: https://issues.apache.org/jira/browse/NIFIREG-141 Project: NiFi Registry Issue Type: Improvement Reporter: Kevin Doran On the backend, a bucket in NiFi Registry has an optional "description" field that can be used to convey the intended purpose of a bucket, or a more meaningful description than just what is used in the name. This field should be settable (at bucket creation time and/or in the edit bucket side panel) and rendered in the table and/or detail view of a bucket in the NiFi Registry UI. If NiFi is rendering this field, we should make sure that works correctly in collaboration with this ticket. If NiFi is not displaying bucket descriptions (for instance, when choosing where to initially save a flow), we should consider if adding that information is useful in the NiFi UI. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355687#comment-16355687 ] ASF GitHub Bot commented on NIFI-4846: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 +1 LGTM, ran a build with contrib-check and unit tests, also tried with a live NiFi with various schema conversions. Once the rebase has been performed, I will merge this to master. Thanks for the improvement! > AvroTypeUtil to support more input types for logical decimal conversion > --- > > Key: NIFI-4846 > URL: https://issues.apache.org/jira/browse/NIFI-4846 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > > Currently, only double and BigDecimal can be mapped to a logical decimal Avro > field. AvroTypeUtil should support String, Integer and Long as well. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
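The NIFI-4846 improvement — mapping String, Integer, and Long inputs (not just Double and BigDecimal) onto an Avro logical decimal field — essentially amounts to normalizing each raw type to a BigDecimal before encoding. The sketch below is a simplified illustration of that coercion, not the actual AvroTypeUtil code; the class and method names are hypothetical:

```java
import java.math.BigDecimal;

public class DecimalCoercionSketch {
    // Coerce a raw record field value to BigDecimal ahead of Avro decimal encoding.
    static BigDecimal toBigDecimal(Object raw) {
        if (raw instanceof BigDecimal) return (BigDecimal) raw;
        if (raw instanceof Double)     return BigDecimal.valueOf((Double) raw);
        if (raw instanceof Integer)    return BigDecimal.valueOf((Integer) raw);
        if (raw instanceof Long)       return BigDecimal.valueOf((Long) raw);
        if (raw instanceof String)     return new BigDecimal((String) raw);
        throw new IllegalArgumentException(
                "Cannot coerce " + raw.getClass().getSimpleName() + " to a decimal");
    }

    public static void main(String[] args) {
        System.out.println(toBigDecimal("2.34")); // string input parsed directly
        System.out.println(toBigDecimal(42L));    // long input widened losslessly
    }
}
```

Integer and Long conversions are exact; String parsing follows the BigDecimal constructor's grammar and will throw NumberFormatException for malformed input.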
[GitHub] nifi issue #2451: NIFI-4846: AvroTypeUtil to support more input types for lo...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 +1 LGTM, ran a build with contrib-check and unit tests, also tried with a live NiFi with various schema conversions. Once the rebase has been performed, I will merge this to master. Thanks for the improvement! ---
[jira] [Created] (MINIFICPP-395) Need C2 transfer and ability to return whether update was successful
marco polo created MINIFICPP-395: Summary: Need C2 transfer and ability to return whether update was successful Key: MINIFICPP-395 URL: https://issues.apache.org/jira/browse/MINIFICPP-395 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: marco polo Assignee: marco polo Fix For: 0.5.0 We need a binary transfer command. This has been implemented for the purpose of updates. For configuration files we can simply download the data. We should move the configuration update to use transfer and pass back success/failure in the acknowledgement. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
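The acknowledgement change proposed in MINIFICPP-395 — passing back success/failure after a transfer-based update — could be modeled minimally as an ack record carrying the operation id and outcome. This is a hypothetical Java sketch for illustration only (the real project is MiNiFi C++, and the payload format and state names here are assumptions, not the actual C2 protocol):

```java
import java.util.Objects;

// Hypothetical C2 acknowledgement for a transfer/update operation.
public class C2AckSketch {
    final String operationId;
    final boolean success;
    final String detail;

    C2AckSketch(String operationId, boolean success, String detail) {
        this.operationId = Objects.requireNonNull(operationId);
        this.success = success;
        this.detail = detail; // optional failure reason; may be null
    }

    // Serialize as a tiny key=value payload an acknowledgement channel could carry.
    String toPayload() {
        return "ack:" + operationId
                + ";state=" + (success ? "FULLY_APPLIED" : "NOT_APPLIED")
                + (detail == null ? "" : ";detail=" + detail);
    }

    public static void main(String[] args) {
        System.out.println(new C2AckSketch("op-1", true, null).toPayload());
        System.out.println(new C2AckSketch("op-2", false, "checksum mismatch").toPayload());
    }
}
```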
[jira] [Commented] (NIFI-4834) ConsumeJMS does not scale when given more than 1 thread
[ https://issues.apache.org/jira/browse/NIFI-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355672#comment-16355672 ] ASF GitHub Bot commented on NIFI-4834: -- Github user mgaido91 commented on the issue: https://github.com/apache/nifi/pull/2445 on OSX they are passing too, so it may be a platform related error > ConsumeJMS does not scale when given more than 1 thread > --- > > Key: NIFI-4834 > URL: https://issues.apache.org/jira/browse/NIFI-4834 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > > > When I run ConsumeJMS against a local broker, the performance is great. > However, if I run against a broker that is running remotely with a 75 ms > round trip time (i.e., somewhat high latency), then the performance is pretty > poor, allowing me to receive only about 30-40 msgs/sec (1-2 MB/sec). > Increasing the number of threads should result in multiple connections to the > JMS Broker, which would provide better throughput. However, when I increase > the number of Concurrent Tasks to 10, I see 10 consumers but only a single > connection being created, so the throughput is no better (in fact it's a bit > slower due to added lock contention). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
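The scaling behavior described in NIFI-4834 is latency-bound: with one shared connection doing synchronous round trips, throughput is capped near 1/RTT no matter how many consumer threads share it, and the extra threads only add lock contention. A back-of-the-envelope model (illustrative only; this is not code from the processor, and the constants are taken from the report's 75 ms round-trip example):

```java
public class JmsThroughputModel {
    // With synchronous receives, each connection completes roughly one round
    // trip at a time, so msgs/sec is about connections * (1000 / rttMillis).
    static double maxMsgsPerSec(int connections, double rttMillis) {
        return connections * (1000.0 / rttMillis);
    }

    public static void main(String[] args) {
        System.out.println(maxMsgsPerSec(1, 75.0));  // ~13 msgs/sec per in-flight round trip
        System.out.println(maxMsgsPerSec(10, 75.0)); // ~133 msgs/sec with 10 real connections
    }
}
```

This is why the fix gives each concurrent task its own connection: ten independent connections raise the ceiling roughly tenfold, while ten consumers multiplexed over one connection do not.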
[GitHub] nifi issue #2445: NIFI-4834: Updated AbstractJMSProcessor to use a separate ...
Github user mgaido91 commented on the issue: https://github.com/apache/nifi/pull/2445 on OSX they are passing too, so it may be a platform related error ---
[jira] [Commented] (NIFI-4834) ConsumeJMS does not scale when given more than 1 thread
[ https://issues.apache.org/jira/browse/NIFI-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355648#comment-16355648 ] ASF GitHub Bot commented on NIFI-4834: -- Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2445 I get the exact same unit test failures on Ubuntu 16.04. I was working on NIFI-2630, so I thought it was my code changes, but the test failure happens when I build master without any changes. Interestingly, when I build master on Windows 10, these unit tests pass. > ConsumeJMS does not scale when given more than 1 thread > --- > > Key: NIFI-4834 > URL: https://issues.apache.org/jira/browse/NIFI-4834 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.6.0 > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2445: NIFI-4834: Updated AbstractJMSProcessor to use a separate ...
Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2445 I get the exact same unit test failures on Ubuntu 16.04. I was working on NIFI-2630, so I thought it was my code changes, but the test failure happens when I build master without any changes. Interestingly, when I build master on Windows 10, these unit tests pass. ---
[jira] [Commented] (NIFI-4846) AvroTypeUtil to support more input types for logical decimal conversion
[ https://issues.apache.org/jira/browse/NIFI-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355615#comment-16355615 ] ASF GitHub Bot commented on NIFI-4846: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 There's a merge conflict now that #2450 is merged, I'm guessing it's because you built this PR with your other commit from 2450 as well. Can you rebase / replace your NIFI-4844 commit with the one from master? Please and thanks :) In the meantime I will cherry-pick the NIFI-4846 commit into master for testing > AvroTypeUtil to support more input types for logical decimal conversion > --- > > Key: NIFI-4846 > URL: https://issues.apache.org/jira/browse/NIFI-4846 > Project: Apache NiFi > Issue Type: Improvement > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2451: NIFI-4846: AvroTypeUtil to support more input types for lo...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2451 There's a merge conflict now that #2450 is merged, I'm guessing it's because you built this PR with your other commit from 2450 as well. Can you rebase / replace your NIFI-4844 commit with the one from master? Please and thanks :) In the meantime I will cherry-pick the NIFI-4846 commit into master for testing ---
[jira] [Updated] (NIFI-4844) AvroRecordSetWriter should be able to convert a double having less scale than intended target Avro schema instead of throwing an AvroTypeException
[ https://issues.apache.org/jira/browse/NIFI-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4844: --- Fix Version/s: 1.6.0 > AvroRecordSetWriter should be able to convert a double having less scale than > intended target Avro schema instead of throwing an AvroTypeException > -- > > Key: NIFI-4844 > URL: https://issues.apache.org/jira/browse/NIFI-4844 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.6.0 > > > Current AvroTypeUtil conversion logic can throw AvroTypeException when it > maps double values into Avro decimal logical type fields if the double value > has less scale than the one defined at the target Avro decimal field schema. > For example, with following schema: > {code} > { > "type": "record", > "name": "logicalDecimalTest", > "fields": [ > {"name": "id", "type": "int"}, > {"name": "name", "type": "string"}, > { > "name": "price", > "type": { > "type": "bytes", > "logicalType": "decimal", > "precision": 18, > "scale": 8 > }}]} > {code} > And following CSV records: > {code} > id|name|price > 1|one|1.23 > 2|two|2.34 > {code} > Would produce this Exception: > {code} > 2018-02-06 09:57:27,461 ERROR [Timer-Driven Process Thread-7] > o.a.n.processors.standard.ConvertRecord > ConvertRecord[id=6897bc30-0161-1000-a8e7-9ce0ce8eb9ae] Failed to process > StandardFlowFileRecord[uuid=a97366a0-79bb-42ff-9023-c5d62ecfdbc5,claim=StandardContentClaim > [resourceClaim=StandardResourceClaim[id=1517878123416-2, container=default, > section=2], offset=5, length=48],offset=0,name=220105646548465,size=48]; will > route to failure: org.apache.avro.AvroTypeException: Cannot encode decimal > with scale 17 as scale 8 > org.apache.avro.AvroTypeException: Cannot encode decimal with scale 17 as > scale 8 > at > org.apache.avro.Conversions$DecimalConversion.toBytes(Conversions.java:86) > at > 
org.apache.nifi.avro.AvroTypeUtil.convertToAvroObject(AvroTypeUtil.java:546) > at > org.apache.nifi.avro.AvroTypeUtil.createAvroRecord(AvroTypeUtil.java:457) > at > org.apache.nifi.avro.WriteAvroResultWithExternalSchema.writeRecord(WriteAvroResultWithExternalSchema.java:76) > at > org.apache.nifi.serialization.AbstractRecordSetWriter.write(AbstractRecordSetWriter.java:59) > at > org.apache.nifi.processors.standard.AbstractRecordProcessor$1.process(AbstractRecordProcessor.java:122) > at > org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2827) > at > org.apache.nifi.processors.standard.AbstractRecordProcessor.onTrigger(AbstractRecordProcessor.java:109) > at > org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27) > at > org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) > at > org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) > at > org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > {code} > The same issue is reported in the Avro project, > [AVRO-1864|https://issues.apache.org/jira/browse/AVRO-1864]. 
The recommended > approach is to adjust the scale at NiFi side. Actually, for BigDecimal input > values, NiFi already does this, but not with double values. AvroTypeUtil > should do the same scale adjustment for double values, too. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
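The adjustment described in NIFI-4844 — bringing a double's scale up to the scale demanded by the target Avro decimal schema — can be illustrated with plain BigDecimal arithmetic. This is a simplified sketch, not the actual AvroTypeUtil code; the helper name is hypothetical and the target scale of 8 comes from the example schema above:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ScaleAdjustSketch {
    // Pad (or round) a double to the scale required by the Avro decimal schema.
    static BigDecimal adjustScale(double value, int targetScale) {
        // BigDecimal.valueOf(1.23) has scale 2; setScale(8, HALF_UP) pads it to
        // 1.23000000, avoiding the "Cannot encode decimal with scale X as scale 8"
        // AvroTypeException seen in the log above.
        return BigDecimal.valueOf(value).setScale(targetScale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(adjustScale(1.23, 8)); // prints 1.23000000
        System.out.println(adjustScale(2.34, 8)); // prints 2.34000000
    }
}
```

A rounding mode is required because setScale can also shrink the scale; when only padding with zeros, as here, no rounding actually occurs.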
[jira] [Updated] (NIFI-4844) AvroRecordSetWriter should be able to convert a double having less scale than intended target Avro schema instead of throwing an AvroTypeException
[ https://issues.apache.org/jira/browse/NIFI-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Burgess updated NIFI-4844: --- Resolution: Fixed Status: Resolved (was: Patch Available) > AvroRecordSetWriter should be able to convert a double having less scale than > intended target Avro schema instead of throwing an AvroTypeException > -- > > Key: NIFI-4844 > URL: https://issues.apache.org/jira/browse/NIFI-4844 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4844) AvroRecordSetWriter should be able to convert a double having less scale than intended target Avro schema instead of throwing an AvroTypeException
[ https://issues.apache.org/jira/browse/NIFI-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355610#comment-16355610 ] ASF subversion and git services commented on NIFI-4844: --- Commit 2b062e211f36a0833c273e1a552d42723fdf54fe in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2b062e2 ] NIFI-4844: Adjust BigDecimal scale to the target Avro schema - Applied the same scale adjustment not only to BigDecimal inputs, but also to Double values. Signed-off-by: Matthew Burgess This closes #2450 > AvroRecordSetWriter should be able to convert a double having less scale than > intended target Avro schema instead of throwing an AvroTypeException > -- > > Key: NIFI-4844 > URL: https://issues.apache.org/jira/browse/NIFI-4844 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4844) AvroRecordSetWriter should be able to convert a double having less scale than intended target Avro schema instead of throwing an AvroTypeException
[ https://issues.apache.org/jira/browse/NIFI-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355609#comment-16355609 ] ASF GitHub Bot commented on NIFI-4844: -- Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2450 +1 LGTM, ran build with contrib-check and unit tests, also tried with a live NiFi, using QueryDatabaseTable -> ConvertRecord. Reproduced the issue then verified the correct behavior. Thanks for this fix! Merging to master > AvroRecordSetWriter should be able to convert a double having less scale than > intended target Avro schema instead of throwing an AvroTypeException > -- > > Key: NIFI-4844 > URL: https://issues.apache.org/jira/browse/NIFI-4844 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4844) AvroRecordSetWriter should be able to convert a double having less scale than intended target Avro schema instead of throwing an AvroTypeException
[ https://issues.apache.org/jira/browse/NIFI-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355612#comment-16355612 ] ASF GitHub Bot commented on NIFI-4844: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2450 > AvroRecordSetWriter should be able to convert a double having less scale than > intended target Avro schema instead of throwing an AvroTypeException > -- > > Key: NIFI-4844 > URL: https://issues.apache.org/jira/browse/NIFI-4844 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.3.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2450: NIFI-4844: Adjust BigDecimal scale to the target Av...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2450 ---
[GitHub] nifi issue #2450: NIFI-4844: Adjust BigDecimal scale to the target Avro sche...
Github user mattyb149 commented on the issue: https://github.com/apache/nifi/pull/2450 +1 LGTM, ran build with contrib-check and unit tests, also tried with a live NiFi, using QueryDatabaseTable -> ConvertRecord. Reproduced the issue then verified the correct behavior. Thanks for this fix! Merging to master ---
[jira] [Commented] (NIFI-4164) Realistic Time Series Processor Simulator
[ https://issues.apache.org/jira/browse/NIFI-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355605#comment-16355605 ] ASF GitHub Bot commented on NIFI-4164: -- Github user cherrera2001 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1997#discussion_r166655562
--- Diff: nifi-nar-bundles/nifi-simulator-bundle/nifi-simulator-processors/src/main/java/com/apache/nifi/processors/simulator/GenerateTimeSeriesFlowFile.java ---
@@ -0,0 +1,180 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package com.apache.nifi.processors.simulator;
+
+import be.cetic.tsimulus.config.Configuration;
+import org.apache.nifi.annotation.behavior.InputRequirement;
+import org.apache.nifi.annotation.documentation.CapabilityDescription;
+import org.apache.nifi.annotation.documentation.Tags;
+import org.apache.nifi.annotation.lifecycle.OnScheduled;
+import org.apache.nifi.components.PropertyDescriptor;
+import org.apache.nifi.flowfile.FlowFile;
+import org.apache.nifi.logging.ComponentLog;
+import org.apache.nifi.processor.AbstractProcessor;
+import org.apache.nifi.processor.Relationship;
+import org.apache.nifi.processor.ProcessorInitializationContext;
+import org.apache.nifi.processor.ProcessContext;
+import org.apache.nifi.processor.exception.ProcessException;
+import org.apache.nifi.processor.ProcessSession;
+import org.apache.nifi.processor.util.StandardValidators;
+import org.joda.time.LocalDateTime;
+import scala.Some;
+import scala.Tuple3;
+import scala.collection.JavaConverters;
+
+import java.util.List;
+import java.util.Set;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.ArrayList;
+
+@Tags({"Simulator, Timeseries, IOT, Testing"})
+@InputRequirement(InputRequirement.Requirement.INPUT_FORBIDDEN)
+@CapabilityDescription("Generates realistic time series data using the TSimulus time series generator, and places the values into the flowfile in a CSV format.")
+public class GenerateTimeSeriesFlowFile extends AbstractProcessor {
+
+    private Configuration simConfig = null;
+    private boolean isTest = false;
+
+    public static final PropertyDescriptor SIMULATOR_CONFIG = new PropertyDescriptor
+            .Builder().name("SIMULATOR_CONFIG")
+            .displayName("Simulator Configuration File")
+            .description("The JSON configuration file to use to configure TSimulus")
+            .required(true)
+            .addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
+            .build();
+
+    public static final PropertyDescriptor PRINT_HEADER = new PropertyDescriptor
+            .Builder().name("PRINT_HEADER")
+            .displayName("Print Header")
+            .description("Directs the processor whether to print a header line or not.")
+            .required(true)
+            .allowableValues("true", "false")
+            .defaultValue("false")
+            .addValidator(StandardValidators.BOOLEAN_VALIDATOR)
+            .build();
+
+    public static final Relationship SUCCESS = new Relationship.Builder()
+            .name("Success")
+            .description("When the flowfile is successfully generated")
+            .build();
+
+    private List<PropertyDescriptor> descriptors;
+
+    private Set<Relationship> relationships;
+
+    @Override
+    protected void init(final ProcessorInitializationContext context) {
+
+        final List<PropertyDescriptor> descriptors = new ArrayList<>();
+        descriptors.add(SIMULATOR_CONFIG);
+        descriptors.add(PRINT_HEADER);
+        this.descriptors = Collections.unmodifiableList(descriptors);
+
+        final Set<Relationship> relationships = new HashSet<>();
+        relationships.add(SUCCESS);
+        this.relationships = Collections.unmodifiableSet(relationships);
+    }
+
+    @Override
+    public Set<Relationship> getRelationships() {
[GitHub] nifi pull request #1997: NIFI-4164 Adding a realistic time simulator process...
Github user cherrera2001 commented on a diff in the pull request: https://github.com/apache/nifi/pull/1997#discussion_r166655562 --- Diff: nifi-nar-bundles/nifi-simulator-bundle/nifi-simulator-processors/src/main/java/com/apache/nifi/processors/simulator/GenerateTimeSeriesFlowFile.java ---
[jira] [Commented] (MINIFICPP-382) Add SUSE support to bootstrap process.
[ https://issues.apache.org/jira/browse/MINIFICPP-382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355584#comment-16355584 ] ASF GitHub Bot commented on MINIFICPP-382: -- Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/260 How fitting, a unit test is seg faulting... I'll check and update. > Add SUSE support to bootstrap process. > --- > > Key: MINIFICPP-382 > URL: https://issues.apache.org/jira/browse/MINIFICPP-382 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: marco polo >Assignee: marco polo >Priority: Major > Fix For: 0.5.0 > > > Add SUSE support to the bootstrap process. > > Currently tested on OpenSUSE and SLES12. > > SLES12/OpenSUSE – built and tested, verifying SiteToSite > SLES11 – TBD -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #260: MINIFICPP-382: Implement SUSE release support fo...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/260 How fitting, a unit test is seg faulting... I'll check and update. ---
[jira] [Commented] (NIFIREG-126) Entering an invalid bucket id in a deep link causes JS error
[ https://issues.apache.org/jira/browse/NIFIREG-126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355577#comment-16355577 ] ASF GitHub Bot commented on NIFIREG-126: Github user moranr commented on the issue: https://github.com/apache/nifi-registry/pull/99 @scottyaslan, @kevdoran – redirect behavior looks good. Below are recommendations for the dialog titles and messages.
UNSECURED
_-nifi-registry/explorer/grid-list/buckets/0_
_-nifi-registry/explorer/grid-list/buckets/0/flows/0_
- Title – Bucket Not Found
- Msg – The specified bucket ID does not exist in this registry.
_-nifi-registry/explorer/grid-list/buckets/**existing-bucket-id**/flows/0_
- Title – Flow Not Found
- Msg – The specified flow ID does not exist in this bucket.
_-nifi-registry/administration/users_
- Title – Not Applicable
- Msg – User administration is not configured for this registry.
SECURED
_-nifi-registry/administration/workflow(sidenav:manage/bucket/)_
- Title – Bucket Not Found
- Msg – The specified bucket ID does not exist in this registry.
_-nifi-registry/administration/users(sidenav:manage/user/)_
- Title – User Not Found
- Msg – The specified user ID does not exist in this registry.
_-nifi-registry/administration/users(sidenav:manage/group/)_
- Title – User Group Not Found
- Msg – The specified user group ID does not exist in this registry.
_-nifi-registry/administration/*_ (If user does not have applicable admin privileges)
- Title – Access Denied
- Msg – Please contact your System Administrator.
> Entering an invalid bucket id in a deep link causes JS error > > > Key: NIFIREG-126 > URL: https://issues.apache.org/jira/browse/NIFIREG-126 > Project: NiFi Registry > Issue Type: Bug >Affects Versions: 0.1.0 >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Major > > As a user when I enter an invalid deep link I want to be routed to view all > the items in all the buckets that I am authorized to view and to be notified > that the requested bucket/item id is invalid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-registry issue #99: [NIFIREG-126] adding some polish and testing around...
Github user moranr commented on the issue: https://github.com/apache/nifi-registry/pull/99 @scottyaslan, @kevdoran – redirect behavior looks good. Below are recommendations for the dialog titles and messages.
UNSECURED
_-nifi-registry/explorer/grid-list/buckets/0_
_-nifi-registry/explorer/grid-list/buckets/0/flows/0_
- Title – Bucket Not Found
- Msg – The specified bucket ID does not exist in this registry.
_-nifi-registry/explorer/grid-list/buckets/**existing-bucket-id**/flows/0_
- Title – Flow Not Found
- Msg – The specified flow ID does not exist in this bucket.
_-nifi-registry/administration/users_
- Title – Not Applicable
- Msg – User administration is not configured for this registry.
SECURED
_-nifi-registry/administration/workflow(sidenav:manage/bucket/)_
- Title – Bucket Not Found
- Msg – The specified bucket ID does not exist in this registry.
_-nifi-registry/administration/users(sidenav:manage/user/)_
- Title – User Not Found
- Msg – The specified user ID does not exist in this registry.
_-nifi-registry/administration/users(sidenav:manage/group/)_
- Title – User Group Not Found
- Msg – The specified user group ID does not exist in this registry.
_-nifi-registry/administration/*_ (If user does not have applicable admin privileges)
- Title – Access Denied
- Msg – Please contact your System Administrator.
---
[jira] [Commented] (NIFI-4164) Realistic Time Series Processor Simulator
[ https://issues.apache.org/jira/browse/NIFI-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355575#comment-16355575 ] ASF GitHub Bot commented on NIFI-4164: -- Github user YolandaMDavis commented on a diff in the pull request: https://github.com/apache/nifi/pull/1997#discussion_r166648706 --- Diff: nifi-nar-bundles/nifi-simulator-bundle/nifi-simulator-processors/src/main/java/com/apache/nifi/processors/simulator/GenerateTimeSeriesFlowFile.java ---
[GitHub] nifi pull request #1997: NIFI-4164 Adding a realistic time simulator process...
Github user YolandaMDavis commented on a diff in the pull request: https://github.com/apache/nifi/pull/1997#discussion_r166648706 --- Diff: nifi-nar-bundles/nifi-simulator-bundle/nifi-simulator-processors/src/main/java/com/apache/nifi/processors/simulator/GenerateTimeSeriesFlowFile.java ---
[GitHub] nifi issue #2101: NIFI-4289 - InfluxDB put processor
Github user mans2singh commented on the issue: https://github.com/apache/nifi/pull/2101 Hi @MikeThomsen @mattyb149 @joewitt I believe I have implemented all the review changes. Please let me know if there is anything I have missed or you have any additional recommendations. Thanks for your feedback. ---
[jira] [Commented] (NIFI-4289) Implement put processor for InfluxDB
[ https://issues.apache.org/jira/browse/NIFI-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355571#comment-16355571 ] ASF GitHub Bot commented on NIFI-4289: -- Github user mans2singh commented on the issue: https://github.com/apache/nifi/pull/2101 Hi @MikeThomsen @mattyb149 @joewitt I believe I have implemented all the review changes. Please let me know if there is anything I have missed or you have any additional recommendations. Thanks for your feedback. > Implement put processor for InfluxDB > > > Key: NIFI-4289 > URL: https://issues.apache.org/jira/browse/NIFI-4289 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.3.0 > Environment: All >Reporter: Mans Singh >Assignee: Mans Singh >Priority: Minor > Labels: insert, measurements, put, timeseries > > Support inserting time series measurements into InfluxDB. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
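For context on what "inserting time series measurements" amounts to on the wire, here is a minimal, hypothetical sketch of InfluxDB's line protocol — the text format a put processor would ultimately write. The method and parameter names are illustrative, not the processor's actual properties:

```java
public class LineProtocol {
    // Build one InfluxDB line-protocol record:
    //   measurement,tagKey=tagValue fieldKey=fieldValue timestamp
    // Simplified sketch: real records may carry multiple tags and fields,
    // and require escaping of spaces and commas.
    static String record(String measurement, String tagKey, String tagValue,
                         String fieldKey, double fieldValue, long timestampNanos) {
        return measurement + "," + tagKey + "=" + tagValue
                + " " + fieldKey + "=" + fieldValue
                + " " + timestampNanos;
    }

    public static void main(String[] args) {
        System.out.println(record("cpu", "host", "server01", "load", 0.64, 1465839830100400200L));
    }
}
```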
[jira] [Created] (NIFI-4852) Add an Analysis section to the Processor Diagnostics
Mark Payne created NIFI-4852: Summary: Add an Analysis section to the Processor Diagnostics Key: NIFI-4852 URL: https://issues.apache.org/jira/browse/NIFI-4852 Project: Apache NiFi Issue Type: Sub-task Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne Adding this ability to generate a Diagnostics Report will be very powerful in and of itself, as it can generate a lot of raw data about the internal state of NiFi, which can be used to diagnose a range of problems. However, it will result in quite a lot of data being generated, especially for a cluster that has several nodes. There are common (perceived) problems that occur more frequently than others. For example, a user may think there is something wrong with a Processor because it is not processing the data in the incoming queue. Closer analysis may reveal that the Processor is only scheduled to run once every 1 second instead of the default of 0 seconds. Or the incoming queue may consist solely of penalized FlowFiles. In such a situation, it would be helpful if the Diagnostics Report were to contain an "Analysis" section that indicates when these sorts of common issues arise. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
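The kind of rule the proposed Analysis section describes could be sketched as simple checks over values a diagnostics report would already contain. This is purely illustrative — the method signature, field names, and findings text are assumptions, not the eventual NiFi implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class DiagnosticsAnalysis {
    // Two of the common issues mentioned above: a non-default run schedule,
    // and an incoming queue made up entirely of penalized FlowFiles.
    static List<String> analyze(long runScheduleMillis, int queuedFlowFiles, int penalizedFlowFiles) {
        final List<String> findings = new ArrayList<>();
        if (runScheduleMillis > 0) {
            findings.add("Processor run schedule is " + runScheduleMillis
                    + " ms instead of the default of 0 seconds");
        }
        if (queuedFlowFiles > 0 && queuedFlowFiles == penalizedFlowFiles) {
            findings.add("Incoming queue consists solely of penalized FlowFiles");
        }
        return findings;
    }
}
```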
[jira] [Created] (NIFI-4851) Update User Guide to include information about Processor Diagnostics
Mark Payne created NIFI-4851: Summary: Update User Guide to include information about Processor Diagnostics Key: NIFI-4851 URL: https://issues.apache.org/jira/browse/NIFI-4851 Project: Apache NiFi Issue Type: Sub-task Components: Documentation & Website Reporter: Mark Payne The User Guide will need to be updated to reflect the new feature. We should include a screen shot of how to obtain the information (context menu) and a screen shot of the resulting report. We should also include information about what this is used for and about each entry in the report, so that the user understands what he/she is looking at. If not obvious, we should also include information about how to copy/paste the info or save it off so that it can easily be sent to appropriate personnel for troubleshooting. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4850) Add UI support for rendering the Diagnostics Report
Mark Payne created NIFI-4850: Summary: Add UI support for rendering the Diagnostics Report Key: NIFI-4850 URL: https://issues.apache.org/jira/browse/NIFI-4850 Project: Apache NiFi Issue Type: Sub-task Components: Core UI Reporter: Mark Payne Once the Endpoint has been developed (NIFI-4849) we will need the UI to be updated to allow the user to run the diagnostics report and render those results, likely in some sort of shell. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-4849) Add REST Endpoint for gathering Processor Diagnostics information
Mark Payne created NIFI-4849: Summary: Add REST Endpoint for gathering Processor Diagnostics information Key: NIFI-4849 URL: https://issues.apache.org/jira/browse/NIFI-4849 Project: Apache NiFi Issue Type: Sub-task Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne We need to add a REST endpoint that will use the appropriate resources to gather the Processor Diagnostics information. Information to return should include things like:
* Processor config
* Processor status
* Garbage Collection info
* Repo Sizes
* Connection info for connections whose source or destination is the processor
* Controller Services that the processor is referencing
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-4848. --- Resolution: Fixed Fix Version/s: 1.6.0 > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.6.0 > > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
[ https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-4818. --- Resolution: Fixed Fix Version/s: 1.6.0 > Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas > > > Key: NIFI-4818 > URL: https://issues.apache.org/jira/browse/NIFI-4818 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > Fix For: 1.6.0 > > > ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. > This works if a connection URL does not have parameters (e.g. > jdbc:hive2://host:port/dbName), but it reports a wrong database name if there > are parameters. E.g. with > jdbc:hive2://host:port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > the reported database name will be > dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > including the connection parameters. > Also, if more than one host:port is defined, it will not be able to > analyze the cluster name from hostnames correctly. > Similarly for Kafka topics, the reporting task uses transit URIs to analyze > hostnames and topic names. It does handle multiple host:port definitions > within a URI; however, the current logic only uses the first hostname entry even > if there are multiple ones. For example, with the transit URI > "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses > "0.example.com" to match the configured regular expressions to derive a cluster > name. If none of the regexes matches, it uses the default cluster name without > looping through all hostnames. It never uses the 2nd or later hostnames to > derive a cluster name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
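A rough sketch of the parsing behavior the fix calls for — stripping the ';'-separated connection parameters and considering every host entry rather than just the first — might look like this. The class is illustrative only; it is not the actual ReportLineageToAtlas code:

```java
public class Hive2Url {
    // Extract the database name from a Hive JDBC URL, discarding any
    // connection parameters that follow the first ';'.
    static String databaseName(String url) {
        final String rest = url.substring("jdbc:hive2://".length());
        final String path = rest.substring(rest.indexOf('/') + 1);
        final int semi = path.indexOf(';');
        return semi >= 0 ? path.substring(0, semi) : path;
    }

    // Return every host:port entry in the authority, not just the first one.
    static String[] hosts(String url) {
        final String rest = url.substring("jdbc:hive2://".length());
        return rest.substring(0, rest.indexOf('/')).split(",");
    }

    public static void main(String[] args) {
        String url = "jdbc:hive2://h1:10000,h2:10000/dbName;serviceDiscoveryMode=zooKeeper";
        System.out.println(databaseName(url)); // prints dbName
        System.out.println(hosts(url).length); // prints 2
    }
}
```

The same "loop over all hostnames" idea applies to Kafka transit URIs: each entry in the comma-separated authority should be tried against the configured cluster-name regexes before falling back to the default.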
[jira] [Commented] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355466#comment-16355466 ] ASF GitHub Bot commented on NIFI-4848: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2453 Thanks @ijokarumawak! This has been merged to master. > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355467#comment-16355467 ] ASF GitHub Bot commented on NIFI-4848: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2453 > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355465#comment-16355465 ] ASF subversion and git services commented on NIFI-4848: --- Commit dbbf78f22c1046ae1f9a47780d6b731531d7e445 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=dbbf78f ] NIFI-4848: Update HttpComponents version - httpclient 4.5.3 -> 4.5.5 - httpcore 4.4.4 -> 4.4.9 - ThreadSafe annotation is removed since 4.4.5, HTTPCLIENT-1743. Removed the annotation from DebugFlow processor. - httpasyncclient 4.1.2 -> 4.1.3 - This closes #2453 > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2453: NIFI-4848: Update HttpComponents version
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2453 ---
[GitHub] nifi issue #2453: NIFI-4848: Update HttpComponents version
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2453 Thanks @ijokarumawak! This has been merged to master. ---
[jira] [Commented] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355462#comment-16355462 ] ASF GitHub Bot commented on NIFI-4848: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2453 Will review... > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2453: NIFI-4848: Update HttpComponents version
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2453 Will review... ---
[jira] [Commented] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
[ https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355457#comment-16355457 ] ASF subversion and git services commented on NIFI-4818: --- Commit f16cbd462b8d5bfea2cf4e1d02910f22e77d0354 in nifi's branch refs/heads/master from [~ijokarumawak] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=f16cbd4 ] NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas - Hive2JDBC: Handle connection parameters and multiple host entries correctly - KafkaTopic: Handle multiple host entries correctly - Avoid potential "IllegalStateException: Duplicate key" exception when NiFiAtlasHook analyzes existing NiFiFlowPath input/output entries - This closes #2435 > Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas > > > Key: NIFI-4818 > URL: https://issues.apache.org/jira/browse/NIFI-4818 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. > It works if a connection URL does not have parameters. (e.g. > jdbc:hive2://host:port/dbName) But it reports wrong database name if there > are parameters. E.g. with > jdbc:hive2://host.port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > the reported database name will be > dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > including the connection parameters. > Also, if there are more than one host:port defined, it will not be able to > analyze cluster name from hostnames correctly. > Similarly for Kafka topic, the reporting task uses transit URIs to analyze > hostnames and topic names. It does handle multiple host:port definitions > within a URI, however, current logic only uses the first hostname entry even > if there are multiple ones. 
For example, with a transit URI, > "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses > "0.example.com" to match configured regular expressions to derive a cluster > name. If none of regex matches, then it uses the default cluster name without > looping through all hostnames. It never uses the 2nd or later hostnames to > derive a cluster name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
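The parsing rules the issue describes can be sketched in a small standalone example. This is not NiFi's actual ReportLineageToAtlas code, only an illustration of the two behaviors the fix targets: a Hive JDBC database name must stop at the first ';' (connection parameters), and a multi-host transit URI must yield every host, not just the first one.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the URL-parsing rules from NIFI-4818
// (class and method names here are hypothetical, not NiFi's).
public class TransitUrlParsing {

    // jdbc:hive2://host:port/dbName;param=x -> "dbName"
    static String hiveDatabaseName(String url) {
        // Find the path segment after the authority.
        int slash = url.indexOf('/', "jdbc:hive2://".length());
        String afterSlash = url.substring(slash + 1);
        // Connection parameters start at the first ';' and must be dropped.
        int semi = afterSlash.indexOf(';');
        return semi >= 0 ? afterSlash.substring(0, semi) : afterSlash;
    }

    // PLAINTEXT://h1:6667,h2:6667/topicA -> ["h1", "h2"]
    static List<String> transitHosts(String uri) {
        String authority = uri.substring(uri.indexOf("://") + 3);
        int slash = authority.indexOf('/');
        if (slash >= 0) {
            authority = authority.substring(0, slash);
        }
        List<String> hosts = new ArrayList<>();
        // Every host:port entry contributes a hostname; the bug was that
        // only the first entry was ever tried against the cluster regexes.
        for (String hostPort : authority.split(",")) {
            hosts.add(hostPort.substring(0, hostPort.indexOf(':')));
        }
        return hosts;
    }
}
```

With the example values from the ticket, `hiveDatabaseName` strips the `serviceDiscoveryMode`/`zooKeeperNamespace` parameters and returns just `dbName`, and `transitHosts` returns both `0.example.com` and `1.example.com`, so a cluster-name regex can be matched against each host in turn.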
[jira] [Commented] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
[ https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355458#comment-16355458 ] ASF GitHub Bot commented on NIFI-4818: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2435 Thanks @ijokarumawak! This has been merged to master. > Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas > > > Key: NIFI-4818 > URL: https://issues.apache.org/jira/browse/NIFI-4818 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. > It works if a connection URL does not have parameters. (e.g. > jdbc:hive2://host:port/dbName) But it reports wrong database name if there > are parameters. E.g. with > jdbc:hive2://host.port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > the reported database name will be > dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > including the connection parameters. > Also, if there are more than one host:port defined, it will not be able to > analyze cluster name from hostnames correctly. > Similarly for Kafka topic, the reporting task uses transit URIs to analyze > hostnames and topic names. It does handle multiple host:port definitions > within a URI, however, current logic only uses the first hostname entry even > if there are multiple ones. For example, with a transit URI, > "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses > "0.example.com" to match configured regular expressions to derive a cluster > name. If none of regex matches, then it uses the default cluster name without > looping through all hostnames. It never uses the 2nd or later hostnames to > derive a cluster name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4818) Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas
[ https://issues.apache.org/jira/browse/NIFI-4818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355459#comment-16355459 ] ASF GitHub Bot commented on NIFI-4818: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2435 > Fix transit URL parsing at Hive2JDBC and KafkaTopic for ReportLineageToAtlas > > > Key: NIFI-4818 > URL: https://issues.apache.org/jira/browse/NIFI-4818 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.5.0 >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > ReportLineageToAtlas parses Hive JDBC connection URLs to get database names. > It works if a connection URL does not have parameters. (e.g. > jdbc:hive2://host:port/dbName) But it reports wrong database name if there > are parameters. E.g. with > jdbc:hive2://host.port/dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > the reported database name will be > dbName;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2, > including the connection parameters. > Also, if there are more than one host:port defined, it will not be able to > analyze cluster name from hostnames correctly. > Similarly for Kafka topic, the reporting task uses transit URIs to analyze > hostnames and topic names. It does handle multiple host:port definitions > within a URI, however, current logic only uses the first hostname entry even > if there are multiple ones. For example, with a transit URI, > "PLAINTEXT://0.example.com:6667,1.example.com:6667/topicA", it uses > "0.example.com" to match configured regular expressions to derive a cluster > name. If none of regex matches, then it uses the default cluster name without > looping through all hostnames. It never uses the 2nd or later hostnames to > derive a cluster name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2435: NIFI-4818: Fix transit URL parsing at Hive2JDBC and...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2435 ---
[GitHub] nifi issue #2435: NIFI-4818: Fix transit URL parsing at Hive2JDBC and KafkaT...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2435 Thanks @ijokarumawak! This has been merged to master. ---
[jira] [Updated] (NIFI-4841) NPE when reverting local modifications to a versioned process group
[ https://issues.apache.org/jira/browse/NIFI-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-4841: -- Status: Patch Available (was: Open) > NPE when reverting local modifications to a versioned process group > --- > > Key: NIFI-4841 > URL: https://issues.apache.org/jira/browse/NIFI-4841 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Charlie Meyer >Assignee: Bryan Bende >Priority: Major > Attachments: NIFI-4841.xml > > > I created a process group via importing from the registry. I then made a few > modifications including settings properties and connecting some components. I > then attempted to revert my local changes so I could update the flow to a > newer version. When reverting the local changes, NiFi threw a NPE with the > following stack trace: > {noformat} > 2018-02-05 17:18:52,356 INFO [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Stopping 1 Processors > 2018-02-05 17:18:52,477 ERROR [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Failed to update flow to new version > java.lang.NullPointerException: null > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.scheduleComponents(StandardProcessGroupDAO.java:179) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$EnhancerBySpringCGLIB$$bc287b8b.scheduleComponents() > at > org.apache.nifi.web.StandardNiFiServiceFacade$3.update(StandardNiFiServiceFacade.java:981) > at > org.apache.nifi.web.revision.NaiveRevisionManager.updateRevision(NaiveRevisionManager.java:120) > at > org.apache.nifi.web.StandardNiFiServiceFacade.scheduleComponents(StandardNiFiServiceFacade.java:976) > at > org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85) > at > org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithWriteLock(NiFiServiceFacadeLock.java:173) > at > org.apache.nifi.web.NiFiServiceFacadeLock.scheduleLock(NiFiServiceFacadeLock.java:102) > at sun.reflect.GeneratedMethodAccessor557.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:629) > at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:618) > at > org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673) > at > org.apache.nifi.web.StandardNiFiServiceFacade$$EnhancerBySpringCGLIB$$8a758fa4.scheduleComponents() > at > org.apache.nifi.web.util.LocalComponentLifecycle.stopComponents(LocalComponentLifecycle.java:125) > at > org.apache.nifi.web.util.LocalComponentLifecycle.scheduleComponents(LocalComponentLifecycle.java:66) > at > org.apache.nifi.web.api.VersionsResource.updateFlowVersion(VersionsResource.java:1365) > at >
[jira] [Assigned] (NIFI-4841) NPE when reverting local modifications to a versioned process group
[ https://issues.apache.org/jira/browse/NIFI-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende reassigned NIFI-4841: - Assignee: Bryan Bende > NPE when reverting local modifications to a versioned process group > --- > > Key: NIFI-4841 > URL: https://issues.apache.org/jira/browse/NIFI-4841 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Charlie Meyer >Assignee: Bryan Bende >Priority: Major > Attachments: NIFI-4841.xml > > > I created a process group via importing from the registry. I then made a few > modifications including settings properties and connecting some components. I > then attempted to revert my local changes so I could update the flow to a > newer version. When reverting the local changes, NiFi threw a NPE with the > following stack trace: > {noformat} > 2018-02-05 17:18:52,356 INFO [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Stopping 1 Processors > 2018-02-05 17:18:52,477 ERROR [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Failed to update flow to new version > java.lang.NullPointerException: null > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.scheduleComponents(StandardProcessGroupDAO.java:179) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > 
org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$EnhancerBySpringCGLIB$$bc287b8b.scheduleComponents() > at > org.apache.nifi.web.StandardNiFiServiceFacade$3.update(StandardNiFiServiceFacade.java:981) > at > org.apache.nifi.web.revision.NaiveRevisionManager.updateRevision(NaiveRevisionManager.java:120) > at > org.apache.nifi.web.StandardNiFiServiceFacade.scheduleComponents(StandardNiFiServiceFacade.java:976) > at > org.apache.nifi.web.StandardNiFiServiceFacade$$FastClassBySpringCGLIB$$358780e0.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:85) > at > org.apache.nifi.web.NiFiServiceFacadeLock.proceedWithWriteLock(NiFiServiceFacadeLock.java:173) > at > org.apache.nifi.web.NiFiServiceFacadeLock.scheduleLock(NiFiServiceFacadeLock.java:102) > at sun.reflect.GeneratedMethodAccessor557.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethodWithGivenArgs(AbstractAspectJAdvice.java:629) > at > org.springframework.aop.aspectj.AbstractAspectJAdvice.invokeAdviceMethod(AbstractAspectJAdvice.java:618) > at > org.springframework.aop.aspectj.AspectJAroundAdvice.invoke(AspectJAroundAdvice.java:70) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > 
org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673) > at > org.apache.nifi.web.StandardNiFiServiceFacade$$EnhancerBySpringCGLIB$$8a758fa4.scheduleComponents() > at > org.apache.nifi.web.util.LocalComponentLifecycle.stopComponents(LocalComponentLifecycle.java:125) > at > org.apache.nifi.web.util.LocalComponentLifecycle.scheduleComponents(LocalComponentLifecycle.java:66) > at > org.apache.nifi.web.api.VersionsResource.updateFlowVersion(VersionsResource.java:1365) > at > org.apache.nifi.web.api.VersionsResource.lambda$null$22(VersionsResource.java:1305) >
[jira] [Commented] (NIFI-4841) NPE when reverting local modifications to a versioned process group
[ https://issues.apache.org/jira/browse/NIFI-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355424#comment-16355424 ] ASF GitHub Bot commented on NIFI-4841: -- GitHub user bbende opened a pull request: https://github.com/apache/nifi/pull/2454 NIFI-4841 Fixing NPE when reverting local changes involving remote gr… …oup ports Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi NIFI-4841 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2454.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2454 commit 634bd0dcc1184685cde652c55ec1448e33dbd370 Author: Bryan Bende Date: 2018-02-06T22:43:59Z NIFI-4841 Fixing NPE when reverting local changes involving remote group ports > NPE when reverting local modifications to a versioned process group > --- > > Key: NIFI-4841 > URL: https://issues.apache.org/jira/browse/NIFI-4841 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.5.0 >Reporter: Charlie Meyer >Priority: Major > Attachments: NIFI-4841.xml > > > I created a process group via importing from the registry. I then made a few > modifications including settings properties and connecting some components. I > then attempted to revert my local changes so I could update the flow to a > newer version.
When reverting the local changes, NiFi threw a NPE with the > following stack trace: > {noformat} > 2018-02-05 17:18:52,356 INFO [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Stopping 1 Processors > 2018-02-05 17:18:52,477 ERROR [Version Control Update Thread-1] > org.apache.nifi.web.api.VersionsResource Failed to update flow to new version > java.lang.NullPointerException: null > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO.scheduleComponents(StandardProcessGroupDAO.java:179) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$FastClassBySpringCGLIB$$10a99b47.invoke() > at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) > at > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:738) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) > at > org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) > at > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) > at > org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:673) > at > org.apache.nifi.web.dao.impl.StandardProcessGroupDAO$$EnhancerBySpringCGLIB$$bc287b8b.scheduleComponents() > at >
[GitHub] nifi pull request #2454: NIFI-4841 Fixing NPE when reverting local changes i...
GitHub user bbende opened a pull request: https://github.com/apache/nifi/pull/2454 NIFI-4841 Fixing NPE when reverting local changes involving remote gr… …oup ports Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi NIFI-4841 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2454.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2454 commit 634bd0dcc1184685cde652c55ec1448e33dbd370 Author: Bryan Bende Date: 2018-02-06T22:43:59Z NIFI-4841 Fixing NPE when reverting local changes involving remote group ports ---
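The PR title points at remote group ports, and the stack trace in the ticket shows the NPE arising while StandardProcessGroupDAO.scheduleComponents resolves the components to stop. A plausible shape of such a fix, sketched here purely as a standalone illustration (the class, method, and registry map below are hypothetical, not NiFi's actual code), is to skip component IDs that no longer resolve to a live component before scheduling:

```java
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical illustration of a null-guard fix for NIFI-4841-style NPEs:
// during a version revert, an ID captured earlier (e.g. for a removed
// remote group port) may no longer resolve, and dereferencing the null
// lookup result throws a NullPointerException.
public class ScheduleGuard {

    /** Resolve componentIds against a registry, skipping stale entries. */
    static Set<String> resolveSchedulable(Set<String> componentIds,
                                          Map<String, String> registry) {
        Set<String> resolved = new LinkedHashSet<>();
        for (String id : componentIds) {
            String component = registry.get(id);
            if (component == null) {
                // Without this guard the null would flow onward and
                // blow up when the scheduler dereferences it.
                continue;
            }
            resolved.add(component);
        }
        return resolved;
    }
}
```

The same effect could also be achieved by filtering the ID set before it reaches the DAO; either way, the invariant is that nothing null reaches the scheduling call.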
[jira] [Commented] (NIFI-4848) Update HttpComponents version
[ https://issues.apache.org/jira/browse/NIFI-4848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16355319#comment-16355319 ] Pierre Villard commented on NIFI-4848: -- Just want to comment here that this change should also be reflected in MiNiFi agent to avoid a new issue like MINIFI-435. > Update HttpComponents version > - > > Key: NIFI-4848 > URL: https://issues.apache.org/jira/browse/NIFI-4848 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Koji Kawamura >Assignee: Koji Kawamura >Priority: Major > > Following dependencies should be updated to the latest GA: > httpclient 4.5.3 -> 4.5.5 > httpcore 4.4.4 -> 4.4.9 > httpasyncclient 4.1.2 -> 4.1.3 -- This message was sent by Atlassian JIRA (v7.6.3#76005)