[jira] [Commented] (NIFI-5054) Nifi Couchbase Processors does not support User Authentication
[ https://issues.apache.org/jira/browse/NIFI-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513365#comment-16513365 ] ASF GitHub Bot commented on NIFI-5054: -- Github user ijokarumawak commented on the issue: https://github.com/apache/nifi/pull/2750 @mcgilman Thanks for reviewing. I've changed how Relationships are created. > Nifi Couchbase Processors does not support User Authentication > -- > > Key: NIFI-5054 > URL: https://issues.apache.org/jira/browse/NIFI-5054 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.5.0, 1.6.0 >Reporter: Shagun Jaju >Assignee: Koji Kawamura >Priority: Major > Labels: authentication, security > > Issue Description: NiFi Couchbase processors don't work with the new Couchbase versions 5.0 and 5.1. > Couchbase 5.x has introduced *Role Based Access Control (RBAC)*, a new security feature. > # All buckets must now be accessed by a *user*/*password* combination that > has a *role with access rights* to the bucket. > # Buckets no longer use bucket-level passwords. > # There is no default bucket and no sample buckets with blank passwords. > # You cannot create a user without a password. > (Ref: > https://developer.couchbase.com/documentation/server/5.0/introduction/whats-new.html > [https://blog.couchbase.com/new-sdk-authentication/] ) > > nifi-couchbase-processors: GetCouchbaseKey and PutCouchbaseKey, via the > Controller Service, still use the old authentication mechanism. > * org.apache.nifi.processors.couchbase.GetCouchbaseKey > * org.apache.nifi.processors.couchbase.PutCouchbaseKey > Ref: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-couchbase-bundle/nifi-couchbase-processors/src/main/java/org/apache/nifi/couchbase/CouchbaseClusterService.java#L116] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2750: NIFI-5054: Couchbase Authentication, NIFI-5257: Expand Cou...
[jira] [Commented] (MINIFICPP-515) Implement emplace_back for values in Property class
[ https://issues.apache.org/jira/browse/MINIFICPP-515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513361#comment-16513361 ] ASF GitHub Bot commented on MINIFICPP-515: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/359 MINIFICPP-515 Use emplace_back instead of push_back Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with MINIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [x] Is your initial contribution a single, squashed commit? ### For code changes: - [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [x] If applicable, have you updated the LICENSE file? - [x] If applicable, have you updated the NOTICE file? ### For documentation related changes: - [x] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-515 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/359.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #359 commit 0123dfa5579ca4d831fd608391ae54b5660a6d18 Author: Andrew I. Christianson Date: 2018-06-15T05:27:46Z MINIFICPP-515 Use emplace_back instead of push_back > Implement emplace_back for values in Property class > --- > > Key: MINIFICPP-515 > URL: https://issues.apache.org/jira/browse/MINIFICPP-515 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > We're calling push_back instead of emplace_back, as well as unnecessarily > rebuilding strings from c_string() values. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5145) MockPropertyValue.evaluateExpressionLanguage(FlowFile) cannot handle null inputs
[ https://issues.apache.org/jira/browse/NIFI-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513344#comment-16513344 ] ASF GitHub Bot commented on NIFI-5145: -- Github user alopresto commented on the issue: https://github.com/apache/nifi/pull/2749 Sorry, I didn't get a notification about this one. I've been working on some other things but will see if I can look at this tomorrow. It's 50/50 right now. > MockPropertyValue.evaluateExpressionLanguage(FlowFile) cannot handle null > inputs > > > Key: NIFI-5145 > URL: https://issues.apache.org/jira/browse/NIFI-5145 > Project: Apache NiFi > Issue Type: Bug >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Fix For: 1.7.0 > > > The method mentioned in the title line cannot handle null inputs, even though > the main NiFi execution classes can handle that scenario. This forces a hack to > pass testing with nulls that looks like this: > String val = flowFile != null ? > context.getProperty(PROP).evaluateExpressionLanguage(flowFile).getValue() : > context.getProperty(PROP).evaluateExpressionLanguage(new > HashMap()).getValue(); -- This message was sent by Atlassian JIRA (v7.6.3#76005)
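The caller-side hack in the ticket, and the fix it asks for, can be sketched with a toy evaluator. The class and method names below are hypothetical stand-ins, not the real NiFi or mock framework API:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in: an evaluator that rejects a null attribute source forces
// every caller to branch, while treating null as "no attributes" keeps the
// call sites simple. Names here are illustrative only.
public class NullGuardDemo {
    static String evaluateStrict(String template, Map<String, String> attrs) {
        if (attrs == null) {
            throw new NullPointerException("attrs must not be null");
        }
        String result = template;
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            result = result.replace("${" + e.getKey() + "}", e.getValue());
        }
        return result;
    }

    // The behavior the ticket wants: the null guard lives in the evaluator.
    static String evaluateLenient(String template, Map<String, String> attrs) {
        return evaluateStrict(template, attrs == null ? new HashMap<>() : attrs);
    }

    public static void main(String[] args) {
        Map<String, String> attrs = null;
        // Caller-side hack (mirrors the ternary in the issue description):
        String val = attrs != null
                ? evaluateStrict("hello ${name}", attrs)
                : evaluateStrict("hello ${name}", new HashMap<>());
        // With the guard inside the evaluator, the branch disappears:
        System.out.println(val.equals(evaluateLenient("hello ${name}", null))); // prints "true"
    }
}
```

Pushing the null check into the evaluator matches what the ticket says the main NiFi execution classes already do, so test code no longer needs the ternary workaround.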
[GitHub] nifi issue #2749: NIFI-5145 Fixed evaluateAttributeExpressions in mockproper...
[jira] [Resolved] (NIFI-5231) Record stats processor
[ https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto resolved NIFI-5231. - Resolution: Fixed > Record stats processor > -- > > Key: NIFI-5231 > URL: https://issues.apache.org/jira/browse/NIFI-5231 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Fix For: 1.7.0 > > > Should do the following: > > # Take a record reader. > # Count the # of records and add a record_count attribute to the flowfile. > # Allow user-defined properties that do the following: > ## Map attribute name -> record path. > ## Provide aggregate value counts for each record path statement. > ## Provide a total count for the record path operation. > ## Put those values on the flowfile as attributes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5209) Remove toolkit migration without password functionality
[ https://issues.apache.org/jira/browse/NIFI-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5209: Status: Patch Available (was: Reopened) > Remove toolkit migration without password functionality > --- > > Key: NIFI-5209 > URL: https://issues.apache.org/jira/browse/NIFI-5209 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Affects Versions: 1.7.0 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Blocker > Labels: hash, key, passwords, revert, security, toolkit > Fix For: 1.7.0 > > > In NIFI-4942, new functionality was added to allow Ambari clients to perform > the encrypted configuration migration without providing the original password > or key by using a secure hash of the original credential to demonstrate > knowledge of that value. The Ambari team found another way on their end to > perform this action, and rather than allow the {{./secure_hash.key}} behavior > to be released and then removed at a later time, complicating our security > posture and potentially creating difficult support cases, it is better to > remove it completely before the 1.7.0 release. > However, it is not as simple as just backing out a few commits, as necessary > refactoring of the tool code also occurred at that time. I will remove this > feature while maintaining the improvements made to the toolkit. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (NIFI-5209) Remove toolkit migration without password functionality
[ https://issues.apache.org/jira/browse/NIFI-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto reopened NIFI-5209: - There are some test resources that need to be deleted and the pom.xml RAT section removed. > Remove toolkit migration without password functionality > --- > > Key: NIFI-5209 > URL: https://issues.apache.org/jira/browse/NIFI-5209 > Project: Apache NiFi > Issue Type: Improvement > Components: Tools and Build >Affects Versions: 1.7.0 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Blocker > Labels: hash, key, passwords, revert, security, toolkit > Fix For: 1.7.0 > > > In NIFI-4942, new functionality was added to allow Ambari clients to perform > the encrypted configuration migration without providing the original password > or key by using a secure hash of the original credential to demonstrate > knowledge of that value. The Ambari team found another way on their end to > perform this action, and rather than allow the {{./secure_hash.key}} behavior > to be released and then removed at a later time, complicating our security > posture and potentially creating difficult support cases, it is better to > remove it completely before the 1.7.0 release. > However, it is not as simple as just backing out a few commits, as necessary > refactoring of the tool code also occurred at that time. I will remove this > feature while maintaining the improvements made to the toolkit. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2798: NIFI-5209 Removed unused test resources.
GitHub user alopresto opened a pull request: https://github.com/apache/nifi/pull/2798 NIFI-5209 Removed unused test resources. Removed RAT exclusion from pom.xml. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/alopresto/nifi NIFI-5209-rat Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2798.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2798 commit 36443d54c885c0e737f936cc76e68ca9d33d29bd Author: Andy LoPresto Date: 2018-06-15T04:54:37Z NIFI-5209 Removed unused test resources. Removed RAT exclusion from pom.xml. ---
[jira] [Commented] (NIFI-5209) Remove toolkit migration without password functionality
[jira] [Updated] (NIFI-5193) Improve ConfigEncryptionTool handling of complex user search mapping values
[ https://issues.apache.org/jira/browse/NIFI-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andy LoPresto updated NIFI-5193: Status: Patch Available (was: In Progress) > Improve ConfigEncryptionTool handling of complex user search mapping values > --- > > Key: NIFI-5193 > URL: https://issues.apache.org/jira/browse/NIFI-5193 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.6.0 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: regex, security, toolkit > > The {{ConfigEncryptionTool}} can fail to encrypt > {{login-identity-providers.xml}} or {{authorizers.xml}} if the XML contains a > User Search Mapping value which is interpreted as having regular expression > capture groups. > {code} > (& > (objectCategory=Person)(sAMAccountName=*)(!(UserAccountControl:1.2.840.113556.1.4.803:=2))(!(sAMAccountName=$*))) > {code} > Results in: > {code} > 2018/05/14 15:05:22 ERROR [main] > org.apache.nifi.properties.ConfigEncryptionTool: Encountered an error > java.lang.IllegalArgumentException: Illegal group reference > at java.util.regex.Matcher.appendReplacement(Matcher.java:857) > at java.util.regex.Matcher.replaceFirst(Matcher.java:1004) > at java.lang.String.replaceFirst(String.java:2178) > at java_lang_String$replaceFirst$6.call(Unknown Source) > at > org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:133) > at > org.apache.nifi.properties.ConfigEncryptionTool.serializeAuthorizersAndPreserveFormat(ConfigEncryptionTool.groovy:1246) > at > org.apache.nifi.properties.ConfigEncryptionTool$serializeAuthorizersAndPreserveFormat$6.callStatic(Unknown > Source) > at > org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCallStatic(CallSiteArray.java:56) > at > 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:194) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.callStatic(AbstractCallSite.java:214) > at > org.apache.nifi.properties.ConfigEncryptionTool.writeAuthorizers(ConfigEncryptionTool.groovy:1118) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSiteNoUnwrapNoCoerce.invoke(PogoMetaMethodSite.java:210) > at > org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:71) > at > org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:117) > at > org.apache.nifi.properties.ConfigEncryptionTool.main(ConfigEncryptionTool.groovy:1485) > at > org.apache.nifi.properties.ConfigEncryptionTool$main.call(Unknown Source) > at > org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113) > at > org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125) > at > org.apache.nifi.toolkit.encryptconfig.LegacyMode.run(LegacyMode.groovy:30) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > 
org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite$PogoCachedMethodSite.invoke(PogoMetaMethodSite.java:169) > at > org.codehaus.groovy.runtime.callsite.PogoMetaMethodSite.call(PogoMetaMethodSite.java:71) > at > org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48) > at >
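The "Illegal group reference" in the trace above comes from java.util.regex replacement-string parsing: `$` in a replacement must introduce a group reference, so an LDAP filter value containing `$*` breaks `String.replaceFirst`. `Matcher.quoteReplacement` escapes such values. A minimal reproduction (the `TOKEN` literals are illustrative, not the tool's actual placeholders):

```java
import java.util.regex.Matcher;

public class IllegalGroupDemo {
    public static void main(String[] args) {
        // "$*" in a replacement string: "$" must be followed by a group
        // number or ${name}, so Matcher.appendReplacement throws.
        String filter = "(!(sAMAccountName=$*))";

        boolean threw = false;
        try {
            "TOKEN".replaceFirst("TOKEN", filter);
        } catch (IllegalArgumentException e) {
            threw = true; // java.lang.IllegalArgumentException: Illegal group reference
        }

        // quoteReplacement escapes '$' and '\' so the value is treated literally.
        String safe = "TOKEN".replaceFirst("TOKEN", Matcher.quoteReplacement(filter));

        System.out.println(threw && safe.equals(filter)); // prints "true"
    }
}
```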
[jira] [Commented] (NIFI-5193) Improve ConfigEncryptionTool handling of complex user search mapping values
[ https://issues.apache.org/jira/browse/NIFI-5193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513331#comment-16513331 ] ASF GitHub Bot commented on NIFI-5193: -- GitHub user alopresto opened a pull request: https://github.com/apache/nifi/pull/2797 NIFI-5193 Fixed issue in ConfigEncryptionTool when XML contained regex-breaking LDAP filters Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [x] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [x] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [x] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/alopresto/nifi NIFI-5193 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2797.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2797 commit 16b2deaa27244c595f6544874b17dd893fe42e16 Author: Andy LoPresto Date: 2018-06-15T03:38:11Z NIFI-5193 Added logic to handle complex user filter expressions. Added unit tests. Added unit test resources. commit a6f21818e83a51d3ff84fb8cfbd6d5f03bff259d Author: Andy LoPresto Date: 2018-06-15T03:50:38Z NIFI-5193 Fixed comments. Refactored XmlSlurper instantiation to keep ignorable whitespace. commit fea1716e057c222ccc4dd659676a70f05af6c90b Author: Andy LoPresto Date: 2018-06-15T03:57:25Z NIFI-5193 Added logic to handle LIP complex user search filter. Added unit tests. Added unit test resources. commit 7a0471e1201602fea00feae76159d9fb1f84dce3 Author: Andy LoPresto Date: 2018-06-15T04:07:56Z NIFI-5193 Removed unnecessary substitution/repopulation logic from encrypt|decryptAuthorizers. All unit tests pass. commit a96b63ae6a6269f3277e781b6e6a76b0166b6abc Author: Andy LoPresto Date: 2018-06-15T04:33:35Z NIFI-5193 Removed unnecessary substitution/repopulation logic from CET. Removed unnecessary unit tests. commit cca5ce4c718927cb42c1c44762a642119e58062d Author: Andy LoPresto Date: 2018-06-15T04:36:54Z NIFI-5193 Removed unnecessary commons-text dependency from pom.xml. 
> Improve ConfigEncryptionTool handling of complex user search mapping values > --- > > Key: NIFI-5193 > URL: https://issues.apache.org/jira/browse/NIFI-5193 > Project: Apache NiFi > Issue Type: Bug > Components: Tools and Build >Affects Versions: 1.6.0 >Reporter: Andy LoPresto >Assignee: Andy LoPresto >Priority: Major > Labels: regex, security, toolkit > > The {{ConfigEncryptionTool}} can fail to encrypt > {{login-identity-providers.xml}} or {{authorizers.xml}} if the XML contains a > User Search Mapping value which is interpreted as having regular expression > capture groups. > {code} > (& > (objectCategory=Person)(sAMAccountName=*)(!(UserAccountControl:1.2.840.113556.1.4.803:=2))(!(sAMAccountName=$*))) > {code} > Results in: > {code} > 2018/05/14 15:05:22 ERROR [main] > org.apache.nifi.properties.ConfigEncryptionTool: Encountered an error > java.lang.IllegalArgumentException: Illegal group
[jira] [Commented] (NIFI-5252) Allow arbitrary headers in PutEmail processor
[ https://issues.apache.org/jira/browse/NIFI-5252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513254#comment-16513254 ] ASF GitHub Bot commented on NIFI-5252: -- Github user dtrodrigues commented on the issue: https://github.com/apache/nifi/pull/2787 moved regex compilation to when processor is scheduled and ensured header values are encoded appropriately > Allow arbitrary headers in PutEmail processor > - > > Key: NIFI-5252 > URL: https://issues.apache.org/jira/browse/NIFI-5252 > Project: Apache NiFi > Issue Type: Improvement >Reporter: Dustin Rodrigues >Priority: Minor > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
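The optimization mentioned in the comment above — compiling the header-name regex once when the processor is scheduled, rather than per FlowFile — is a general pattern. A minimal sketch outside NiFi, with hypothetical class and method names (this is not the PutEmail code itself):

```java
import java.util.regex.Pattern;

// Hypothetical stand-in for a processor that validates header names.
// Compiling the Pattern once up front (the moment NiFi's @OnScheduled
// callback represents) avoids re-parsing the regex for every item.
public class HeaderValidator {
    private final Pattern headerName;

    public HeaderValidator(String regex) {
        this.headerName = Pattern.compile(regex); // compiled once, reused
    }

    public boolean isValidHeader(String name) {
        return headerName.matcher(name).matches();
    }

    public static void main(String[] args) {
        HeaderValidator v = new HeaderValidator("[A-Za-z0-9-]+");
        System.out.println(v.isValidHeader("X-Custom-Header"));
        System.out.println(v.isValidHeader("bad header"));
    }
}
```

Moving the `Pattern.compile` call out of the per-item path is the same change the comment describes: the regex is parsed once per schedule instead of once per FlowFile.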
[jira] [Commented] (NIFI-5223) Allow the usage of expression language for properties of RecordSetWriters
[ https://issues.apache.org/jira/browse/NIFI-5223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513053#comment-16513053 ] ASF GitHub Bot commented on NIFI-5223: -- Github user MikeThomsen commented on a diff in the pull request: https://github.com/apache/nifi/pull/2736#discussion_r195584322

--- Diff: nifi-nar-bundles/nifi-standard-services/nifi-record-serialization-service-api/src/main/java/org/apache/nifi/serialization/RecordSetWriterFactory.java ---
@@ -76,5 +78,23 @@
 * @return a RecordSetWriter that can write record sets to an OutputStream
 * @throws IOException if unable to read from the given InputStream
 */
-RecordSetWriter createWriter(ComponentLog logger, RecordSchema schema, OutputStream out) throws SchemaNotFoundException, IOException;
--- End diff --

I agree with @pvillard31 on this point. We can't really change service interfaces without a very compelling reason; otherwise we'll alienate third-party developers.

> Allow the usage of expression language for properties of RecordSetWriters
> -
>
> Key: NIFI-5223
> URL: https://issues.apache.org/jira/browse/NIFI-5223
> Project: Apache NiFi
> Issue Type: Improvement
> Reporter: Johannes Peter
> Assignee: Johannes Peter
> Priority: Major
>
> To allow the usage of expression language for properties of RecordSetWriters, the method createWriter of the interface RecordSetWriterFactory has to be enhanced by a parameter to provide a map containing variables of a FlowFile.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
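The interface change under discussion can be illustrated with simplified stand-in types. The real RecordSetWriterFactory also involves RecordSchema, ComponentLog, and schema resolution, all omitted here; this is a sketch of the idea being debated, not the merged NiFi API:

```java
import java.io.OutputStream;
import java.util.Collections;
import java.util.Map;

// Simplified stand-ins for the NiFi types involved.
interface RecordSetWriter { /* writes a record set to a stream */ }

interface RecordSetWriterFactory {
    // Existing style: no access to per-FlowFile variables.
    RecordSetWriter createWriter(OutputStream out);

    // Proposed style: FlowFile attributes are passed in so Expression
    // Language in writer properties can be evaluated against them. A
    // default method keeps existing implementations compiling, which is
    // one way to soften the compatibility concern for third parties.
    default RecordSetWriter createWriter(OutputStream out, Map<String, String> variables) {
        return createWriter(out);
    }
}

public class WriterFactorySketch {
    public static void main(String[] args) {
        RecordSetWriterFactory factory = out -> new RecordSetWriter() {};
        // Old callers still work; new callers can supply variables.
        RecordSetWriter w = factory.createWriter(null,
                Collections.singletonMap("filename", "data.csv"));
        System.out.println(w != null);
    }
}
```

Whether a default method is acceptable here is exactly the trade-off the reviewers are weighing: it avoids breaking third-party implementations at the cost of silently ignoring the variables in old ones.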
[jira] [Commented] (NIFI-5292) Rename existing ElasticSearch client service impl to specify it is for 5.X
[ https://issues.apache.org/jira/browse/NIFI-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513050#comment-16513050 ] ASF GitHub Bot commented on NIFI-5292: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2782 @markap14 I cleaned it up and got most of what I suggested done. Couldn't figure out a good way to detect the protocol ranges, but this should do. > Rename existing ElasticSearch client service impl to specify it is for 5.X > -- > > Key: NIFI-5292 > URL: https://issues.apache.org/jira/browse/NIFI-5292 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.7.0 >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Labels: Migration > > The current version of the impl is 5.X, but has a generic name that will be > confusing down the road. > Add an ES 6.X client service as well. > > Migration note: Anyone using the existing client service component will have > to create a new one that corresponds to the version of ElasticSearch they are > using. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5275) PostHTTP - Hung connections and zero reuse of existing connections
[ https://issues.apache.org/jira/browse/NIFI-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513027#comment-16513027 ] ASF GitHub Bot commented on NIFI-5275: -- GitHub user mosermw opened a pull request: https://github.com/apache/nifi/pull/2796

NIFI-5275 PostHTTP SocketConfig setup, fixed connection pool when ... using HTTPS, setup idle connection checker, setup request retry handler, improved some exception handling

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/mosermw/nifi NIFI-5275

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/2796.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2796

commit ec33b97d4cc6744404e498a20da3a4faebc83c59
Author: Mike Moser
Date:   2018-06-14T21:15:39Z

    NIFI-5275 PostHTTP SocketConfig setup, fixed connection pool when using HTTPS, setup idle connection checker, setup request retry handler, improved some exception handling

> PostHTTP - Hung connections and zero reuse of existing connections
> --
>
> Key: NIFI-5275
> URL: https://issues.apache.org/jira/browse/NIFI-5275
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.6.0
> Reporter: Steven Youtsey
> Assignee: Michael Moser
> Priority: Major
>
> Connection setups, the HEAD request, and the DELETE request do not have any timeout associated with them. When the remote server goes sideways, these actions will wait indefinitely and appear as being hung. See https://issues.apache.org/jira/browse/HTTPCLIENT-1892 for an explanation as to why the initial connection setups are not timing out.
> Connections, though pooled, are not being re-used. A new connection is established for every POST. This creates a burden on highly loaded remote listener servers. Verified by both netstat and turning on Debug for org.apache.http.impl.conn.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5275) PostHTTP - Hung connections and zero reuse of existing connections
[ https://issues.apache.org/jira/browse/NIFI-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16513003#comment-16513003 ] Michael Moser commented on NIFI-5275: - I spent a good bit of time testing this, and I learned that normal connections were reused properly by the connection pool, but HTTPS connections were *not* being reused. > PostHTTP - Hung connections and zero reuse of existing connections > -- > > Key: NIFI-5275 > URL: https://issues.apache.org/jira/browse/NIFI-5275 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.6.0 >Reporter: Steven Youtsey >Assignee: Michael Moser >Priority: Major > > Connection setups, the HEAD request, and the DELETE request do not have any > timeout associated with them. When the remote server goes sideways, these > actions will wait indefinitely and appear as being hung. See > https://issues.apache.org/jira/browse/HTTPCLIENT-1892 for an explanation as > to why the initial connection setups are not timing out. > Connections, though pooled, are not being re-used. A new connection is > established for every POST. This creates a burden on highly loaded remote > listener servers. Verified by both netstat and turning on Debug for > org.apache.http.impl.conn. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFIREG-176) Release Manager - Release 0.2.0
Kevin Doran created NIFIREG-176: --- Summary: Release Manager - Release 0.2.0 Key: NIFIREG-176 URL: https://issues.apache.org/jira/browse/NIFIREG-176 Project: NiFi Registry Issue Type: Task Reporter: Kevin Doran Assignee: Kevin Doran Fix For: 0.2.0 Perform release manager activities for 0.2.0 release. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-5249) Dockerfile enhancements
[ https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Storck resolved NIFI-5249. --- Resolution: Resolved Fix Version/s: 1.7.0 > Dockerfile enhancements > --- > > Key: NIFI-5249 > URL: https://issues.apache.org/jira/browse/NIFI-5249 > Project: Apache NiFi > Issue Type: Improvement > Components: Docker >Reporter: Peter Wilcsinszky >Priority: Minor > Fix For: 1.7.0 > > > * make environment variables more explicit > * create data and log directories > * add procps for process visibility inside the container -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5249) Dockerfile enhancements
[ https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512903#comment-16512903 ] ASF GitHub Bot commented on NIFI-5249: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2747 > Dockerfile enhancements > --- > > Key: NIFI-5249 > URL: https://issues.apache.org/jira/browse/NIFI-5249 > Project: Apache NiFi > Issue Type: Improvement > Components: Docker >Reporter: Peter Wilcsinszky >Priority: Minor > > * make environment variables more explicit > * create data and log directories > * add procps for process visibility inside the container -- This message was sent by Atlassian JIRA (v7.6.3#76005)
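The three enhancement bullets in NIFI-5249 translate naturally into Dockerfile directives. A hypothetical fragment of what such changes could look like — the variable names and paths below are illustrative assumptions, not the actual apache/nifi Dockerfile:

```dockerfile
# Illustrative only: names and paths are assumptions, not NiFi's real Dockerfile.
# 1) Make environment variables more explicit.
ENV NIFI_BASE_DIR=/opt/nifi \
    NIFI_HOME=/opt/nifi/nifi-current \
    NIFI_LOG_DIR=/opt/nifi/nifi-current/logs

# 2) Create data and log directories up front so volume mounts behave predictably.
RUN mkdir -p "${NIFI_HOME}/data" "${NIFI_LOG_DIR}"

# 3) Add procps so `ps` works for process visibility inside the container.
RUN apt-get update && apt-get install -y procps && rm -rf /var/lib/apt/lists/*
```

Declaring directories and variables explicitly in the image (rather than relying on the startup script to create them) makes the container's layout visible from the Dockerfile alone.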
[jira] [Commented] (NIFI-5274) ReplaceText can produce StackOverflowError which causes admin yield
[ https://issues.apache.org/jira/browse/NIFI-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512802#comment-16512802 ] ASF GitHub Bot commented on NIFI-5274: -- Github user mosermw commented on the issue: https://github.com/apache/nifi/pull/2767 @mattyb149 I think the issue is whether there is a reasonable expectation that a user would loop back a 'failure' relationship, and precedent set in other similar processors. For example, I consider ReplaceText in the same category as processors that modify content such as CompressContent, UnpackContent, and EncryptContent. None of those processors penalize flowfiles sent to failure. In those processors it's not reasonable to expect a failure to correct itself, so it's not reasonable to loop back the failure relationship. I just followed that precedent when modifying ReplaceText for this PR. > ReplaceText can produce StackOverflowError which causes admin yield > --- > > Key: NIFI-5274 > URL: https://issues.apache.org/jira/browse/NIFI-5274 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.6.0 >Reporter: Michael Moser >Assignee: Michael Moser >Priority: Major > > Regex Replace mode can easily produce StackOverflowError. Certain regular > expressions are implemented using recursion, which when used on large input > text can cause StackOverflowError. This causes the ReplaceText processor to > rollback and admin yield, which causes the input flowfile to get stuck in the > input queue. > We should be able to catch this condition and route the flowfile to failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
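The failure mode described in NIFI-5274 — java.util.regex matching certain patterns recursively, so a large input overflows the thread stack — can be reproduced outside NiFi in a few lines. This is a generic demonstration of the error and of catching it, not ReplaceText's actual code:

```java
import java.util.regex.Pattern;

// Alternation inside a repeated group is matched recursively by
// java.util.regex, so recursion depth grows with input length and a
// long enough input overflows the thread stack.
public class RegexStackOverflowDemo {

    static boolean overflows(String regex, int inputLength) {
        char[] chars = new char[inputLength];
        java.util.Arrays.fill(chars, 'a');
        try {
            Pattern.compile(regex).matcher(new String(chars)).matches();
            return false;
        } catch (StackOverflowError e) {
            // This is the error a processor must catch to route the
            // item to failure instead of rolling back and yielding.
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(overflows("(a|b)*", 1_000_000));
    }
}
```

Catching `StackOverflowError` (an `Error`, not an `Exception`) at a well-defined boundary like this is unusual but deliberate here: the stack has fully unwound by the time the handler runs, and routing to failure beats wedging the input queue.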
[jira] [Updated] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman updated NIFI-5311: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512786#comment-16512786 ] ASF GitHub Bot commented on NIFI-5311: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2795 Thanks @markap14! This has been merged to master. > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512787#comment-16512787 ] ASF GitHub Bot commented on NIFI-5311: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2795 > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
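The behavior NIFI-5311 describes — return the web request when validation completes or after 50 milliseconds, whichever comes first — is a standard bounded wait. A minimal sketch with a CountDownLatch and hypothetical names (not the framework's actual implementation):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Generic bounded wait: block the caller until background validation
// signals completion, but never longer than the given window.
public class BoundedValidationWait {

    static boolean awaitValidation(CountDownLatch validated, long millis) {
        try {
            // true  -> validation finished within the window
            // false -> timed out; respond anyway ("Validating..." in the UI)
            return validated.await(millis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false; // treat interruption as "not yet validated"
        }
    }

    public static void main(String[] args) {
        CountDownLatch validated = new CountDownLatch(1);
        new Thread(validated::countDown).start(); // simulated fast validation
        System.out.println(awaitValidation(validated, 50));
    }
}
```

The point of the cap is responsiveness: a fast validation makes the response reflect the final state, while a slow one never holds the web request hostage.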
[jira] [Commented] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512767#comment-16512767 ] ASF GitHub Bot commented on NIFI-5311: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2795 Will review... > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (NIFI-4907) Provenance authorization refactoring
[ https://issues.apache.org/jira/browse/NIFI-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt Gilman resolved NIFI-4907. --- Resolution: Fixed Fix Version/s: 1.7.0 > Provenance authorization refactoring > > > Key: NIFI-4907 > URL: https://issues.apache.org/jira/browse/NIFI-4907 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > Fix For: 1.7.0 > > > Currently, the 'view the data' component policy is too tightly coupled with > Provenance queries. The 'query provenance' policy should be the only policy > required for viewing Provenance query results. Both 'view the component' and > 'view the data' policies should be used to refine the appropriate visibility > of event details - but not the event itself. > 1) Component Visibility > The authorization of Provenance events is inconsistent with the behavior of > the graph. For example, if a user does not have 'view the component' policy, > the graph shows this component as a "black box" (no details such as name, > UUID, etc.) However, when querying Provenance, this component will show up > including the Component Type and the Component Name. This is in effect a > violation of the policy. These component details should be obscured in the > Provenance event displayed if user does not have the appropriate 'view the > component' policy. > 2) Data Visibility > For a Provenance query, all events should be visible as long as the user > performing the query belongs to the 'query provenance' global policy. As > mentioned above, some information about the component may be obscured > depending on 'view the component' policy, but the event itself should be > visible. Additionally, details of the event (clicking the View Details "i" > icon) should only be accessible if the user belongs to the 'view the data' > policy for the affected component. 
If the user is not in the appropriate > 'view the data' policy, a popup warning should be displayed indicating the > reason details are not visible with more specific detail than the current > "Contact the system administrator". > 3) Lineage Graphs > As with the Provenance table view recommendation above, the lineage graph > should display all events. Currently, if the lineage graph includes an event > belonging to a component which the user does not have 'view the data', it is > shown on the graph as "UNKNOWN". As with Data Visibility mentioned above, the > graph should indicate the event type as long as the user is in the 'view the > component'. Subsequent "View Details" on the event should only be visible if > the user is in the 'view the data' policy. > In summary, for Provenance query results and lineage graphs, all events > should be shown. Component Name and Component Type information should be > conditionally visible depending on the corresponding component policy 'view > the component' policy. Event details including Provenance event type and > FlowFile information should be conditionally available depending on the > corresponding component policy 'view the data'. Inability to display event > details should provide feedback to the user indicating the reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4907) Provenance authorization refactoring
[ https://issues.apache.org/jira/browse/NIFI-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512762#comment-16512762 ] ASF GitHub Bot commented on NIFI-4907: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2703 > Provenance authorization refactoring > > > Key: NIFI-4907 > URL: https://issues.apache.org/jira/browse/NIFI-4907 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > Fix For: 1.7.0 > > > Currently, the 'view the data' component policy is too tightly coupled with > Provenance queries. The 'query provenance' policy should be the only policy > required for viewing Provenance query results. Both 'view the component' and > 'view the data' policies should be used to refine the appropriate visibility > of event details - but not the event itself. > 1) Component Visibility > The authorization of Provenance events is inconsistent with the behavior of > the graph. For example, if a user does not have 'view the component' policy, > the graph shows this component as a "black box" (no details such as name, > UUID, etc.) However, when querying Provenance, this component will show up > including the Component Type and the Component Name. This is in effect a > violation of the policy. These component details should be obscured in the > Provenance event displayed if user does not have the appropriate 'view the > component' policy. > 2) Data Visibility > For a Provenance query, all events should be visible as long as the user > performing the query belongs to the 'query provenance' global policy. As > mentioned above, some information about the component may be obscured > depending on 'view the component' policy, but the event itself should be > visible. 
Additionally, details of the event (clicking the View Details "i" > icon) should only be accessible if the user belongs to the 'view the data' > policy for the affected component. If the user is not in the appropriate > 'view the data' policy, a popup warning should be displayed indicating the > reason details are not visible with more specific detail than the current > "Contact the system administrator". > 3) Lineage Graphs > As with the Provenance table view recommendation above, the lineage graph > should display all events. Currently, if the lineage graph includes an event > belonging to a component which the user does not have 'view the data', it is > shown on the graph as "UNKNOWN". As with Data Visibility mentioned above, the > graph should indicate the event type as long as the user is in the 'view the > component'. Subsequent "View Details" on the event should only be visible if > the user is in the 'view the data' policy. > In summary, for Provenance query results and lineage graphs, all events > should be shown. Component Name and Component Type information should be > conditionally visible depending on the corresponding component policy 'view > the component' policy. Event details including Provenance event type and > FlowFile information should be conditionally available depending on the > corresponding component policy 'view the data'. Inability to display event > details should provide feedback to the user indicating the reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4907) Provenance authorization refactoring
[ https://issues.apache.org/jira/browse/NIFI-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512759#comment-16512759 ] ASF GitHub Bot commented on NIFI-4907: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2703 Thanks @markobean! This has been merged to master. > Provenance authorization refactoring > > > Key: NIFI-4907 > URL: https://issues.apache.org/jira/browse/NIFI-4907 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > > Currently, the 'view the data' component policy is too tightly coupled with > Provenance queries. The 'query provenance' policy should be the only policy > required for viewing Provenance query results. Both 'view the component' and > 'view the data' policies should be used to refine the appropriate visibility > of event details - but not the event itself. > 1) Component Visibility > The authorization of Provenance events is inconsistent with the behavior of > the graph. For example, if a user does not have 'view the component' policy, > the graph shows this component as a "black box" (no details such as name, > UUID, etc.) However, when querying Provenance, this component will show up > including the Component Type and the Component Name. This is in effect a > violation of the policy. These component details should be obscured in the > Provenance event displayed if user does not have the appropriate 'view the > component' policy. > 2) Data Visibility > For a Provenance query, all events should be visible as long as the user > performing the query belongs to the 'query provenance' global policy. As > mentioned above, some information about the component may be obscured > depending on 'view the component' policy, but the event itself should be > visible. 
Additionally, details of the event (clicking the View Details "i" > icon) should only be accessible if the user belongs to the 'view the data' > policy for the affected component. If the user is not in the appropriate > 'view the data' policy, a popup warning should be displayed indicating the > reason details are not visible with more specific detail than the current > "Contact the system administrator". > 3) Lineage Graphs > As with the Provenance table view recommendation above, the lineage graph > should display all events. Currently, if the lineage graph includes an event > belonging to a component which the user does not have 'view the data', it is > shown on the graph as "UNKNOWN". As with Data Visibility mentioned above, the > graph should indicate the event type as long as the user is in the 'view the > component'. Subsequent "View Details" on the event should only be visible if > the user is in the 'view the data' policy. > In summary, for Provenance query results and lineage graphs, all events > should be shown. Component Name and Component Type information should be > conditionally visible depending on the corresponding component policy 'view > the component' policy. Event details including Provenance event type and > FlowFile information should be conditionally available depending on the > corresponding component policy 'view the data'. Inability to display event > details should provide feedback to the user indicating the reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5313) Provenance Event Details
Matt Gilman created NIFI-5313: - Summary: Provenance Event Details Key: NIFI-5313 URL: https://issues.apache.org/jira/browse/NIFI-5313 Project: Apache NiFi Issue Type: Bug Components: Core Framework Reporter: Matt Gilman Review the information entered into provenance event details to avoid inadvertently including component-specific information. With the changes introduced in NIFI-4907, we are implementing more granular provenance event authorization. By leveraging the fields of the provenance event, we can take into account the user accessing a particular event when overriding or redacting fields. This is not possible for information stored in the Details field. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table
[ https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcio Sugar updated NIFI-4926: --- Description: I'm trying to replicate a table from one database to another using NiFi. My flow is just a QueryDatabaseTable connected to a PutDatabaseRecord. The former fails with this SQLException after reading the whole table: {code:java} 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] o.a.n.c.s.StandardProcessScheduler Starting QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 threads 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER State: StandardStateMap[version=54, values={}] 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query SELECT * FROM FXSCHEMA.USER 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = false 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, section=4], offset=0, length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 Avro records; transferring to 'success' 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
Unable to execute SQL select query SELECT * FROM FXSCHEMA.USER due to org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro.: {} org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro. at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571) at org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] Invalid operation: result set is closed. 
ERRORCODE=-4470, SQLSTATE=null at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452) at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256) at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289) ... 13 common frames omitted {code} According to [DB2's documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670] and Matt Burgess' [reply|https://community.hortonworks.com/questions/154948/connecting-apache-nifi-and-querying-tables-to-db2.html], this particular exception could be avoided by adding this setting (semicolon
[jira] [Created] (NIFI-5312) QueryDatabaseTable updates state when an SQLException is thrown
Marcio Sugar created NIFI-5312: -- Summary: QueryDatabaseTable updates state when an SQLException is thrown Key: NIFI-5312 URL: https://issues.apache.org/jira/browse/NIFI-5312 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.6.0, 1.5.0 Environment: Ubuntu 16.04 Apache NiFi 1.5.0, 1.6.0 IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1) IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2) Notes: (1) SELECT * FROM SYSIBMADM.ENV_INST_INFO (2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version Reporter: Marcio Sugar I noticed that when an SQLException is thrown, at least in the situation described by NIFI-4926, QueryDatabaseTable still updates the state of the Maximum-value Columns. This means that when something goes wrong, a potentially large number of rows can be skipped almost silently. (The error does appear on the Bulletin Board, but once the message disappears from the Bulletin Board there is no remaining indication of the problem; the processor has no relationship other than "Success" to route failures to.) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
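The behavior the report asks for can be sketched as follows. This is an illustrative example, not NiFi's implementation: the class, the state key, and the query stub are hypothetical. The point is that the maximum-value-column watermark is committed only after the query succeeds, so a mid-fetch SQLException leaves the previous watermark intact and no rows are silently skipped.

```java
// Illustrative sketch: persist the new maximum-value-column watermark
// only on success, so a failed query batch is re-read on the next run.
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class WatermarkSketch {
    final Map<String, String> state = new HashMap<>();

    void onTrigger(boolean queryFails) {
        try {
            String newMax = runQuery(queryFails); // may throw mid-fetch
            state.put("max.id", newMax);          // commit state only on success
        } catch (SQLException e) {
            // State is untouched; the next run starts from the old watermark
            // instead of silently skipping the rows of the failed batch.
        }
    }

    // Stand-in for the real query; fails like the DB2 case in NIFI-4926.
    String runQuery(boolean fail) throws SQLException {
        if (fail) throw new SQLException("Invalid operation: result set is closed");
        return "652026";
    }

    public static void main(String[] args) {
        WatermarkSketch s = new WatermarkSketch();
        s.onTrigger(false); // success: watermark advances
        s.onTrigger(true);  // failure: watermark unchanged
        System.out.println(s.state.get("max.id"));
    }
}
```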
[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table
[ https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcio Sugar updated NIFI-4926: --- Description: I'm trying to replicate a table from one database to another using NiFi. My flow is just a QueryDatabaseTable connected to a PutDatabaseRecord. The former fails with this SQLException after reading the whole table: {code:java} 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] o.a.n.c.s.StandardProcessScheduler Starting QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 threads 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER State: StandardStateMap[version=54, values={}] 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query SELECT * FROM FXSCHEMA.USER 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = false 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, section=4], offset=0, length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 Avro records; transferring to 'success' 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
Unable to execute SQL select query SELECT * FROM FXSCHEMA.USER due to org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro.: {} org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro. at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571) at org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] Invalid operation: result set is closed. 
ERRORCODE=-4470, SQLSTATE=null at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452) at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256) at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289) ... 13 common frames omitted {code} According to [DB2's documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this particular exception could be avoided by adding this setting (semicolon included) to the JDBC connection URL: {code:java} allowNextOnExhaustedResultSet=1;{code} But it didn't make a difference. I believe the reason
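For illustration, the property mentioned above would be appended to a DB2 JDBC connection URL as follows. The host, port, database name, and helper method are placeholders; only the property name comes from IBM's documentation, which specifies a colon before the first property and a semicolon after each key/value pair.

```java
// Sketch of attaching a driver property to an IBM Data Server Driver URL.
public class Db2UrlSketch {

    // Appends a single property using IBM's documented URL syntax:
    // jdbc:db2://host:port/db:key=value;
    static String withProperty(String baseUrl, String key, String value) {
        return baseUrl + ":" + key + "=" + value + ";";
    }

    public static void main(String[] args) {
        String url = withProperty("jdbc:db2://dbhost:50000/MYDB",
                "allowNextOnExhaustedResultSet", "1");
        System.out.println(url);
        // jdbc:db2://dbhost:50000/MYDB:allowNextOnExhaustedResultSet=1;
    }
}
```

As the reporter notes further below, this setting did not resolve the issue in their environment, so it should be treated as a diagnostic step rather than a fix.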
[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table
[ https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcio Sugar updated NIFI-4926: --- Affects Version/s: 1.6.0 Environment: Ubuntu 16.04 Apache NiFi 1.5.0, 1.6.0 IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1) IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2) Notes: (1) SELECT * FROM SYSIBMADM.ENV_INST_INFO (2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version was: ubuntu 16.04 nifi 1.5.0 db2 v10.5.0.7 JDBC driver db2jcc4-10.5.0.6 Description: I'm trying to replicate a table from one database to another using NiFi. My flow is just a QueryDatabaseTable connected to a PutDatabaseRecord. The former fails with this SQLException after reading the whole table: {code:java} 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] o.a.n.c.s.StandardProcessScheduler Starting QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 threads 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER State: StandardStateMap[version=54, values={}] 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query SELECT * FROM FXSCHEMA.USER 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = false 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, section=4], offset=0, length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 Avro records; transferring to 'success' 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute SQL select query SELECT * FROM FXSCHEMA.USER due to org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro.: {} org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro. at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291) at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571) at org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285) at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147) at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47) at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.kd.a(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source) at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322) at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452) at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
[jira] [Commented] (NIFI-4907) Provenance authorization refactoring
[ https://issues.apache.org/jira/browse/NIFI-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512570#comment-16512570 ] ASF GitHub Bot commented on NIFI-4907: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2703 Thanks for having a look. I'll include these when I merge in your changes. > Provenance authorization refactoring > > > Key: NIFI-4907 > URL: https://issues.apache.org/jira/browse/NIFI-4907 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > > Currently, the 'view the data' component policy is too tightly coupled with > Provenance queries. The 'query provenance' policy should be the only policy > required for viewing Provenance query results. Both 'view the component' and > 'view the data' policies should be used to refine the appropriate visibility > of event details - but not the event itself. > 1) Component Visibility > The authorization of Provenance events is inconsistent with the behavior of > the graph. For example, if a user does not have 'view the component' policy, > the graph shows this component as a "black box" (no details such as name, > UUID, etc.) However, when querying Provenance, this component will show up > including the Component Type and the Component Name. This is in effect a > violation of the policy. These component details should be obscured in the > Provenance event displayed if user does not have the appropriate 'view the > component' policy. > 2) Data Visibility > For a Provenance query, all events should be visible as long as the user > performing the query belongs to the 'query provenance' global policy. As > mentioned above, some information about the component may be obscured > depending on 'view the component' policy, but the event itself should be > visible. 
Additionally, details of the event (clicking the View Details "i" > icon) should only be accessible if the user belongs to the 'view the data' > policy for the affected component. If the user is not in the appropriate > 'view the data' policy, a popup warning should be displayed indicating the > reason details are not visible with more specific detail than the current > "Contact the system administrator". > 3) Lineage Graphs > As with the Provenance table view recommendation above, the lineage graph > should display all events. Currently, if the lineage graph includes an event > belonging to a component which the user does not have 'view the data', it is > shown on the graph as "UNKNOWN". As with Data Visibility mentioned above, the > graph should indicate the event type as long as the user is in the 'view the > component'. Subsequent "View Details" on the event should only be visible if > the user is in the 'view the data' policy. > In summary, for Provenance query results and lineage graphs, all events > should be shown. Component Name and Component Type information should be > conditionally visible depending on the corresponding component policy 'view > the component' policy. Event details including Provenance event type and > FlowFile information should be conditionally available depending on the > corresponding component policy 'view the data'. Inability to display event > details should provide feedback to the user indicating the reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
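The visibility rules proposed in the ticket above can be sketched as a small decision helper. The policy names and method shapes below are illustrative only, not NiFi's actual authorization API:

```java
import java.util.Set;

// Illustrative sketch of the proposed provenance visibility rules:
// the event row needs only 'query provenance'; component details and
// event details require the additional component-level policies.
public class ProvenanceVisibility {
    enum Policy { QUERY_PROVENANCE, VIEW_THE_COMPONENT, VIEW_THE_DATA }

    // The event itself is listed whenever the user may query provenance.
    static boolean eventVisible(Set<Policy> userPolicies) {
        return userPolicies.contains(Policy.QUERY_PROVENANCE);
    }

    // Component Name and Component Type are shown only with 'view the component'.
    static boolean componentDetailsVisible(Set<Policy> userPolicies) {
        return eventVisible(userPolicies)
                && userPolicies.contains(Policy.VIEW_THE_COMPONENT);
    }

    // Event details (the View Details "i" icon) additionally require 'view the data'.
    static boolean eventDetailsVisible(Set<Policy> userPolicies) {
        return eventVisible(userPolicies)
                && userPolicies.contains(Policy.VIEW_THE_DATA);
    }

    public static void main(String[] args) {
        Set<Policy> user = Set.of(Policy.QUERY_PROVENANCE);
        System.out.println(eventVisible(user));            // true: row is listed
        System.out.println(componentDetailsVisible(user)); // false: component obscured
    }
}
```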
[GitHub] nifi issue #2703: NIFI-4907: add 'view provenance' component policy
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2703 Thanks for having a look. I'll include these when I merge in your changes. ---
[GitHub] nifi pull request #2750: NIFI-5054: Couchbase Authentication, NIFI-5257: Exp...
Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/2750#discussion_r195449260 --- Diff: nifi-nar-bundles/nifi-couchbase-bundle/nifi-couchbase-processors/src/main/java/org/apache/nifi/processors/couchbase/AbstractCouchbaseProcessor.java --- @@ -39,54 +38,32 @@ import com.couchbase.client.core.CouchbaseException; import com.couchbase.client.java.Bucket; +import static org.apache.nifi.couchbase.CouchbaseConfigurationProperties.BUCKET_NAME; +import static org.apache.nifi.couchbase.CouchbaseConfigurationProperties.COUCHBASE_CLUSTER_SERVICE; + /** - * Provides common functionalities for Couchbase processors. + * Provides common functionality for Couchbase processors. */ public abstract class AbstractCouchbaseProcessor extends AbstractProcessor { -public static final PropertyDescriptor DOCUMENT_TYPE = new PropertyDescriptor.Builder().name("Document Type") -.description("The type of contents.") -.required(true) -.allowableValues(DocumentType.values()) -.defaultValue(DocumentType.Json.toString()) -.build(); - -public static final PropertyDescriptor DOC_ID = new PropertyDescriptor.Builder().name("Document Id") +public static final PropertyDescriptor DOC_ID = new PropertyDescriptor.Builder() +.name("document-id") +.displayName("Document Id") .description("A static, fixed Couchbase document id, or an expression to construct the Couchbase document id.") .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .build(); -public static final Relationship REL_SUCCESS = new Relationship.Builder() -.name("success") -.description("All FlowFiles that are written to Couchbase Server are routed to this relationship.") -.build(); -public static final Relationship REL_ORIGINAL = new Relationship.Builder() -.name("original") -.description("The original input file will be routed to this destination when it has been successfully processed.") -.build(); -public static final 
Relationship REL_RETRY = new Relationship.Builder() -.name("retry") -.description("All FlowFiles that cannot written to Couchbase Server but can be retried are routed to this relationship.") -.build(); -public static final Relationship REL_FAILURE = new Relationship.Builder() -.name("failure") -.description("All FlowFiles that cannot written to Couchbase Server and can't be retried are routed to this relationship.") -.build(); - -public static final PropertyDescriptor COUCHBASE_CLUSTER_SERVICE = new PropertyDescriptor.Builder().name("Couchbase Cluster Controller Service") -.description("A Couchbase Cluster Controller Service which manages connections to a Couchbase cluster.") -.required(true) - .identifiesControllerService(CouchbaseClusterControllerService.class) -.build(); +public static final Relationship.Builder RELB_SUCCESS = new Relationship.Builder().name("success"); +public static final Relationship.Builder RELB_ORIGINAL = new Relationship.Builder().name("original"); +public static final Relationship.Builder RELB_RETRY = new Relationship.Builder().name("retry"); +public static final Relationship.Builder RELB_FAILURE = new Relationship.Builder().name("failure"); -public static final PropertyDescriptor BUCKET_NAME = new PropertyDescriptor.Builder().name("Bucket Name") -.description("The name of bucket to access.") -.required(true) -.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) -.defaultValue("default") -.build(); +public static final Relationship REL_ORIGINAL = RELB_ORIGINAL.build(); +public static final Relationship REL_SUCCESS = RELB_SUCCESS.build(); +public static final Relationship REL_RETRY = RELB_RETRY.build(); +public static final Relationship REL_FAILURE = RELB_FAILURE.build(); --- End diff -- While the issue does not surface due to the way `getRelationships` is invoked, as `static` fields I believe these `Relationship` and `Relationship.Builder` variables are shared across any implementations of an `AbstractCouchbaseProcessor`. 
Because they are shared, when one implementation sets a description, it would be reflected in every other implementation. With an approach like this, it probably makes sense to make these fields non-`static`. ---
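The shared-static pitfall described in the review comment can be reproduced with a simplified stand-in. `SimpleBuilder` below is hypothetical, but it is mutable in the same way as the builder under review: one `static` builder instance shared by all subclasses means the last `description(...)` call wins for everyone.

```java
// Minimal stand-in for a mutable builder exposed through a static field.
class SimpleBuilder {
    private String name;
    private String description;

    SimpleBuilder name(String name) { this.name = name; return this; }
    SimpleBuilder description(String description) { this.description = description; return this; }
    String build() { return name + ": " + description; }
}

public class SharedBuilderPitfall {
    // A single builder instance shared by every subclass of the abstract processor.
    static final SimpleBuilder REL_SUCCESS_BUILDER = new SimpleBuilder().name("success");

    static String describeForGet() {
        // One subclass customizes the description...
        return REL_SUCCESS_BUILDER.description("Fetched documents go here.").build();
    }

    static String describeForPut() {
        // ...and another overwrites the very same builder state.
        return REL_SUCCESS_BUILDER.description("Stored documents go here.").build();
    }

    public static void main(String[] args) {
        // Each call mutates the one shared object; making the builder fields
        // non-static instead gives every processor its own instance.
        System.out.println(describeForGet()); // success: Fetched documents go here.
        System.out.println(describeForPut()); // success: Stored documents go here.
    }
}
```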
[jira] [Commented] (NIFI-5054) Nifi Couchbase Processors does not support User Authentication
[ https://issues.apache.org/jira/browse/NIFI-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512566#comment-16512566 ] ASF GitHub Bot commented on NIFI-5054: -- Github user mcgilman commented on a diff in the pull request: https://github.com/apache/nifi/pull/2750#discussion_r195453378 --- Diff: nifi-nar-bundles/nifi-couchbase-bundle/nifi-couchbase-processors/src/main/java/org/apache/nifi/processors/couchbase/AbstractCouchbaseProcessor.java --- @@ -39,54 +38,32 @@ import com.couchbase.client.core.CouchbaseException; import com.couchbase.client.java.Bucket; +import static org.apache.nifi.couchbase.CouchbaseConfigurationProperties.BUCKET_NAME; +import static org.apache.nifi.couchbase.CouchbaseConfigurationProperties.COUCHBASE_CLUSTER_SERVICE; + /** - * Provides common functionalities for Couchbase processors. + * Provides common functionality for Couchbase processors. */ public abstract class AbstractCouchbaseProcessor extends AbstractProcessor { -public static final PropertyDescriptor DOCUMENT_TYPE = new PropertyDescriptor.Builder().name("Document Type") -.description("The type of contents.") -.required(true) -.allowableValues(DocumentType.values()) -.defaultValue(DocumentType.Json.toString()) -.build(); - -public static final PropertyDescriptor DOC_ID = new PropertyDescriptor.Builder().name("Document Id") +public static final PropertyDescriptor DOC_ID = new PropertyDescriptor.Builder() +.name("document-id") +.displayName("Document Id") .description("A static, fixed Couchbase document id, or an expression to construct the Couchbase document id.") .expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES) .addValidator(StandardValidators.NON_EMPTY_VALIDATOR) .build(); -public static final Relationship REL_SUCCESS = new Relationship.Builder() -.name("success") -.description("All FlowFiles that are written to Couchbase Server are routed to this relationship.") -.build(); -public static final Relationship REL_ORIGINAL = new 
Relationship.Builder() -.name("original") -.description("The original input file will be routed to this destination when it has been successfully processed.") -.build(); -public static final Relationship REL_RETRY = new Relationship.Builder() -.name("retry") -.description("All FlowFiles that cannot written to Couchbase Server but can be retried are routed to this relationship.") -.build(); -public static final Relationship REL_FAILURE = new Relationship.Builder() -.name("failure") -.description("All FlowFiles that cannot written to Couchbase Server and can't be retried are routed to this relationship.") -.build(); - -public static final PropertyDescriptor COUCHBASE_CLUSTER_SERVICE = new PropertyDescriptor.Builder().name("Couchbase Cluster Controller Service") -.description("A Couchbase Cluster Controller Service which manages connections to a Couchbase cluster.") -.required(true) - .identifiesControllerService(CouchbaseClusterControllerService.class) -.build(); +public static final Relationship.Builder RELB_SUCCESS = new Relationship.Builder().name("success"); +public static final Relationship.Builder RELB_ORIGINAL = new Relationship.Builder().name("original"); +public static final Relationship.Builder RELB_RETRY = new Relationship.Builder().name("retry"); +public static final Relationship.Builder RELB_FAILURE = new Relationship.Builder().name("failure"); -public static final PropertyDescriptor BUCKET_NAME = new PropertyDescriptor.Builder().name("Bucket Name") -.description("The name of bucket to access.") -.required(true) -.addValidator(StandardValidators.NON_EMPTY_VALIDATOR) -.defaultValue("default") -.build(); +public static final Relationship REL_ORIGINAL = RELB_ORIGINAL.build(); +public static final Relationship REL_SUCCESS = RELB_SUCCESS.build(); +public static final Relationship REL_RETRY = RELB_RETRY.build(); +public static final Relationship REL_FAILURE = RELB_FAILURE.build(); --- End diff -- Additionally, should the visibility of these fields need to be 
public? > Nifi Couchbase Processors does not support User Authentication > -- > > Key: NIFI-5054 > URL: https://issues.apache.org/jira/browse/NIFI-5054 > Project: Apache NiFi >
[jira] [Commented] (NIFI-4907) Provenance authorization refactoring
[ https://issues.apache.org/jira/browse/NIFI-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512549#comment-16512549 ] ASF GitHub Bot commented on NIFI-4907: -- Github user markobean commented on the issue: https://github.com/apache/nifi/pull/2703 I like the proposed changes. It makes the authorization process a bit cleaner. +1 > Provenance authorization refactoring > > > Key: NIFI-4907 > URL: https://issues.apache.org/jira/browse/NIFI-4907 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.5.0 >Reporter: Mark Bean >Assignee: Mark Bean >Priority: Major > > Currently, the 'view the data' component policy is too tightly coupled with > Provenance queries. The 'query provenance' policy should be the only policy > required for viewing Provenance query results. Both 'view the component' and > 'view the data' policies should be used to refine the appropriate visibility > of event details - but not the event itself. > 1) Component Visibility > The authorization of Provenance events is inconsistent with the behavior of > the graph. For example, if a user does not have 'view the component' policy, > the graph shows this component as a "black box" (no details such as name, > UUID, etc.) However, when querying Provenance, this component will show up > including the Component Type and the Component Name. This is in effect a > violation of the policy. These component details should be obscured in the > Provenance event displayed if user does not have the appropriate 'view the > component' policy. > 2) Data Visibility > For a Provenance query, all events should be visible as long as the user > performing the query belongs to the 'query provenance' global policy. As > mentioned above, some information about the component may be obscured > depending on 'view the component' policy, but the event itself should be > visible. 
Additionally, details of the event (clicking the View Details "i" > icon) should only be accessible if the user belongs to the 'view the data' > policy for the affected component. If the user is not in the appropriate > 'view the data' policy, a popup warning should be displayed indicating the > reason details are not visible with more specific detail than the current > "Contact the system administrator". > 3) Lineage Graphs > As with the Provenance table view recommendation above, the lineage graph > should display all events. Currently, if the lineage graph includes an event > belonging to a component which the user does not have 'view the data', it is > shown on the graph as "UNKNOWN". As with Data Visibility mentioned above, the > graph should indicate the event type as long as the user is in the 'view the > component'. Subsequent "View Details" on the event should only be visible if > the user is in the 'view the data' policy. > In summary, for Provenance query results and lineage graphs, all events > should be shown. Component Name and Component Type information should be > conditionally visible depending on the corresponding component policy 'view > the component' policy. Event details including Provenance event type and > FlowFile information should be conditionally available depending on the > corresponding component policy 'view the data'. Inability to display event > details should provide feedback to the user indicating the reason. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512537#comment-16512537 ] ASF GitHub Bot commented on NIFI-5311: -- GitHub user markap14 opened a pull request: https://github.com/apache/nifi/pull/2795 NIFI-5311: When creating a processor, controller service, or reportin… …g task, give the component up to 50 ms to complete validation before returning the DTO Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/markap14/nifi NIFI-5311 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2795.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2795 commit 86881a7b64128e149c70a3c08286de43807fc10a Author: Mark Payne Date: 2018-06-14T14:24:24Z NIFI-5311: When creating a processor, controller service, or reporting task, give the component up to 50 ms to complete validation before returning the DTO > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-5311) Wait a bit for components to finish validation on creation
[ https://issues.apache.org/jira/browse/NIFI-5311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne updated NIFI-5311: - Fix Version/s: 1.7.0 Status: Patch Available (was: Open) > Wait a bit for components to finish validation on creation > -- > > Key: NIFI-5311 > URL: https://issues.apache.org/jira/browse/NIFI-5311 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Mark Payne >Assignee: Mark Payne >Priority: Major > Fix For: 1.7.0 > > > In NIFI-5279, we updated the framework so that we won't return web requests > that update components until either the component's validation completes or > we wait 50 milliseconds. We should do the same when creating components. > Otherwise, we end up seeing "Validating..." quite often when a component is > created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5311) Wait a bit for components to finish validation on creation
Mark Payne created NIFI-5311: Summary: Wait a bit for components to finish validation on creation Key: NIFI-5311 URL: https://issues.apache.org/jira/browse/NIFI-5311 Project: Apache NiFi Issue Type: Improvement Components: Core Framework Reporter: Mark Payne Assignee: Mark Payne In NIFI-5279, we updated the framework so that we won't return web requests that update components until either the component's validation completes or we wait 50 milliseconds. We should do the same when creating components. Otherwise, we end up seeing "Validating..." quite often when a component is created. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
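The "wait up to 50 ms for validation" behavior described in NIFI-5311 can be sketched with a latch-based helper. The names below are hypothetical and NiFi's actual implementation differs; the point is only the bounded wait: return as soon as validation finishes, or after the deadline with the "Validating..." state.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch: block briefly for validation to complete before
// building the response, instead of returning "Validating..." immediately.
public class ValidationWait {
    static String awaitValidation(CountDownLatch validationDone, long maxWaitMillis) {
        try {
            // Wait at most maxWaitMillis; fall through either way.
            boolean finished = validationDone.await(maxWaitMillis, TimeUnit.MILLISECONDS);
            return finished ? "VALID" : "VALIDATING";
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "VALIDATING";
        }
    }

    public static void main(String[] args) {
        CountDownLatch done = new CountDownLatch(1);
        done.countDown(); // validation finished before the request returned
        System.out.println(awaitValidation(done, 50)); // VALID
    }
}
```

Fast validators complete within the window and the DTO carries a settled state; slow ones simply cost the caller the deadline (50 ms here) before showing "Validating..." as before.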
[jira] [Resolved] (NIFI-3242) CRON scheduling can occur twice for the same trigger
[ https://issues.apache.org/jira/browse/NIFI-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Payne resolved NIFI-3242. -- Resolution: Fixed Fix Version/s: 1.7.0 > CRON scheduling can occur twice for the same trigger > > > Key: NIFI-3242 > URL: https://issues.apache.org/jira/browse/NIFI-3242 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joseph Percivall >Priority: Critical > Fix For: 1.7.0 > > > Initially brought up in this message[1] to the user list. > The logic for CRON scheduling is done here[2]. The CRON expression is > evaluated and used to check when to schedule the next trigger[3]. I believe > problems arise due to the approximate nature of the java scheduler and > potentially the millisecond portion of quartz scheduler getting wiped here[4] > can lead to the behavior seen in the mailing list (an extra invocation right > before the correct time). > [1] > http://mail-archives.apache.org/mod_mbox/nifi-users/201612.mbox/%3C1482106268095-481.post%40n4.nabble.com%3E > [2] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L177-L177 > [3] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L180 > [4] > https://github.com/quartz-scheduler/quartz/blob/quartz-2.2.1/quartz-core/src/main/java/org/quartz/CronExpression.java#L1170 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
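The suspected mechanism — an approximate wake-up time from the Java scheduler combined with Quartz wiping the millisecond field before computing the next valid time — can be shown with a few lines of arithmetic. This is an illustrative sketch modeling a once-per-second trigger, not the actual Quartz or NiFi code:

```java
// Numeric sketch of the suspected double-fire: Quartz's CronExpression zeroes
// the millisecond field of the input date before computing the next valid time.
// Modeled here for a once-per-second trigger; illustrative only, not Quartz code.
public class CronDrift {

    // Next whole-second fire time computed from a timestamp whose
    // millisecond portion has been wiped.
    public static long nextFireAfterTruncation(long nowMillis) {
        long truncated = (nowMillis / 1000) * 1000; // millisecond field zeroed
        return truncated + 1000;                    // next second boundary
    }
}
```

If the intended fire time is T and the JVM scheduler wakes the task even 2 ms early, truncation maps the wake time back into the previous second, so the computed next fire time is T again — the same trigger is scheduled, and fires, a second time, matching the extra invocation "right before the correct time" reported on the mailing list.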
[jira] [Commented] (NIFI-3242) CRON scheduling can occur twice for the same trigger
[ https://issues.apache.org/jira/browse/NIFI-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512517#comment-16512517 ] ASF GitHub Bot commented on NIFI-3242: -- Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2789 @mgaido91 thanks for the fix! The code looks good to me. I do believe it addresses the issue. Kinda hard to verify a timing issue but I can verify that the logic appears correct and that everything still seems to work. +1 merged to master. > CRON scheduling can occur twice for the same trigger > > > Key: NIFI-3242 > URL: https://issues.apache.org/jira/browse/NIFI-3242 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joseph Percivall >Priority: Critical > > Initially brought up in this message[1] to the user list. > The logic for CRON scheduling is done here[2]. The CRON expression is > evaluated and used to check when to schedule the next trigger[3]. I believe > problems arise due to the approximate nature of the java scheduler and > potentially the millisecond portion of quartz scheduler getting wiped here[4] > can lead to the behavior seen in the mailing list (an extra invocation right > before the correct time). > [1] > http://mail-archives.apache.org/mod_mbox/nifi-users/201612.mbox/%3C1482106268095-481.post%40n4.nabble.com%3E > [2] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L177-L177 > [3] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L180 > [4] > https://github.com/quartz-scheduler/quartz/blob/quartz-2.2.1/quartz-core/src/main/java/org/quartz/CronExpression.java#L1170 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-3242) CRON scheduling can occur twice for the same trigger
[ https://issues.apache.org/jira/browse/NIFI-3242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512515#comment-16512515 ] ASF GitHub Bot commented on NIFI-3242: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2789 > CRON scheduling can occur twice for the same trigger > > > Key: NIFI-3242 > URL: https://issues.apache.org/jira/browse/NIFI-3242 > Project: Apache NiFi > Issue Type: Bug >Reporter: Joseph Percivall >Priority: Critical > > Initially brought up in this message[1] to the user list. > The logic for CRON scheduling is done here[2]. The CRON expression is > evaluated and used to check when to schedule the next trigger[3]. I believe > problems arise due to the approximate nature of the java scheduler and > potentially the millisecond portion of quartz scheduler getting wiped here[4] > can lead to the behavior seen in the mailing list (an extra invocation right > before the correct time). > [1] > http://mail-archives.apache.org/mod_mbox/nifi-users/201612.mbox/%3C1482106268095-481.post%40n4.nabble.com%3E > [2] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L177-L177 > [3] > https://github.com/apache/nifi/blob/c10d11d378ffd7c306830e24d50c5befc98a/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-framework-core/src/main/java/org/apache/nifi/controller/scheduling/QuartzSchedulingAgent.java#L180 > [4] > https://github.com/quartz-scheduler/quartz/blob/quartz-2.2.1/quartz-core/src/main/java/org/quartz/CronExpression.java#L1170 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2789: NIFI-3242: Avoid double scheduling of a task due to...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2789 ---
[GitHub] nifi issue #2789: NIFI-3242: Avoid double scheduling of a task due to quartz...
Github user markap14 commented on the issue: https://github.com/apache/nifi/pull/2789 @mgaido91 thanks for the fix! The code looks good to me. I do believe it addresses the issue. Kinda hard to verify a timing issue but I can verify that the logic appears correct and that everything still seems to work. +1 merged to master. ---
[jira] [Commented] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512505#comment-16512505 ] Otto Fowler commented on NIFI-5310: --- If I have the csv reader with the default of '\' as the escape character, I get errors in my sample flow. If I change the escape character to something else, like '^' it works. > Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Otto Fowler reassigned NIFI-5310: - Assignee: Otto Fowler > Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Assignee: Otto Fowler >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512498#comment-16512498 ] Otto Fowler commented on NIFI-5310: --- The csv reader has a setting for the escape character, it defaults to '\'. Do you have that set? > Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
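The failure mode is easy to reproduce in isolation: with '\' as the escape character, a field ending in '\' leaves a dangling escape with nothing after it to escape, while any other escape character leaves the trailing backslash untouched. A minimal illustrative unescaper (not the actual CSVReader/commons-csv code) makes this concrete:

```java
// Minimal sketch of escape handling in a CSV field reader (illustrative only,
// not the actual CSVReader or commons-csv implementation).
public class EscapeDemo {

    // Unescapes a raw field using the given escape character.
    // A trailing escape with nothing after it is malformed input.
    public static String unescape(String raw, char escape) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < raw.length(); i++) {
            char c = raw.charAt(i);
            if (c == escape) {
                if (i + 1 >= raw.length()) {
                    throw new IllegalArgumentException("dangling escape at end of field");
                }
                out.append(raw.charAt(++i)); // keep the escaped character literally
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}
```

With escape '\': "John\M" unescapes to "JohnM" but "John\" fails on the dangling escape; with escape '^', "John\" passes through unchanged — mirroring the John\M-passes / John\-fails behavior in the report and why switching the escape character works around it.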
[jira] [Commented] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512478#comment-16512478 ] Otto Fowler commented on NIFI-5310: ---
{code:java}
@Test
public void testSimpleParseWithSlashes() throws IOException, MalformedRecordException {
    final List<RecordField> fields = getDefaultFields();
    fields.replaceAll(f -> f.getFieldName().equals("balance") ? new RecordField("balance", doubleDataType) : f);

    final RecordSchema schema = new SimpleRecordSchema(fields);

    try (final InputStream fis = new FileInputStream(new File("src/test/resources/csv/with_slashes.csv"));
         final CSVRecordReader reader = createReader(fis, schema, format)) {

        final Object[] record = reader.nextRecord().getValues();
        final Object[] expectedValues = new Object[] {"1", "John Doe\\", 4750.89D, "123 My Street", "My City", "MS", "1", "USA"};
        Assert.assertArrayEquals(expectedValues, record);

        assertNull(reader.nextRecord());
    }
}
{code}
Also works
> Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512468#comment-16512468 ] Otto Fowler edited comment on NIFI-5310 at 6/14/18 1:38 PM:
{code:java}
@Test
public void testSimpleWithSlash() throws InitializationException, IOException, SQLException {
    final MockRecordParser parser = new MockRecordParser();
    parser.addSchemaField("name", RecordFieldType.STRING);
    parser.addSchemaField("age", RecordFieldType.INT);
    parser.addRecord("Tom\\", 49);

    final MockRecordWriter writer = new MockRecordWriter("\"name\",\"points\"");

    TestRunner runner = getRunner();
    runner.addControllerService("parser", parser);
    runner.enableControllerService(parser);
    runner.addControllerService("writer", writer);
    runner.enableControllerService(writer);

    runner.setProperty(REL_NAME, "select name, age from FLOWFILE WHERE name <> ''");
    runner.setProperty(QueryRecord.RECORD_READER_FACTORY, "parser");
    runner.setProperty(QueryRecord.RECORD_WRITER_FACTORY, "writer");

    final int numIterations = 1;
    for (int i = 0; i < numIterations; i++) {
        runner.enqueue(new byte[0]);
    }

    runner.setThreadCount(4);
    runner.run(2 * numIterations);

    runner.assertTransferCount(REL_NAME, 1);
    final MockFlowFile out = runner.getFlowFilesForRelationship(REL_NAME).get(0);
    System.out.println(new String(out.toByteArray()));
    out.assertContentEquals("\"name\",\"points\"\n\"Tom\\\",\"49\"\n");
}
{code}
This test works, do you have the nifi-app.log with the full exception?
was (Author: ottobackwards):
{code:java}
@Test
public void testSimpleWithSlash() throws InitializationException, IOException, SQLException {
    final MockRecordParser parser = new MockRecordParser();
    parser.addSchemaField("name", RecordFieldType.STRING);
    parser.addSchemaField("age", RecordFieldType.INT);
    parser.addRecord("Tom\\", 49);

    final MockRecordWriter writer = new MockRecordWriter("\"name\",\"points\"");

    TestRunner runner = getRunner();
    runner.addControllerService("parser", parser);
    runner.enableControllerService(parser);
    runner.addControllerService("writer", writer);
    runner.enableControllerService(writer);

    runner.setProperty(REL_NAME, "select name, age from FLOWFILE WHERE name <> ''");
    runner.setProperty(QueryRecord.RECORD_READER_FACTORY, "parser");
    runner.setProperty(QueryRecord.RECORD_WRITER_FACTORY, "writer");

    final int numIterations = 1;
    for (int i = 0; i < numIterations; i++) {
        runner.enqueue(new byte[0]);
    }

    runner.setThreadCount(4);
    runner.run(2 * numIterations);

    runner.assertTransferCount(REL_NAME, 1);
    final MockFlowFile out = runner.getFlowFilesForRelationship(REL_NAME).get(0);
    System.out.println(new String(out.toByteArray()));
    out.assertContentEquals("\"name\",\"points\"\n\"Tom\\\",\"49\"\n");
}
{code}
This test works, so the issue is probably with reader, do you have the nifi-app.log with the full exception?
> Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
[ https://issues.apache.org/jira/browse/NIFI-5310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512468#comment-16512468 ] Otto Fowler commented on NIFI-5310: ---
{code:java}
@Test
public void testSimpleWithSlash() throws InitializationException, IOException, SQLException {
    final MockRecordParser parser = new MockRecordParser();
    parser.addSchemaField("name", RecordFieldType.STRING);
    parser.addSchemaField("age", RecordFieldType.INT);
    parser.addRecord("Tom\\", 49);

    final MockRecordWriter writer = new MockRecordWriter("\"name\",\"points\"");

    TestRunner runner = getRunner();
    runner.addControllerService("parser", parser);
    runner.enableControllerService(parser);
    runner.addControllerService("writer", writer);
    runner.enableControllerService(writer);

    runner.setProperty(REL_NAME, "select name, age from FLOWFILE WHERE name <> ''");
    runner.setProperty(QueryRecord.RECORD_READER_FACTORY, "parser");
    runner.setProperty(QueryRecord.RECORD_WRITER_FACTORY, "writer");

    final int numIterations = 1;
    for (int i = 0; i < numIterations; i++) {
        runner.enqueue(new byte[0]);
    }

    runner.setThreadCount(4);
    runner.run(2 * numIterations);

    runner.assertTransferCount(REL_NAME, 1);
    final MockFlowFile out = runner.getFlowFilesForRelationship(REL_NAME).get(0);
    System.out.println(new String(out.toByteArray()));
    out.assertContentEquals("\"name\",\"points\"\n\"Tom\\\",\"49\"\n");
}
{code}
This test works, so the issue is probably with reader, do you have the nifi-app.log with the full exception?
> Not able to read record as string type ending with \ (backslash) > > > Key: NIFI-5310 > URL: https://issues.apache.org/jira/browse/NIFI-5310 > Project: Apache NiFi > Issue Type: Bug > Components: Core Framework >Affects Versions: 1.6.0 > Environment: Windows/Linux Both >Reporter: Nishant Gupta >Priority: Critical > Labels: BackSlash, CSV, Nifi, QueryRecord, > Attachments: IssueWithBackSlash.PNG > > > *Processor* - QueryRecord > *RecordReader* - CSVReader > *RecordWriter* - CSVRecordSetWriter > *Data* Type- String > { > "name": "Name", > "type": ["string","null"] > } > *Data - John\ (Failing), John\M(passing)* > *Query* - select Name, ID from FLOWFILE -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #2750: NIFI-5054: Couchbase Authentication, NIFI-5257: Expand Cou...
Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2750 Thanks for the PR @ijokarumawak and thanks for assisting with the review @mgroves! I'll be happy to have a look and help get this merged. ---
[jira] [Commented] (NIFI-5054) Nifi Couchbase Processors does not support User Authentication
[ https://issues.apache.org/jira/browse/NIFI-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512438#comment-16512438 ] ASF GitHub Bot commented on NIFI-5054: -- Github user mcgilman commented on the issue: https://github.com/apache/nifi/pull/2750 Thanks for the PR @ijokarumawak and thanks for assisting with the review @mgroves! I'll be happy to have a look and help get this merged. > Nifi Couchbase Processors does not support User Authentication > -- > > Key: NIFI-5054 > URL: https://issues.apache.org/jira/browse/NIFI-5054 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.5.0, 1.6.0 >Reporter: Shagun Jaju >Assignee: Koji Kawamura >Priority: Major > Labels: authentication, security > > Issue Description: Nifi Couchbase processors don't work with new couchbase > versions 5.0 and 5.1. > New Couchbase Version 5.x has introduced *Role Based Access Control (RBAC),* > a ** new security feature. > # All buckets must now be accessed by a *user*/*password* combination that > has a *role with access rights* to the bucket. > # Buckets no longer use bucket-level passwords > # There is no default bucket and no sample buckets with blank passwords. > # You cannot create a user without a password. > *(Ref:* > https://developer.couchbase.com/documentation/server/5.0/introduction/whats-new.html > [https://blog.couchbase.com/new-sdk-authentication/] ) > > nifi-couchbase-processors : GetCouchbaseKey and PutCouchbaseKey using > Controller Service still uses old authentication mechanism. > * org.apache.nifi.processors.couchbase.GetCouchbaseKey > * org.apache.nifi.processors.couchbase.PutCouchbaseKey > Ref: > [https://github.com/apache/nifi/blob/master/nifi-nar-bundles/nifi-couchbase-bundle/nifi-couchbase-processors/src/main/java/org/apache/nifi/couchbase/CouchbaseClusterService.java#L116] > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512338#comment-16512338 ] ASF GitHub Bot commented on NIFI-5309: -- Github user ammitt90 commented on the issue: https://github.com/apache/nifi/pull/2793 @MikeThomsen Why does the PR say "This pull request is closed, but the ammitt90:NIFI-5309 branch has unmerged commits."? Is there anything I am missing? > Specify Correct Output Format in logger when transferring records to success > in SelectHiveQl Processor > -- > > Key: NIFI-5309 > URL: https://issues.apache.org/jira/browse/NIFI-5309 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Aman Mittal >Priority: Trivial > Labels: easyfix > Fix For: 1.7.0 > > > Get the output Format printed in the app.log from outputFormat variable > instead of hardcoding "Avro" . > > Current logger.info prints : > logger.info("{} contains {} Avro records; transferring to 'success'", new > Object[]\{flowfile, nrOfRows.get()}); > > Instead It should take the outputFormat based on the option selected from the > processor like CSV/AVRO. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
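The fix the ticket describes — substituting the selected output format for the hardcoded "Avro" — amounts to one more placeholder in the log message. A standalone sketch of the resulting message (illustrative helper method; the real processor logs through its SLF4J-style logger rather than building strings):

```java
// Sketch of the parameterized message from the JIRA description, with the
// selected output format (CSV/AVRO) replacing the hardcoded "Avro".
// Variable names mirror the ticket; this is not the exact SelectHiveQL source.
public class LogFormat {

    public static String recordsMessage(String flowfile, long nrOfRows, String outputFormat) {
        return String.format("%s contains %d %s records; transferring to 'success'",
                flowfile, nrOfRows, outputFormat);
    }
}
```

So a flow file routed with CSV output logs "... contains N CSV records ..." instead of misleadingly claiming Avro.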
[GitHub] nifi issue #2793: NIFI-5309 update the logger message to get the output form...
Github user ammitt90 commented on the issue: https://github.com/apache/nifi/pull/2793 @MikeThomsen Why does the PR say "This pull request is closed, but the ammitt90:NIFI-5309 branch has unmerged commits."? Is there anything I am missing? ---
[jira] [Commented] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512322#comment-16512322 ] ASF GitHub Bot commented on NIFI-5309: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2793 > Specify Correct Output Format in logger when transferring records to success > in SelectHiveQl Processor > -- > > Key: NIFI-5309 > URL: https://issues.apache.org/jira/browse/NIFI-5309 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Aman Mittal >Priority: Trivial > Labels: easyfix > Fix For: 1.7.0 > > > Get the output Format printed in the app.log from outputFormat variable > instead of hardcoding "Avro" . > > Current logger.info prints : > logger.info("{} contains {} Avro records; transferring to 'success'", new > Object[]\{flowfile, nrOfRows.get()}); > > Instead It should take the outputFormat based on the option selected from the > processor like CSV/AVRO. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2793: NIFI-5309 update the logger message to get the outp...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2793 ---
[jira] [Resolved] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Thomsen resolved NIFI-5309. Resolution: Fixed > Specify Correct Output Format in logger when transferring records to success > in SelectHiveQl Processor > -- > > Key: NIFI-5309 > URL: https://issues.apache.org/jira/browse/NIFI-5309 > Project: Apache NiFi > Issue Type: Improvement > Components: Core Framework >Reporter: Aman Mittal >Priority: Trivial > Labels: easyfix > Fix For: 1.7.0 > > > Get the output Format printed in the app.log from outputFormat variable > instead of hardcoding "Avro" . > > Current logger.info prints : > logger.info("{} contains {} Avro records; transferring to 'success'", new > Object[]\{flowfile, nrOfRows.get()}); > > Instead It should take the outputFormat based on the option selected from the > processor like CSV/AVRO. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #2792: NIFI-5231: CalculateRecordCount should use 'record....
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2792 ---
[jira] [Commented] (NIFI-5231) Record stats processor
[ https://issues.apache.org/jira/browse/NIFI-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512318#comment-16512318 ] ASF GitHub Bot commented on NIFI-5231: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2792 > Record stats processor > -- > > Key: NIFI-5231 > URL: https://issues.apache.org/jira/browse/NIFI-5231 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > Fix For: 1.7.0 > > > Should the following: > > # Take a record reader. > # Count the # of records and add a record_count attribute to the flowfile. > # Allow user-defined properties that do the following: > ## Map attribute name -> record path. > ## Provide aggregate value counts for each record path statement. > ## Provide total count for record path operation. > ## Put those values on the flowfile as attributes. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
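The listed steps — count the records, then provide aggregate value counts per user-defined record path and put them on the flow file as attributes — can be sketched over a plain list standing in for the values a record path would project (hypothetical helper, not the merged processor code):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the described aggregation: total record count plus per-value
// counts for one projected field. The list is a hypothetical stand-in for
// the values a record-path expression would extract from each record.
public class RecordStats {

    public static Map<String, String> stats(List<String> projectedValues, String attrName) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("record_count", String.valueOf(projectedValues.size())); // total count

        Map<String, Long> counts = new LinkedHashMap<>();
        for (String v : projectedValues) {
            counts.merge(v, 1L, Long::sum); // aggregate count per distinct value
        }
        // one attribute per distinct value, e.g. "type.a" -> "2"
        counts.forEach((v, n) -> attrs.put(attrName + "." + v, String.valueOf(n)));
        return attrs;
    }
}
```

For projected values ["a", "b", "a"] and attribute name "type", this yields record_count=3, type.a=2, type.b=1 as flow-file attributes.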
[GitHub] nifi issue #2723: NIFI-5214 Added REST LookupService
Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2723 @ijokarumawak Added a new commit that should get it to close out. ---
[jira] [Commented] (NIFI-5214) Add a REST lookup service
[ https://issues.apache.org/jira/browse/NIFI-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512314#comment-16512314 ] ASF GitHub Bot commented on NIFI-5214: -- Github user MikeThomsen commented on the issue: https://github.com/apache/nifi/pull/2723 @ijokarumawak Added a new commit that should get it to close out. > Add a REST lookup service > - > > Key: NIFI-5214 > URL: https://issues.apache.org/jira/browse/NIFI-5214 > Project: Apache NiFi > Issue Type: New Feature >Reporter: Mike Thomsen >Assignee: Mike Thomsen >Priority: Major > > * Should have reader API support > * Should be able to drill down through complex XML and JSON responses to a > nested record. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-533) Integer comparison mismatch in ListenHTTP
[ https://issues.apache.org/jira/browse/MINIFICPP-533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512210#comment-16512210 ] ASF GitHub Bot commented on MINIFICPP-533: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/358 MINIFICPP-533 Fixed signed/unsigned integer comparison mismatch
Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?
### For code changes:
- [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?
### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-533 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/358.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #358 commit 845fc9f1960c42fe5656a92e309c2eb64d8c8207 Author: Andrew I. Christianson Date: 2018-06-14T09:13:23Z MINIFICPP-533 Fixed signed/unsigned integer comparison mismatch > Integer comparison mismatch in ListenHTTP > - > > Key: MINIFICPP-533 > URL: https://issues.apache.org/jira/browse/MINIFICPP-533 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > Fix the mismatched size types: > > {{In file included from > /home/achristianson/workspace/nifi-minifi-cpp/extensions/civetweb/processors/ListenHTTP.cpp:21:0:}} > {{/home/achristianson/workspace/nifi-minifi-cpp/extensions/civetweb/processors/ListenHTTP.h: > In member function ‘virtual int64_t > org::apache::nifi::minifi::processors::ListenHTTP::ResponseBodyReadCallback::process(std::shared_ptr)’:}} > {{/home/achristianson/workspace/nifi-minifi-cpp/extensions/civetweb/processors/ListenHTTP.h:139:20: > warning: comparison between signed and unsigned integer expressions > [-Wsign-compare]}} > {{ if (num_read != stream->getSize()) {}} > {{ ~^~~~}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #358: MINIFICPP-533 Fixed signed/unsigned integ...
GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/358 MINIFICPP-533 Fixed signed/unsigned integer comparison mismatch
Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?
### For code changes:
- [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?
### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-533 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/358.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #358 commit 845fc9f1960c42fe5656a92e309c2eb64d8c8207 Author: Andrew I. Christianson Date: 2018-06-14T09:13:23Z MINIFICPP-533 Fixed signed/unsigned integer comparison mismatch ---
[jira] [Commented] (MINIFICPP-534) Add EL support to ExecuteProcess
[ https://issues.apache.org/jira/browse/MINIFICPP-534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512207#comment-16512207 ] ASF GitHub Bot commented on MINIFICPP-534: -- GitHub user achristianson opened a pull request: https://github.com/apache/nifi-minifi-cpp/pull/357 MINIFICPP-534 Added EL support to ExecuteProcess
Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:
### For all changes:
- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with MINIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?
### For code changes:
- [x] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [x] If applicable, have you updated the LICENSE file?
- [x] If applicable, have you updated the NOTICE file?
### For documentation related changes:
- [x] Have you ensured that format looks appropriate for the output in which it is rendered?
### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running: $ git pull https://github.com/achristianson/nifi-minifi-cpp MINIFICPP-534 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-minifi-cpp/pull/357.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #357 commit 4a03fa9acee85def717bf7dd2a46672b73eb9860 Author: Andrew I. Christianson Date: 2018-06-14T09:07:56Z MINIFICPP-534 Added EL support to ExecuteProcess > Add EL support to ExecuteProcess > > > Key: MINIFICPP-534 > URL: https://issues.apache.org/jira/browse/MINIFICPP-534 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Andrew Christianson >Assignee: Andrew Christianson >Priority: Major > > ExecuteProcess needs EL support for the following properties: > * Command > * Command Arguments > * Working Directory -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5310) Not able to read record as string type ending with \ (backslash)
Nishant Gupta created NIFI-5310:
---
Summary: Not able to read record as string type ending with \ (backslash)
Key: NIFI-5310
URL: https://issues.apache.org/jira/browse/NIFI-5310
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Affects Versions: 1.6.0
Environment: Windows/Linux Both
Reporter: Nishant Gupta
Attachments: IssueWithBackSlash.PNG

*Processor* - QueryRecord
*RecordReader* - CSVReader
*RecordWriter* - CSVRecordSetWriter
*Data Type* - String: { "name": "Name", "type": ["string","null"] }
*Data* - John\ (failing), John\M (passing)
*Query* - select Name, ID from FLOWFILE

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
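NiFi's CSVReader defaults to treating the backslash as the escape character, so a string value ending in `\` escapes whatever character follows it. The reported failure mode can be reproduced with any escape-aware CSV parser; the following is a minimal analog using Python's csv module (not NiFi's actual reader):

```python
import csv
import io

# Two records: "John\" (field ends with a backslash) and "John\M".
# With '\' configured as the escape character, the trailing backslash in
# "John\," escapes the delimiter, so the parser swallows the comma and
# merges the two columns into one field: the "failing" case above.
data = "John\\,1\r\nJohn\\M,2\r\n"

rows = list(csv.reader(io.StringIO(data), escapechar="\\"))

print(rows[0])  # the delimiter was escaped: one field instead of two
print(rows[1])  # the escape before 'M' is simply dropped: two fields
```

In the reported flow the backslash sits at the very end of the line, where it escapes the record separator instead of the delimiter, but the mechanism is the same: the escape character consumes the structural character that follows it.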
[jira] [Resolved] (MINIFICPP-536) Add EL support to GetFile
[ https://issues.apache.org/jira/browse/MINIFICPP-536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Christianson resolved MINIFICPP-536.
-------------------------------------------
Resolution: Fixed

> Add EL support to GetFile
> -------------------------
>
> Key: MINIFICPP-536
> URL: https://issues.apache.org/jira/browse/MINIFICPP-536
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Andrew Christianson
> Assignee: Andrew Christianson
> Priority: Major
>
> GetFile needs EL support.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (MINIFICPP-535) EL should support absence of flow files
[ https://issues.apache.org/jira/browse/MINIFICPP-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Christianson resolved MINIFICPP-535.
-------------------------------------------
Resolution: Fixed

> EL should support absence of flow files
> ---------------------------------------
>
> Key: MINIFICPP-535
> URL: https://issues.apache.org/jira/browse/MINIFICPP-535
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Andrew Christianson
> Assignee: Andrew Christianson
> Priority: Major
>
> EL needs to work in the case that there are no flow files; many EL functions make sense in the absence of a flow file.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (NIFI-4579) When Strict Type Checking property is set to "false", ValidateRecord does not coerce fields into the correct type.
[ https://issues.apache.org/jira/browse/NIFI-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Koji Kawamura updated NIFI-4579:
--------------------------------
Status: Patch Available (was: In Progress)

> When Strict Type Checking property is set to "false", ValidateRecord does not
> coerce fields into the correct type.
> ------------------------------------------------------------------------------
>
> Key: NIFI-4579
> URL: https://issues.apache.org/jira/browse/NIFI-4579
> Project: Apache NiFi
> Issue Type: Bug
> Components: Documentation Website, Extensions
> Affects Versions: 1.4.0
> Reporter: Andrew Lim
> Assignee: Koji Kawamura
> Priority: Major
>
> The description of the Strict Type Checking property for the ValidateRecord processor states:
> _If false, the Record will be considered valid and the field will be coerced into the correct type (if possible, according to the type coercion supported by the Record Writer)._
> In my testing I've confirmed that in this scenario, the records are considered valid. But none of the record fields are coerced into the correct type.
> We should either correct the documentation or implement the promised coercion functionality.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
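The behavior the documentation promises can be pictured as schema-driven coercion applied during validation. The following is a hypothetical sketch of that idea (the schema, field names, and passthrough policy are invented for illustration; this is not NiFi's implementation):

```python
# Hypothetical schema: maps field names to the target type.
SCHEMA = {"id": int, "price": float, "name": str}

def coerce_record(record, schema):
    """Coerce each field to the schema's type where possible; in a
    non-strict mode, uncoercible values are passed through unchanged."""
    out = {}
    for field, target in schema.items():
        value = record.get(field)
        try:
            out[field] = target(value)
        except (TypeError, ValueError):
            out[field] = value  # leave as-is if coercion is impossible
    return out

# A record reader (e.g. CSV) typically yields every field as a string:
print(coerce_record({"id": "42", "price": "9.99", "name": "widget"}, SCHEMA))
# -> {'id': 42, 'price': 9.99, 'name': 'widget'}
```

The bug report amounts to saying that, with Strict Type Checking off, records flow through as the all-strings input above rather than the coerced output.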
[jira] [Commented] (NIFI-4579) When Strict Type Checking property is set to "false", ValidateRecord does not coerce fields into the correct type.
[ https://issues.apache.org/jira/browse/NIFI-4579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512041#comment-16512041 ]

ASF GitHub Bot commented on NIFI-4579:
--------------------------------------

GitHub user ijokarumawak opened a pull request:

https://github.com/apache/nifi/pull/2794

NIFI-4579: Fix ValidateRecord type coercing

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:

- [x] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [x] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [x] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [x] Is your initial contribution a single, squashed commit?

### For code changes:

- [x] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [x] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:

- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:

Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijokarumawak/nifi nifi-4579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2794.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #2794

commit e0d72610ea850415742edec22136f0fe25267918
Author: Koji Kawamura
Date: 2018-06-14T06:39:17Z

NIFI-4579: Fix ValidateRecord type coercing

> When Strict Type Checking property is set to "false", ValidateRecord does not
> coerce fields into the correct type.
> ------------------------------------------------------------------------------
>
> Key: NIFI-4579
> URL: https://issues.apache.org/jira/browse/NIFI-4579
> Project: Apache NiFi
> Issue Type: Bug
> Components: Documentation Website, Extensions
> Affects Versions: 1.4.0
> Reporter: Andrew Lim
> Assignee: Koji Kawamura
> Priority: Major
>
> The description of the Strict Type Checking property for the ValidateRecord processor states:
> _If false, the Record will be considered valid and the field will be coerced into the correct type (if possible, according to the type coercion supported by the Record Writer)._
> In my testing I've confirmed that in this scenario, the records are considered valid. But none of the record fields are coerced into the correct type.
> We should either correct the documentation or implement the promised coercion functionality.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512036#comment-16512036 ]

Aman Mittal commented on NIFI-5309:
-----------------------------------

https://github.com/apache/nifi/pull/2793

> Specify Correct Output Format in logger when transferring records to success
> in SelectHiveQl Processor
> ----------------------------------------------------------------------------
>
> Key: NIFI-5309
> URL: https://issues.apache.org/jira/browse/NIFI-5309
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Reporter: Aman Mittal
> Priority: Trivial
> Labels: easyfix
> Fix For: 1.7.0
>
> Print the output format in the app.log from the outputFormat variable instead of hardcoding "Avro".
>
> The current logger.info call prints:
> logger.info("{} contains {} Avro records; transferring to 'success'", new Object[]{flowfile, nrOfRows.get()});
>
> Instead, it should use the outputFormat selected in the processor (CSV/Avro).

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aman Mittal updated NIFI-5309:
------------------------------
Comment: was deleted

(was: https://github.com/apache/nifi/pull/2793)

> Specify Correct Output Format in logger when transferring records to success
> in SelectHiveQl Processor
> ----------------------------------------------------------------------------
>
> Key: NIFI-5309
> URL: https://issues.apache.org/jira/browse/NIFI-5309
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Reporter: Aman Mittal
> Priority: Trivial
> Labels: easyfix
> Fix For: 1.7.0
>
> Print the output format in the app.log from the outputFormat variable instead of hardcoding "Avro".
>
> The current logger.info call prints:
> logger.info("{} contains {} Avro records; transferring to 'success'", new Object[]{flowfile, nrOfRows.get()});
>
> Instead, it should use the outputFormat selected in the processor (CSV/Avro).

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5309) Specify Correct Output Format in logger when transferring records to success in SelectHiveQl Processor
[ https://issues.apache.org/jira/browse/NIFI-5309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512034#comment-16512034 ]

ASF GitHub Bot commented on NIFI-5309:
--------------------------------------

GitHub user ammitt90 opened a pull request:

https://github.com/apache/nifi/pull/2793

NIFI-5309 Update the logger message to get the output format from the outputFormat variable

Updated the logger info message that prints the output format of flow file records. With this change, the SelectHiveQL processor's log reports the output format based on the outputFormat variable's value.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ammitt90/nifi NIFI-5309

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/nifi/pull/2793.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #2793

commit f1ae2761d3221016665768d6f59f58d9212910dd
Author: amitt90
Date: 2018-06-14T06:16:46Z

NIFI-5309 update the logger message to get the output format from the outputFormat variable

> Specify Correct Output Format in logger when transferring records to success
> in SelectHiveQl Processor
> ----------------------------------------------------------------------------
>
> Key: NIFI-5309
> URL: https://issues.apache.org/jira/browse/NIFI-5309
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Reporter: Aman Mittal
> Priority: Trivial
> Labels: easyfix
> Fix For: 1.7.0
>
> Print the output format in the app.log from the outputFormat variable instead of hardcoding "Avro".
>
> The current logger.info call prints:
> logger.info("{} contains {} Avro records; transferring to 'success'", new Object[]{flowfile, nrOfRows.get()});
>
> Instead, it should use the outputFormat selected in the processor (CSV/Avro).

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
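The change described above amounts to parameterizing the format name in the log message instead of hardcoding "Avro". A minimal Python analog of that idea (the function name and arguments are illustrative; the actual fix lives in the Java SelectHiveQL processor):

```python
def transfer_log_message(flowfile, n_rows, output_format):
    # Before (hardcoded): "... contains {n} Avro records ..."
    # After: the format name comes from the processor's configured output format.
    return f"{flowfile} contains {n_rows} {output_format} records; transferring to 'success'"

print(transfer_log_message("FlowFile[id=abc]", 120, "CSV"))
# -> FlowFile[id=abc] contains 120 CSV records; transferring to 'success'
```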
[GitHub] nifi pull request #2747: NIFI-5249 Dockerfile enhancements
Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r195312869

--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -26,23 +26,33 @@ ARG NIFI_BINARY
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
-
-# Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: -f1` \
-&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
-&& mkdir -p $NIFI_HOME/conf/templates \
-&& chown -R nifi:nifi $NIFI_BASE_DIR
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 ADD $NIFI_BINARY $NIFI_BASE_DIR
-RUN chown -R nifi:nifi $NIFI_HOME
+# Setup NiFi user and create necessary directories
+RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \
+&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
+&& mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
+&& chown -R nifi:nifi ${NIFI_BASE_DIR} \
+&& apt-get update \
+&& apt-get install -y jq xmlstarlet procps
 USER nifi
+# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
+RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh
+
-# Web HTTP Port & Remote Site-to-Site Ports
-EXPOSE 8080 8181
+# Web HTTP(s) & Socket Site-to-Site Ports
+EXPOSE 8080 8443 1
-WORKDIR $NIFI_HOME
+WORKDIR ${NIFI_HOME}
 # Startup NiFi
 ENTRYPOINT ["bin/nifi.sh"]
-CMD ["run"]
+CMD ["run"]
--- End diff --

I've just pushed what hopefully fixes this issue once and for all. Entrypoint and command instructions have their gotchas, and we have to be careful with those.

I've also added a comment explaining what I changed around the entrypoint and why; please see the commit: https://github.com/apache/nifi/pull/2747/commits/8aef89bfd3ec7d1771e6dd835c53a1ba1f61dda3#diff-2cef119cd914e1b710d41b387a0b72b2R61

---
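One of the gotchas alluded to in the discussion above: ENTRYPOINT and CMD combine at run time, and CMD is the part users override. A minimal illustration of the semantics under that pattern (a generic sketch of Docker's documented behavior, not the actual NiFi image):

```dockerfile
# ENTRYPOINT is the fixed executable; CMD supplies its default arguments.
ENTRYPOINT ["bin/nifi.sh"]
CMD ["run"]

# docker run <image>                 -> executes: bin/nifi.sh run
# docker run <image> status          -> executes: bin/nifi.sh status (CMD replaced)
# docker run --entrypoint sh <image> -> the image's CMD default is discarded too
```

Using the exec (JSON array) form for both instructions matters here: the shell form would wrap the command in `/bin/sh -c`, which changes signal handling and how overrides compose.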
[jira] [Commented] (NIFI-5249) Dockerfile enhancements
[ https://issues.apache.org/jira/browse/NIFI-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16512018#comment-16512018 ]

ASF GitHub Bot commented on NIFI-5249:
--------------------------------------

Github user pepov commented on a diff in the pull request:

https://github.com/apache/nifi/pull/2747#discussion_r195311227

--- Diff: nifi-docker/dockermaven/Dockerfile ---
@@ -26,23 +26,33 @@ ARG NIFI_BINARY
 ENV NIFI_BASE_DIR /opt/nifi
 ENV NIFI_HOME $NIFI_BASE_DIR/nifi-$NIFI_VERSION
-
-# Setup NiFi user
-RUN groupadd -g $GID nifi || groupmod -n nifi `getent group $GID | cut -d: -f1` \
-&& useradd --shell /bin/bash -u $UID -g $GID -m nifi \
-&& mkdir -p $NIFI_HOME/conf/templates \
-&& chown -R nifi:nifi $NIFI_BASE_DIR
+ENV NIFI_PID_DIR=${NIFI_HOME}/run
+ENV NIFI_LOG_DIR=${NIFI_HOME}/logs
 ADD $NIFI_BINARY $NIFI_BASE_DIR
-RUN chown -R nifi:nifi $NIFI_HOME
+# Setup NiFi user and create necessary directories
+RUN groupadd -g ${GID} nifi || groupmod -n nifi `getent group ${GID} | cut -d: -f1` \
+&& useradd --shell /bin/bash -u ${UID} -g ${GID} -m nifi \
+&& mkdir -p ${NIFI_HOME}/conf/templates \
+&& mkdir -p $NIFI_BASE_DIR/data \
+&& mkdir -p $NIFI_BASE_DIR/flowfile_repository \
+&& mkdir -p $NIFI_BASE_DIR/content_repository \
+&& mkdir -p $NIFI_BASE_DIR/provenance_repository \
+&& mkdir -p $NIFI_LOG_DIR \
+&& chown -R nifi:nifi ${NIFI_BASE_DIR} \
+&& apt-get update \
+&& apt-get install -y jq xmlstarlet procps
 USER nifi
+# Clear nifi-env.sh in favour of configuring all environment variables in the Dockerfile
+RUN echo "#!/bin/sh\n" > $NIFI_HOME/bin/nifi-env.sh
+
-# Web HTTP Port & Remote Site-to-Site Ports
-EXPOSE 8080 8181
+# Web HTTP(s) & Socket Site-to-Site Ports
+EXPOSE 8080 8443 1
-WORKDIR $NIFI_HOME
+WORKDIR ${NIFI_HOME}
 # Startup NiFi
 ENTRYPOINT ["bin/nifi.sh"]
-CMD ["run"]
+CMD ["run"]
--- End diff --

You're right, I totally missed that. The biggest problem is that this is even the case with the Docker Hub image, which is much more painful.

I think I know what the problem is and I'm working on the fix.

> Dockerfile enhancements
> -----------------------
>
> Key: NIFI-5249
> URL: https://issues.apache.org/jira/browse/NIFI-5249
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Docker
> Reporter: Peter Wilcsinszky
> Priority: Minor
>
> * make environment variables more explicit
> * create data and log directories
> * add procps for process visibility inside the container

-- This message was sent by Atlassian JIRA (v7.6.3#76005)