[GitHub] nifi-registry pull request #26: [NIFIREG-41] increase disconnect tolerance a...
GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/26 [NIFIREG-41] increase disconnect tolerance and use trusty os, add tra… …vis_wait 30 You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi-registry travis Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/26.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #26 commit 7b1333efa3a51f3d00203c4849c7f76db8eaf169 Author: Scott Aslan Date: 2017-10-20T20:35:10Z [NIFIREG-41] increase disconnect tolerance and use trusty os, add travis_wait 30 ---
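For context, the changes named in the PR title would look roughly like the following `.travis.yml` fragment. This is a hedged sketch based only on the title ("use trusty os, add travis_wait 30"); the exact keys and build command in the actual PR may differ, and the "disconnect tolerance" part is a Karma browser setting rather than a Travis key.

```yaml
# Sketch of the Travis changes described in the PR title (illustrative only).
dist: trusty                           # build on the Trusty image
script:
  # travis_wait extends the no-output timeout to 30 minutes so long
  # headless-Chrome test runs are not killed prematurely.
  - travis_wait 30 mvn clean install
```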
[jira] [Commented] (NIFIREG-41) Update Travis config to eliminate Chrome disconnection issue
[ https://issues.apache.org/jira/browse/NIFIREG-41?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213192#comment-16213192 ] ASF GitHub Bot commented on NIFIREG-41: --- GitHub user scottyaslan opened a pull request: https://github.com/apache/nifi-registry/pull/26 [NIFIREG-41] increase disconnect tolerance and use trusty os, add tra… …vis_wait 30 You can merge this pull request into a Git repository by running: $ git pull https://github.com/scottyaslan/nifi-registry travis Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi-registry/pull/26.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #26 commit 7b1333efa3a51f3d00203c4849c7f76db8eaf169 Author: Scott Aslan Date: 2017-10-20T20:35:10Z [NIFIREG-41] increase disconnect tolerance and use trusty os, add travis_wait 30 > Update Travis config to eliminate Chrome disconnection issue > > > Key: NIFIREG-41 > URL: https://issues.apache.org/jira/browse/NIFIREG-41 > Project: NiFi Registry > Issue Type: Sub-task >Reporter: Scott Aslan >Assignee: Scott Aslan >Priority: Minor > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFIREG-41) Update Travis config to eliminate Chrome disconnection issue
Scott Aslan created NIFIREG-41: -- Summary: Update Travis config to eliminate Chrome disconnection issue Key: NIFIREG-41 URL: https://issues.apache.org/jira/browse/NIFIREG-41 Project: NiFi Registry Issue Type: Sub-task Reporter: Scott Aslan Assignee: Scott Aslan Priority: Minor -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.
[ https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213170#comment-16213170 ] ASF subversion and git services commented on MINIFICPP-113: --- Commit dd1024081e7b9b55656b60876354484869b37d8f in nifi-minifi-cpp's branch refs/heads/master from Marc Parisi [ https://git-wip-us.apache.org/repos/asf?p=nifi-minifi-cpp.git;h=dd10240 ] MINIFI-372: Resolve issues with missed commits MINIFI-372: remove shared build from rocksdb This closes #150 Signed-off-by: Bin Qiu > Move from LevelDB to Rocks DB for all repositories. > > > Key: MINIFICPP-113 > URL: https://issues.apache.org/jira/browse/MINIFICPP-113 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: marco polo >Priority: Minor > > Can also be used as a file system repo where we want to minimize the number > of inodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.
[ https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213172#comment-16213172 ] ASF GitHub Bot commented on MINIFICPP-113: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/150 > Move from LevelDB to Rocks DB for all repositories. > > > Key: MINIFICPP-113 > URL: https://issues.apache.org/jira/browse/MINIFICPP-113 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: marco polo >Priority: Minor > > Can also be used as a file system repo where we want to minimize the number > of inodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.
[ https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213171#comment-16213171 ] ASF subversion and git services commented on MINIFICPP-113: --- Commit dd1024081e7b9b55656b60876354484869b37d8f in nifi-minifi-cpp's branch refs/heads/master from Marc Parisi [ https://git-wip-us.apache.org/repos/asf?p=nifi-minifi-cpp.git;h=dd10240 ] MINIFI-372: Resolve issues with missed commits MINIFI-372: remove shared build from rocksdb This closes #150 Signed-off-by: Bin Qiu > Move from LevelDB to Rocks DB for all repositories. > > > Key: MINIFICPP-113 > URL: https://issues.apache.org/jira/browse/MINIFICPP-113 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: marco polo >Priority: Minor > > Can also be used as a file system repo where we want to minimize the number > of inodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #150: MINIFI-372: Resolve issues with missed co...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/150 ---
[jira] [Commented] (MINIFICPP-72) Add tar and compression support for MergeContent
[ https://issues.apache.org/jira/browse/MINIFICPP-72?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213159#comment-16213159 ] ASF GitHub Bot commented on MINIFICPP-72: - Github user minifirocks closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/146 > Add tar and compression support for MergeContent > > > Key: MINIFICPP-72 > URL: https://issues.apache.org/jira/browse/MINIFICPP-72 > Project: NiFi MiNiFi C++ > Issue Type: New Feature >Affects Versions: 1.0.0 >Reporter: bqiu > Fix For: 1.0.0 > > > Add tar and compression support for MergeContent > will use the https://www.libarchive.org -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.
[ https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213157#comment-16213157 ] ASF GitHub Bot commented on MINIFICPP-113: -- Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/150 @phrocker please merge these two commits into one > Move from LevelDB to Rocks DB for all repositories. > > > Key: MINIFICPP-113 > URL: https://issues.apache.org/jira/browse/MINIFICPP-113 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: marco polo >Priority: Minor > > Can also be used as a file system repo where we want to minimize the number > of inodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp pull request #146: MINIFICPP-72: Add Tar and Zip Support for...
Github user minifirocks closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/146 ---
[GitHub] nifi-minifi-cpp issue #150: MINIFI-372: Resolve issues with missed commits
Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/150 @phrocker please merge these two commits into one ---
[jira] [Commented] (MINIFICPP-113) Move from LevelDB to Rocks DB for all repositories.
[ https://issues.apache.org/jira/browse/MINIFICPP-113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213152#comment-16213152 ] ASF GitHub Bot commented on MINIFICPP-113: -- Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/150 @phrocker looks good. > Move from LevelDB to Rocks DB for all repositories. > > > Key: MINIFICPP-113 > URL: https://issues.apache.org/jira/browse/MINIFICPP-113 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: marco polo >Assignee: marco polo >Priority: Minor > > Can also be used as a file system repo where we want to minimize the number > of inodes. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi-minifi-cpp issue #150: MINIFI-372: Resolve issues with missed commits
Github user minifirocks commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/150 @phrocker looks good. ---
[jira] [Assigned] (NIFI-4506) Add date functions to Record Path
[ https://issues.apache.org/jira/browse/NIFI-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende reassigned NIFI-4506: - Assignee: Bryan Bende > Add date functions to Record Path > - > > Key: NIFI-4506 > URL: https://issues.apache.org/jira/browse/NIFI-4506 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0, 1.4.0 >Reporter: Bryan Bende >Assignee: Bryan Bende >Priority: Minor > > We should support some date related functions in record path. At a minimum I > think having a format function like: > {code} > format( /someField, '-MM-dd', defaultValue) > {code} > The main use case for this is using PartitionRecord to partition by month, > day, or hour on a date field. > Currently you have treat the date as a string and use a sub-string operation > to get the part you are interested in, which also assumes the date is in a > string form in the first place. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
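The proposed record-path `format` function boils down to rendering a date field with a Java date pattern so PartitionRecord can group by year, month, day, or hour. A minimal sketch of that idea, using plain `SimpleDateFormat` (the class name, method, and UTC assumption here are illustrative, not the NiFi record-path API):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

// Illustrative sketch: turn a record's date field into a partition key
// string, the way the proposed format(/someField, pattern) would.
public class DatePartitionSketch {

    static String partitionKey(Date fieldValue, String pattern) {
        SimpleDateFormat sdf = new SimpleDateFormat(pattern);
        // Pin the zone so the key is deterministic; NiFi's behavior here
        // is an assumption of this sketch.
        sdf.setTimeZone(TimeZone.getTimeZone("UTC"));
        return sdf.format(fieldValue);
    }

    public static void main(String[] args) {
        Date ts = new Date(0L); // epoch: 1970-01-01T00:00:00Z
        System.out.println(partitionKey(ts, "yyyy-MM-dd")); // 1970-01-01
        System.out.println(partitionKey(ts, "yyyy-MM"));    // 1970-01
    }
}
```

This avoids the string-substring workaround the ticket describes, since the field is handled as a real date rather than assumed to already be a string.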
[jira] [Commented] (NIFI-4506) Add date functions to Record Path
[ https://issues.apache.org/jira/browse/NIFI-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16213099#comment-16213099 ] ASF GitHub Bot commented on NIFI-4506: -- GitHub user bbende opened a pull request: https://github.com/apache/nifi/pull/2221 NIFI-4506 Adding toDate and format functions to record path Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi NIFI-4506 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2221.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2221 commit 228c129fd8ead682a730505d996e70f9fb761e0f Author: Bryan Bende Date: 2017-10-20T18:08:56Z NIFI-4506 Adding toDate and format functions to record path > Add date functions to Record Path > - > > Key: NIFI-4506 > URL: https://issues.apache.org/jira/browse/NIFI-4506 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.2.0, 1.3.0, 1.4.0 >Reporter: Bryan Bende >Priority: Minor > > We should support some date related functions in record path. At a minimum I > think having a format function like: > {code} > format( /someField, '-MM-dd', defaultValue) > {code} > The main use case for this is using PartitionRecord to partition by month, > day, or hour on a date field. > Currently you have treat the date as a string and use a sub-string operation > to get the part you are interested in, which also assumes the date is in a > string form in the first place. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4506) Add date functions to Record Path
[ https://issues.apache.org/jira/browse/NIFI-4506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bryan Bende updated NIFI-4506: -- Status: Patch Available (was: Open) > Add date functions to Record Path > - > > Key: NIFI-4506 > URL: https://issues.apache.org/jira/browse/NIFI-4506 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.4.0, 1.3.0, 1.2.0 >Reporter: Bryan Bende >Priority: Minor > > We should support some date related functions in record path. At a minimum I > think having a format function like: > {code} > format( /someField, '-MM-dd', defaultValue) > {code} > The main use case for this is using PartitionRecord to partition by month, > day, or hour on a date field. > Currently you have treat the date as a string and use a sub-string operation > to get the part you are interested in, which also assumes the date is in a > string form in the first place. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2221: NIFI-4506 Adding toDate and format functions to rec...
GitHub user bbende opened a pull request: https://github.com/apache/nifi/pull/2221 NIFI-4506 Adding toDate and format functions to record path Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [ ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [ ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [ ] Is your initial contribution a single, squashed commit? ### For code changes: - [ ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [ ] Have you written or updated unit tests to verify your changes? - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [ ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [ ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [ ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [ ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. 
You can merge this pull request into a Git repository by running: $ git pull https://github.com/bbende/nifi NIFI-4506 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2221.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2221 commit 228c129fd8ead682a730505d996e70f9fb761e0f Author: Bryan Bende Date: 2017-10-20T18:08:56Z NIFI-4506 Adding toDate and format functions to record path ---
[jira] [Commented] (MINIFICPP-262) Rocksdb fails to link
[ https://issues.apache.org/jira/browse/MINIFICPP-262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212972#comment-16212972 ] Caleb Johnson commented on MINIFICPP-262: - [~phrocker] can you test the [civetweb-1.10 branch|https://github.com/NiFiLocal/nifi-minifi-cpp/tree/civetweb-1.10]? I initially updated civet, but got carried away and updated rocksdb as well. I had to change some CMakeLists because both registered the "check" target. I also need to move the civet and rocksdb tests out of the "test" target, and the prevent the rocksdb tests from building in the default target. Other than those issues, it builds and runs OK for me. > Rocksdb fails to link > - > > Key: MINIFICPP-262 > URL: https://issues.apache.org/jira/browse/MINIFICPP-262 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson > > Rocksdb fails to link when building on CentOS 7.4. [~calebj] seems to be > having the same issue on Ubuntu 14.04 as part of MINIFI-244. > {code} > [ 38%] Linking CXX static library librocksdb.a > [ 38%] Linking CXX shared library librocksdb.so > [ 60%] Built target rocksdb > [ 61%] Building CXX object > CMakeFiles/Tests.dir/thirdparty/spdlog-20170710/include/spdlog/dummy.cpp.o > [ 61%] Building CXX object > CMakeFiles/ControllerServiceIntegrationTests.dir/libminifi/test/TestBase.cpp.o > /usr/bin/ld: CMakeFiles/build_version.dir/__/__/build_version.cc.o: > relocation R_X86_64_32 against `.data' can not be used when making a shared > object; recompile with -fPIC > CMakeFiles/build_version.dir/__/__/build_version.cc.o: error adding symbols: > Bad value > collect2: error: ld returned 1 exit status > gmake[2]: *** [thirdparty/rocksdb/librocksdb.so.5.7.0] Error 1 > gmake[1]: *** [thirdparty/rocksdb/CMakeFiles/rocksdb-shared.dir/all] Error 2 > gmake[1]: *** Waiting for unfinished jobs > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
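The linker error quoted above (`relocation R_X86_64_32 against '.data' can not be used when making a shared object; recompile with -fPIC`) is the classic symptom of non-position-independent objects being pulled into a shared library. One common CMake-level fix is sketched below; the target names are assumptions, not necessarily what the MiNiFi build files use:

```cmake
# Sketch of a common fix for the -fPIC relocation error (illustrative).
# Build all objects position-independent so they can go into librocksdb.so:
set(CMAKE_POSITION_INDEPENDENT_CODE ON)

# Or scope it to the offending third-party target (name assumed here):
# set_target_properties(rocksdb PROPERTIES POSITION_INDEPENDENT_CODE ON)
```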
[jira] [Commented] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures
[ https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212834#comment-16212834 ] ASF GitHub Bot commented on NIFI-3689: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2214 > TestWriteAheadStorePartition frequently causes travis failures > -- > > Key: NIFI-3689 > URL: https://issues.apache.org/jira/browse/NIFI-3689 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andre F de Miranda >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.5.0 > > > Apologies if this happens to be a duplicate (I did a search but could not > find anything similar) > I notice a number of travis builds seem to fail during the execution of > TestWriteAheadStorePartition > https://travis-ci.org/apache/nifi/jobs/220619894 > https://travis-ci.org/apache/nifi/jobs/220671755 > https://travis-ci.org/apache/nifi/jobs/217435194 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures
[ https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-3689: - Resolution: Fixed Status: Resolved (was: Patch Available) > TestWriteAheadStorePartition frequently causes travis failures > -- > > Key: NIFI-3689 > URL: https://issues.apache.org/jira/browse/NIFI-3689 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andre F de Miranda >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.5.0 > > > Apologies if this happens to be a duplicate (I did a search but could not > find anything similar) > I notice a number of travis builds seem to fail during the execution of > TestWriteAheadStorePartition > https://travis-ci.org/apache/nifi/jobs/220619894 > https://travis-ci.org/apache/nifi/jobs/220671755 > https://travis-ci.org/apache/nifi/jobs/217435194 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi pull request #2214: NIFI-3689: Fixed threading bug in TestWriteAheadSto...
Github user asfgit closed the pull request at: https://github.com/apache/nifi/pull/2214 ---
[jira] [Commented] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures
[ https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212833#comment-16212833 ] ASF subversion and git services commented on NIFI-3689: --- Commit 2acf6bdf7ad66a3927c99e5f767f4007eecd9274 in nifi's branch refs/heads/master from [~markap14] [ https://git-wip-us.apache.org/repos/asf?p=nifi.git;h=2acf6bd ] NIFI-3689: Fixed threading bug in TestWriteAheadStorePartition - multiple threads were simultaneously attempting to update HashMap. Changed impl to ConcurrentHashMap. Signed-off-by: Pierre Villard This closes #2214. > TestWriteAheadStorePartition frequently causes travis failures > -- > > Key: NIFI-3689 > URL: https://issues.apache.org/jira/browse/NIFI-3689 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andre F de Miranda >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.5.0 > > > Apologies if this happens to be a duplicate (I did a search but could not > find anything similar) > I notice a number of travis builds seem to fail during the execution of > TestWriteAheadStorePartition > https://travis-ci.org/apache/nifi/jobs/220619894 > https://travis-ci.org/apache/nifi/jobs/220671755 > https://travis-ci.org/apache/nifi/jobs/217435194 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
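The commit message above describes the fix: concurrent threads were updating a plain `HashMap`, and switching to `ConcurrentHashMap` made the updates safe. A self-contained sketch of that pattern (not the actual test code) shows why the concurrent map retains every update:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of the race NIFI-3689 fixed: many threads updating a
// shared map. ConcurrentHashMap.merge() makes each read-modify-write atomic;
// with a plain HashMap the same workload can lose updates or corrupt the map.
public class ConcurrentMapSketch {

    static long concurrentCount(int threads, int perThread) throws InterruptedException {
        Map<String, Long> counts = new ConcurrentHashMap<>();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counts.merge("events", 1L, Long::sum); // atomic increment
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        return counts.get("events");
    }

    public static void main(String[] args) throws InterruptedException {
        // Every increment is retained: 4 threads x 10000 updates = 40000.
        System.out.println(concurrentCount(4, 10_000));
    }
}
```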
[jira] [Commented] (NIFI-3689) TestWriteAheadStorePartition frequently causes travis failures
[ https://issues.apache.org/jira/browse/NIFI-3689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212832#comment-16212832 ] ASF GitHub Bot commented on NIFI-3689: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2214 Ran few builds and confirmed the issue without the fix. Then applied the fix and didn't get any exception after few builds. Merging to master. Thanks! > TestWriteAheadStorePartition frequently causes travis failures > -- > > Key: NIFI-3689 > URL: https://issues.apache.org/jira/browse/NIFI-3689 > Project: Apache NiFi > Issue Type: Bug >Reporter: Andre F de Miranda >Assignee: Mark Payne >Priority: Blocker > Fix For: 1.5.0 > > > Apologies if this happens to be a duplicate (I did a search but could not > find anything similar) > I notice a number of travis builds seem to fail during the execution of > TestWriteAheadStorePartition > https://travis-ci.org/apache/nifi/jobs/220619894 > https://travis-ci.org/apache/nifi/jobs/220671755 > https://travis-ci.org/apache/nifi/jobs/217435194 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2214: NIFI-3689: Fixed threading bug in TestWriteAheadStoreParti...
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2214 Ran a few builds and confirmed the issue without the fix. Then applied the fix and didn't get any exception after a few builds. Merging to master. Thanks! ---
[jira] [Commented] (NIFI-4227) Create a ForkRecord processor
[ https://issues.apache.org/jira/browse/NIFI-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212830#comment-16212830 ] ASF GitHub Bot commented on NIFI-4227: -- Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2037 Hey @markap14, I finally found some time to get back on this guy. I added the "split" mode as you suggested with some unit tests and an example in the additional details doc. Let me know if it looks like what you had in mind. > Create a ForkRecord processor > - > > Key: NIFI-4227 > URL: https://issues.apache.org/jira/browse/NIFI-4227 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Reporter: Pierre Villard >Assignee: Pierre Villard > Attachments: TestForkRecord.xml > > > I'd like a way to fork a record containing an array of records into multiple > records, each one being an element of the array. In addition, if configured > to, I'd like the option to add to each new record the parent fields. 
> For example, if I've: > {noformat} > [{ > "id": 1, > "name": "John Doe", > "address": "123 My Street", > "city": "My City", > "state": "MS", > "zipCode": "1", > "country": "USA", > "accounts": [{ > "id": 42, > "balance": 4750.89 > }, { > "id": 43, > "balance": 48212.38 > }] > }, > { > "id": 2, > "name": "Jane Doe", > "address": "345 My Street", > "city": "Her City", > "state": "NY", > "zipCode": "2", > "country": "USA", > "accounts": [{ > "id": 45, > "balance": 6578.45 > }, { > "id": 46, > "balance": 34567.21 > }] > }] > {noformat} > Then, I want to generate records looking like: > {noformat} > [{ > "id": 42, > "balance": 4750.89 > }, { > "id": 43, > "balance": 48212.38 > }, { > "id": 45, > "balance": 6578.45 > }, { > "id": 46, > "balance": 34567.21 > }] > {noformat} > Or, if parent fields are included, looking like: > {noformat} > [{ > "name": "John Doe", > "address": "123 My Street", > "city": "My City", > "state": "MS", > "zipCode": "1", > "country": "USA", > "id": 42, > "balance": 4750.89 > }, { > "name": "John Doe", > "address": "123 My Street", > "city": "My City", > "state": "MS", > "zipCode": "1", > "country": "USA", > "id": 43, > "balance": 48212.38 > }, { > "name": "Jane Doe", > "address": "345 My Street", > "city": "Her City", > "state": "NY", > "zipCode": "2", > "country": "USA", > "id": 45, > "balance": 6578.45 > }, { > "name": "Jane Doe", > "address": "345 My Street", > "city": "Her City", > "state": "NY", > "zipCode": "2", > "country": "USA", > "id": 46, > "balance": 34567.21 > }] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] nifi issue #2037: NIFI-4227 - add a ForkRecord processor
Github user pvillard31 commented on the issue: https://github.com/apache/nifi/pull/2037 Hey @markap14, I finally found some time to get back on this guy. I added the "split" mode as you suggested with some unit tests and an example in the additional details doc. Let me know if it looks like what you had in mind. ---
[jira] [Commented] (NIFI-2979) PriorityAttributePrioritizer violates Comparator contract
[ https://issues.apache.org/jira/browse/NIFI-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212821#comment-16212821 ] ASF GitHub Bot commented on NIFI-2979: -- GitHub user jmark99 opened a pull request: https://github.com/apache/nifi/pull/2220 NIFI-2979 PriorityAttributePrioritizer violates Comparator contract NIFI-2979 PriorityAttributePrioritizer violates Comparator contract Modified the return value when both objects priority values are null to zero in order to match the expected return value based upon the Comparator contract. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X ] Is your initial contribution a single, squashed commit? ### For code changes: - [X ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [X ] Have you written or updated unit tests to verify your changes? - [X ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [X ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [X ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? 
### For documentation related changes: - [X ] Have you ensured that format looks appropriate for the output in which it is rendered? ### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jmark99/nifi NIFI-2979 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2220.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2220 commit cd611a717a6a988985a30a4c0dc5a0dc283278fd Author: Mark Owens Date: 2017-10-20T15:54:54Z NIFI-2979 PriorityAttributePrioritizer violates Comparator contract Modified the return value when both objects priority values are null to zero to match the expected return value based upon the Comparator contract. > PriorityAttributePrioritizer violates Comparator contract > - > > Key: NIFI-2979 > URL: https://issues.apache.org/jira/browse/NIFI-2979 > Project: Apache NiFi > Issue Type: Bug >Reporter: Brandon DeVries >Assignee: Mark Owens > > The documentation for the compare() method of the Comparator interface\[1] > states: > {quote} > The implementor must ensure that sgn(compare(x, y)) == -sgn(compare(y, x)) > for all x and y. 
> {quote} > However, in the PriorityAttributePrioritizer\[2], we have the following > snippet: > {code} > String o1Priority = o1.getAttribute(CoreAttributes.PRIORITY.key()); > String o2Priority = o2.getAttribute(CoreAttributes.PRIORITY.key()); > if (o1Priority == null && o2Priority == null) { > return -1; // this is not 0 to match FirstInFirstOut > } else if (o2Priority == null) { > return -1; > } else if (o1Priority == null) { > return 1; > } > {code} > This implies that for two non-null FlowFiles f1 and f2, both with null > "priority" attributes, we would have: > {code} > compare(f1, f2) == -1 > compare(f2, f1) == -1 > {code} > This would appear to violate the contract of the Comparator interface. The > comment suggests this was done to preserve FIFO ordering, however a return of > 0 should do the same, and satisfy the contract. > \[1] > https://docs.oracle.com/javase/7/docs/api/java/util/Comparator.html#compare%28T,%20T%29 > \[2] > https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-prioritizers/src/main/java/org/apache/nifi/prioritizer/PriorityAttributePrioritizer.java#L49-L57 -- This message was sent by A
[GitHub] nifi pull request #2220: NIFI-2979 PriorityAttributePrioritizer violates Com...
GitHub user jmark99 opened a pull request: https://github.com/apache/nifi/pull/2220 NIFI-2979 PriorityAttributePrioritizer violates Comparator contract NIFI-2979 PriorityAttributePrioritizer violates Comparator contract Modified the return value when both objects priority values are null to zero in order to match the expected return value based upon the Comparator contract. Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken: ### For all changes: - [X ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? - [X ] Does your PR title start with NIFI- where is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character. - [X ] Has your PR been rebased against the latest commit within the target branch (typically master)? - [X ] Is your initial contribution a single, squashed commit? ### For code changes: - [X ] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder? - [X ] Have you written or updated unit tests to verify your changes? - [X ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)? - [X ] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly? - [X ] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly? - [X ] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties? ### For documentation related changes: - [X ] Have you ensured that format looks appropriate for the output in which it is rendered? 
### Note: Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible. You can merge this pull request into a Git repository by running: $ git pull https://github.com/jmark99/nifi NIFI-2979 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/2220.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2220 commit cd611a717a6a988985a30a4c0dc5a0dc283278fd Author: Mark Owens Date: 2017-10-20T15:54:54Z NIFI-2979 PriorityAttributePrioritizer violates Comparator contract Modified the return value when both objects priority values are null to zero to match the expected return value based upon the Comparator contract. ---
[jira] [Created] (NIFI-4511) InvokeHTTP processor fails with TLS truststore config without keystore config
Richard Midwinter created NIFI-4511: --- Summary: InvokeHTTP processor fails with TLS truststore config without keystore config Key: NIFI-4511 URL: https://issues.apache.org/jira/browse/NIFI-4511 Project: Apache NiFi Issue Type: Bug Affects Versions: 1.4.0 Reporter: Richard Midwinter Attachments: trace.txt I think there's a new issue in the InvokeHTTP processor (probably introduced with the changes to move to OkHttp3) in 1.4.0. If you setup a TLS connection with a truststore but not a keystore the InvokeHTTP processors break with the stacktrace below. I'd suggest it perhaps shouldn't attempt to load the keystore if there isn't a path to one provided. A brief glance over the code suggests the issue would apply in reverse too (keystore provided, but not truststore), although that's perhaps a less common scenario. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
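The guard the reporter suggests, initializing only the key material that was actually configured, can be sketched with plain JSSE. This is an illustrative standalone helper under stated assumptions, not the InvokeHTTP/OkHttp3 code path; the store paths and passwords are hypothetical parameters. Passing null for an uninitialized slot makes `SSLContext.init` fall back to the JSSE defaults instead of failing on a missing store.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManager;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;

public class TlsContextSketch {

    // Builds an SSLContext from whichever stores are configured.
    // A null keystorePath or truststorePath leaves that slot null, and
    // SSLContext.init(null, null, null) then uses JSSE defaults rather
    // than throwing on an absent store.
    public static SSLContext build(String keystorePath, char[] keystorePass,
                                   String truststorePath, char[] truststorePass)
            throws Exception {
        KeyManager[] keyManagers = null;
        if (keystorePath != null) {
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(keystorePath)) {
                ks.load(in, keystorePass);
            }
            KeyManagerFactory kmf =
                    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(ks, keystorePass);
            keyManagers = kmf.getKeyManagers();
        }

        TrustManager[] trustManagers = null;
        if (truststorePath != null) {
            KeyStore ts = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(truststorePath)) {
                ts.load(in, truststorePass);
            }
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(ts);
            trustManagers = tmf.getTrustManagers();
        }

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(keyManagers, trustManagers, null); // null slots use JSSE defaults
        return ctx;
    }
}
```

The same structure covers the reverse case mentioned in the report (keystore provided, truststore absent), since each store is loaded independently.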
[jira] [Assigned] (NIFI-2979) PriorityAttributePrioritizer violates Comparator contract
[ https://issues.apache.org/jira/browse/NIFI-2979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Owens reassigned NIFI-2979: Assignee: Mark Owens > PriorityAttributePrioritizer violates Comparator contract > - > > Key: NIFI-2979 > URL: https://issues.apache.org/jira/browse/NIFI-2979 > Project: Apache NiFi > Issue Type: Bug >Reporter: Brandon DeVries >Assignee: Mark Owens > > The documentation for the compare() method of the Comparator interface\[1] > states: > {quote} > The implementor must ensure that sgn(compare(x, y)) == -sgn(compare(y, x)) > for all x and y. > {quote} > However, in the PriorityAttributePrioritizer\[2], we have the following > snippet: > {code} > String o1Priority = o1.getAttribute(CoreAttributes.PRIORITY.key()); > String o2Priority = o2.getAttribute(CoreAttributes.PRIORITY.key()); > if (o1Priority == null && o2Priority == null) { > return -1; // this is not 0 to match FirstInFirstOut > } else if (o2Priority == null) { > return -1; > } else if (o1Priority == null) { > return 1; > } > {code} > This implies that for two non-null FlowFiles f1 and f2, both with null > "priority" attributes, we would have: > {code} > compare(f1, f2) == -1 > compare(f2, f1) == -1 > {code} > This would appear to violate the contract of the Comparator interface. The > comment suggests this was done to preserve FIFO ordering, however a return of > 0 should do the same, and satisfy the contract. > \[1] > https://docs.oracle.com/javase/7/docs/api/java/util/Comparator.html#compare%28T,%20T%29 > \[2] > https://github.com/apache/nifi/blob/d838f61291d2582592754a37314911b701c6891b/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-prioritizers/src/main/java/org/apache/nifi/prioritizer/PriorityAttributePrioritizer.java#L49-L57 -- This message was sent by Atlassian JIRA (v6.4.14#64029)
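The contract-compliant fix described above can be sketched in isolation. This is a minimal standalone comparator assuming priorities are plain strings; the real PriorityAttributePrioritizer operates on NiFi's FlowFile API and has additional numeric-comparison logic not shown here.

```java
import java.util.Comparator;

// Minimal sketch satisfying sgn(compare(x, y)) == -sgn(compare(y, x)).
// Returning 0 when both priorities are null keeps FIFO ordering under a
// stable sort while honoring the Comparator contract; the original code
// returned -1 in that branch, which violated it.
public class PriorityComparatorSketch implements Comparator<String> {
    @Override
    public int compare(String o1Priority, String o2Priority) {
        if (o1Priority == null && o2Priority == null) {
            return 0; // was -1 in the reported snippet
        } else if (o2Priority == null) {
            return -1; // a non-null priority sorts ahead of a null one
        } else if (o1Priority == null) {
            return 1;
        }
        return o1Priority.compareTo(o2Priority);
    }
}
```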
[jira] [Commented] (NIFI-4510) ValidateRecord does not work properly with AvroRecordSetWriter
[ https://issues.apache.org/jira/browse/NIFI-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212625#comment-16212625 ] David Doran commented on NIFI-4510: --- I wonder whether https://issues.apache.org/jira/browse/NIFI-4509 is related to this. > ValidateRecord does not work properly with AvroRecordSetWriter > -- > > Key: NIFI-4510 > URL: https://issues.apache.org/jira/browse/NIFI-4510 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.4.0 > Environment: Hortonworks HDF Sandbox with inbuilt NiFi 1.2 disabled, > and NiFi 1.4 downloaded & running >Reporter: David Doran > Attachments: ValidateRecordTest.xml > > > When using CSVReader and JsonRecordSetWriter, the ValidateRecord processor > works as expected: Valid records are emitted as a flowfile on the valid > queue, invalid ones on the invalid queue. > However, when using CSVReader and AvroRecordSetWriter, the presence of an > invalid record causes the ValidateRecord processor to fail: Nothing is > emitted on any of the downstream connectors (failure, invalid or valid). > Instead the session is rolled back and the input file is left in the upstream > queue. 
> Here's the simple schema I've been using: > { >"type": "record", >"name": "test", >"fields": [ > { > "name": "Key", > "type": "string" > }, > { > "name": "ShouldBeLong", > "type": "long" > }] > } > And here's some sample CSV data: > TheKey,123 > TheKey,456 > TheKey,NotALong1 > TheKey,NotALong2 > TheKey,NotALong3 > TheKey,321 > TheKey,654 > Using CSVReader->JsonRecordSetWriter results in a flowfile in the valid path: > [ { > "Key" : "TheKey", > "ShouldBeLong" : 123 > }, { > "Key" : "TheKey", > "ShouldBeLong" : 456 > }, { > "Key" : "TheKey", > "ShouldBeLong" : 321 > }, { > "Key" : "TheKey", > "ShouldBeLong" : 654 > } ] > and in invalid path: > [ { > "Key" : "TheKey", > "ShouldBeLong" : "NotALong1" > }, { > "Key" : "TheKey", > "ShouldBeLong" : "NotALong2" > }, { > "Key" : "TheKey", > "ShouldBeLong" : "NotALong3" > } ] > … as expected. > With CSVReader->AvroRecordSetWriter, the ValidateRecord processor bulletins > errors repeatedly (because it keeps retrying) and the incoming flow file > remains in the input queue: > 22:40:22 UTC ERROR 015f100a-3b6f-1638-43d1-143f4ca4a816 > ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] > ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] failed to process due > to java.lang.NumberFormatException: For input string: "NotALong1"; rolling > back session: For input string: "NotALong1" > > 22:40:22 UTC ERROR 015f100a-3b6f-1638-43d1-143f4ca4a816 > ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] > ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] failed to process > session due to java.lang.NumberFormatException: For input string: > "NotALong1": For input string: "NotALong1" > Thanks, > Dave. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4510) ValidateRecord does not work properly with AvroRecordSetWriter
David Doran created NIFI-4510: - Summary: ValidateRecord does not work properly with AvroRecordSetWriter Key: NIFI-4510 URL: https://issues.apache.org/jira/browse/NIFI-4510 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.4.0 Environment: Hortonworks HDF Sandbox with inbuilt NiFi 1.2 disabled, and NiFi 1.4 downloaded & running Reporter: David Doran Attachments: ValidateRecordTest.xml When using CSVReader and JsonRecordSetWriter, the ValidateRecord processor works as expected: Valid records are emitted as a flowfile on the valid queue, invalid ones on the invalid queue. However, when using CSVReader and AvroRecordSetWriter, the presence of an invalid record causes the ValidateRecord processor to fail: Nothing is emitted on any of the downstream connectors (failure, invalid or valid). Instead the session is rolled back and the input file is left in the upstream queue. Here's the simple schema I've been using: { "type": "record", "name": "test", "fields": [ { "name": "Key", "type": "string" }, { "name": "ShouldBeLong", "type": "long" }] } And here's some sample CSV data: TheKey,123 TheKey,456 TheKey,NotALong1 TheKey,NotALong2 TheKey,NotALong3 TheKey,321 TheKey,654 Using CSVReader->JsonRecordSetWriter results in a flowfile in the valid path: [ { "Key" : "TheKey", "ShouldBeLong" : 123 }, { "Key" : "TheKey", "ShouldBeLong" : 456 }, { "Key" : "TheKey", "ShouldBeLong" : 321 }, { "Key" : "TheKey", "ShouldBeLong" : 654 } ] and in invalid path: [ { "Key" : "TheKey", "ShouldBeLong" : "NotALong1" }, { "Key" : "TheKey", "ShouldBeLong" : "NotALong2" }, { "Key" : "TheKey", "ShouldBeLong" : "NotALong3" } ] … as expected. 
With CSVReader->AvroRecordSetWriter, the ValidateRecord processor bulletins errors repeatedly (because it keeps retrying) and the incoming flow file remains in the input queue: 22:40:22 UTC ERROR 015f100a-3b6f-1638-43d1-143f4ca4a816 ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] failed to process due to java.lang.NumberFormatException: For input string: "NotALong1"; rolling back session: For input string: "NotALong1" 22:40:22 UTC ERROR 015f100a-3b6f-1638-43d1-143f4ca4a816 ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] ValidateRecord[id=015f100a-3b6f-1638-43d1-143f4ca4a816] failed to process session due to java.lang.NumberFormatException: For input string: "NotALong1": For input string: "NotALong1" Thanks, Dave. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
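The failure mode above is the string-to-long coercion throwing NumberFormatException instead of the offending record being routed to the invalid relationship. The intended per-record behavior can be sketched with the report's "ShouldBeLong" field; this is a simplified stand-in, not NiFi's record API, and the method names are hypothetical.

```java
public class ValidateSketch {

    // Returns true when the value can be coerced to the schema's "long" type.
    // Catching the coercion failure per record is what allows routing to
    // "invalid" instead of rolling back the whole session.
    public static boolean isValidLong(String value) {
        try {
            Long.parseLong(value);
            return true;
        } catch (NumberFormatException e) {
            return false;
        }
    }

    // Counts how many sample rows would reach the "valid" relationship.
    public static int countValid(String[] shouldBeLongValues) {
        int valid = 0;
        for (String v : shouldBeLongValues) {
            if (isValidLong(v)) {
                valid++;
            }
        }
        return valid;
    }
}
```

Against the sample CSV in the report, this partitioning yields the four valid and three invalid records that the JsonRecordSetWriter path produces.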
[jira] [Commented] (MINIFICPP-262) Rocksdb fails to link
[ https://issues.apache.org/jira/browse/MINIFICPP-262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212612#comment-16212612 ] Andrew Christianson commented on MINIFICPP-262: --- I think we're OK on CentOS 7.4. I isolated the issue down to the cmake target. My CLion was set to 'Build All,' which was failing. When I did a fresh cmake/make from the command line, with the default target, everything succeeded. It looks like librocksdb.so is triggered by 'Build All,' but we don't need the shared library. With the default target, or with the minifiexe target, only the static lib is built, which finishes successfully. > Rocksdb fails to link > - > > Key: MINIFICPP-262 > URL: https://issues.apache.org/jira/browse/MINIFICPP-262 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson > > Rocksdb fails to link when building on CentOS 7.4. [~calebj] seems to be > having the same issue on Ubuntu 14.04 as part of MINIFI-244. > {code} > [ 38%] Linking CXX static library librocksdb.a > [ 38%] Linking CXX shared library librocksdb.so > [ 60%] Built target rocksdb > [ 61%] Building CXX object > CMakeFiles/Tests.dir/thirdparty/spdlog-20170710/include/spdlog/dummy.cpp.o > [ 61%] Building CXX object > CMakeFiles/ControllerServiceIntegrationTests.dir/libminifi/test/TestBase.cpp.o > /usr/bin/ld: CMakeFiles/build_version.dir/__/__/build_version.cc.o: > relocation R_X86_64_32 against `.data' can not be used when making a shared > object; recompile with -fPIC > CMakeFiles/build_version.dir/__/__/build_version.cc.o: error adding symbols: > Bad value > collect2: error: ld returned 1 exit status > gmake[2]: *** [thirdparty/rocksdb/librocksdb.so.5.7.0] Error 1 > gmake[1]: *** [thirdparty/rocksdb/CMakeFiles/rocksdb-shared.dir/all] Error 2 > gmake[1]: *** Waiting for unfinished jobs > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (NIFI-3248) GetSolr can miss recently updated documents
[ https://issues.apache.org/jira/browse/NIFI-3248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212533#comment-16212533 ] ASF GitHub Bot commented on NIFI-3248: -- Github user JohannesDaniel commented on a diff in the pull request: https://github.com/apache/nifi/pull/2199#discussion_r145946147 --- Diff: nifi-nar-bundles/nifi-solr-bundle/nifi-solr-processors/src/main/java/org/apache/nifi/processors/solr/GetSolr.java --- @@ -66,42 +79,72 @@ import org.apache.solr.common.SolrDocument; import org.apache.solr.common.SolrDocumentList; import org.apache.solr.common.SolrInputDocument; +import org.apache.solr.common.params.CursorMarkParams; -@Tags({"Apache", "Solr", "Get", "Pull"}) +@Tags({"Apache", "Solr", "Get", "Pull", "Records"}) @InputRequirement(Requirement.INPUT_FORBIDDEN) -@CapabilityDescription("Queries Solr and outputs the results as a FlowFile") +@CapabilityDescription("Queries Solr and outputs the results as a FlowFile in the format of XML or using a Record Writer") +@Stateful(scopes = {Scope.CLUSTER}, description = "Stores latest date of Date Field so that the same data will not be fetched multiple times.") public class GetSolr extends SolrProcessor { -public static final PropertyDescriptor SOLR_QUERY = new PropertyDescriptor -.Builder().name("Solr Query") -.description("A query to execute against Solr") +public static final String STATE_MANAGER_FILTER = "stateManager_filter"; +public static final String STATE_MANAGER_CURSOR_MARK = "stateManager_cursorMark"; +public static final AllowableValue MODE_XML = new AllowableValue("XML"); +public static final AllowableValue MODE_REC = new AllowableValue("Records"); --- End diff -- Hmm, and dynamic fields could become a problem... I think this is not possible. 
> GetSolr can miss recently updated documents > --- > > Key: NIFI-3248 > URL: https://issues.apache.org/jira/browse/NIFI-3248 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.0.0, 0.5.0, 0.6.0, 0.5.1, 0.7.0, 0.6.1, 1.1.0, 0.7.1, > 1.0.1 >Reporter: Koji Kawamura >Assignee: Johannes Peter > Attachments: nifi-flow.png, query-result-with-curly-bracket.png, > query-result-with-square-bracket.png > > > GetSolr holds the last query timestamp so that it only fetches documents > those have been added or updated since the last query. > However, GetSolr misses some of those updated documents, and once the > documents date field value becomes older than last query timestamp, the > document won't be able to be queried by GetSolr any more. > This JIRA is for tracking the process of investigating this behavior, and > discussion on them. > Here are things that can be a cause of this behavior: > |#|Short description|Should we address it?| > |1|Timestamp range filter, curly or square bracket?|No| > |2|Timezone difference between update and query|Additional docs might be > helpful| > |3|Lag comes from NearRealTIme nature of Solr|Should be documented at least, > add 'commit lag-time'?| > h2. 1. Timestamp range filter, curly or square bracket? > At the first glance, using curly and square bracket in mix looked strange > ([source > code|https://github.com/apache/nifi/blob/support/nifi-0.5.x/nifi-nar-bundles/nifi-solr-bundle/nifi-solr-processors/src/main/java/org/apache/nifi/processors/solr/GetSolr.java#L202]). > But these difference has a meaning. > The square bracket on the range query is inclusive and the curly bracket is > exclusive. If we use inclusive on both sides and a document has a time stamp > exactly on the boundary then it could be returned in two consecutive > executions, and we only want it in one. > This is intentional, and it should be as it is. > h2. 2. 
Timezone difference between update and query > Solr treats date fields as [UTC > representation|https://cwiki.apache.org/confluence/display/solr/Working+with+Dates|]. > If date field String value of an updated document represents time without > timezone, and NiFi is running on an environment using timezone other than > UTC, GetSolr can't perform date range query as users expect. > Let's say NiFi is running with JST(UTC+9). A process added a document to Solr > at 15:00 JST. But the date field doesn't have timezone. So, Solr indexed it > as 15:00 UTC. Then GetSolr performs range query at 15:10 JST, targeting any > documents updated from 15:00 to 15:10 JST. GetSolr formatted dates using UTC, > i.e. 6:00 to 6:10 UTC. The updated document won't be matched with the date > range filter. > To avoid this, updated documents must have proper timezone in date field > string representation. > If one uses NiFi expressio
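The two points above, formatting date literals in UTC regardless of the JVM's default timezone and mixing an inclusive with an exclusive bound so a boundary document is returned by exactly one run, can be sketched as a small helper. The field name and exact filter layout are illustrative, not GetSolr's actual internals.

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SolrRangeSketch {

    // Solr stores and compares date fields as UTC, so the formatter must
    // be pinned to UTC or a JVM running in another timezone (e.g. JST)
    // would produce a shifted, non-matching window.
    private static final SimpleDateFormat UTC_FORMAT;
    static {
        UTC_FORMAT = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
        UTC_FORMAT.setTimeZone(TimeZone.getTimeZone("UTC"));
    }

    // Square bracket = inclusive lower bound, curly bracket = exclusive
    // upper bound: consecutive windows can share a boundary timestamp
    // without returning the same document twice.
    public static String rangeFilter(String field, Date from, Date to) {
        return field + ":[" + UTC_FORMAT.format(from)
                + " TO " + UTC_FORMAT.format(to) + "}";
    }
}
```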
[jira] [Updated] (NIFI-4509) Validate DATA with record
[ https://issues.apache.org/jira/browse/NIFI-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4509: - Component/s: (was: Core Framework) Extensions > Validate DATA with record > - > > Key: NIFI-4509 > URL: https://issues.apache.org/jira/browse/NIFI-4509 > Project: Apache NiFi > Issue Type: Wish > Components: Extensions >Affects Versions: 1.4.0 > Environment: Nifi 1.4.0 Windows >Reporter: Alexandre Côté > > Hello, > We cannot validate date format with the validateRecord. > Alex -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (NIFI-4509) Validate date format in records
[ https://issues.apache.org/jira/browse/NIFI-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard updated NIFI-4509: - Summary: Validate date format in records (was: Validate DATA with record) > Validate date format in records > --- > > Key: NIFI-4509 > URL: https://issues.apache.org/jira/browse/NIFI-4509 > Project: Apache NiFi > Issue Type: Wish > Components: Extensions >Affects Versions: 1.4.0 > Environment: Nifi 1.4.0 Windows >Reporter: Alexandre Côté > > Hello, > We cannot validate date format with the validateRecord. > Alex -- This message was sent by Atlassian JIRA (v6.4.14#64029)
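What the reporter appears to be asking for, rejecting records whose date strings do not match an expected pattern, can be sketched with strict java.time parsing. The helper and its pattern parameter are hypothetical illustrations, not anything ValidateRecord currently exposes.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.time.format.ResolverStyle;

public class DateFormatSketch {

    // Strict resolution rejects impossible dates like "2017-02-30" that
    // lenient parsing would silently roll over. Note the pattern must use
    // 'u' (year) rather than 'y' (year-of-era) under ResolverStyle.STRICT,
    // since year-of-era requires an era field.
    public static boolean matchesFormat(String value, String pattern) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern(pattern)
                .withResolverStyle(ResolverStyle.STRICT);
        try {
            LocalDate.parse(value, f);
            return true;
        } catch (DateTimeParseException e) {
            return false;
        }
    }
}
```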
[jira] [Comment Edited] (MINIFICPP-262) Rocksdb fails to link
[ https://issues.apache.org/jira/browse/MINIFICPP-262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16212020#comment-16212020 ] Caleb Johnson edited comment on MINIFICPP-262 at 10/20/17 11:31 AM: I think that updating civetweb would be for the best, but its tests need to be excluded from the `make tests` list and/or moved to their own civet-tests target. auto_ptr is only present in jsoncpp, yaml-cpp and catch. There aren't any in the MiNiFi codebase, so those issues are for the third-party maintainers to address. Those headers are just included in so many places that it floods my build log. From this point on, I might just use an independently bootstrapped debian or ubuntu chroot to do dev builds. It's the second best thing to Docker, which Cloud9 doesn't support. EDIT: I can't do _that_ either, because bind mounts are forbidden. > Rocksdb fails to link > - > > Key: MINIFICPP-262 > URL: https://issues.apache.org/jira/browse/MINIFICPP-262 > Project: NiFi MiNiFi C++ > Issue Type: Bug >Reporter: Andrew Christianson > > Rocksdb fails to link when building on CentOS 7.4. [~calebj] seems to be > having the same issue on Ubuntu 14.04 as part of MINIFI-244. 
> {code} > [ 38%] Linking CXX static library librocksdb.a > [ 38%] Linking CXX shared library librocksdb.so > [ 60%] Built target rocksdb > [ 61%] Building CXX object > CMakeFiles/Tests.dir/thirdparty/spdlog-20170710/include/spdlog/dummy.cpp.o > [ 61%] Building CXX object > CMakeFiles/ControllerServiceIntegrationTests.dir/libminifi/test/TestBase.cpp.o > /usr/bin/ld: CMakeFiles/build_version.dir/__/__/build_version.cc.o: > relocation R_X86_64_32 against `.data' can not be used when making a shared > object; recompile with -fPIC > CMakeFiles/build_version.dir/__/__/build_version.cc.o: error adding symbols: > Bad value > collect2: error: ld returned 1 exit status > gmake[2]: *** [thirdparty/rocksdb/librocksdb.so.5.7.0] Error 1 > gmake[1]: *** [thirdparty/rocksdb/CMakeFiles/rocksdb-shared.dir/all] Error 2 > gmake[1]: *** Waiting for unfinished jobs > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (NIFI-4509) Validate DATA with record
Alexandre Côté created NIFI-4509: Summary: Validate DATA with record Key: NIFI-4509 URL: https://issues.apache.org/jira/browse/NIFI-4509 Project: Apache NiFi Issue Type: Wish Components: Core Framework Affects Versions: 1.4.0 Environment: Nifi 1.4.0 Windows Reporter: Alexandre Côté Hello, We cannot validate date format with the validateRecord. Alex -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (NIFI-1625) ExtractText - Description of Capture Group is not clear
[ https://issues.apache.org/jira/browse/NIFI-1625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-1625. -- Resolution: Done > ExtractText - Description of Capture Group is not clear > --- > > Key: NIFI-1625 > URL: https://issues.apache.org/jira/browse/NIFI-1625 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 0.4.1 >Reporter: Randy Bovay >Priority: Trivial > > ExtractText ONLY captures the first 1024 (default) characters. > The help text says this applies to the capture group values. It's not clear > that this is on the 'input', but leads one to believe it's on the actual new > properties that are being captured. > > Better wording should be > "Specifies the Maximum length of the input record that will be evaluated for > the capture. The input record will only be evaluated Up TO this length, and > the rest will be ignored" -- This message was sent by Atlassian JIRA (v6.4.14#64029)
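The clarified behavior, that the limit applies to how much of the input is evaluated rather than to the captured properties, can be sketched as follows. The 1024 default is from the report; the helper itself is a hypothetical illustration, not ExtractText's implementation.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ExtractSketch {

    // Evaluates the pattern against at most maxLength characters of the
    // input; anything beyond that window is ignored, which is why a match
    // located past the limit is silently lost rather than truncated.
    public static String firstCapture(String regex, String input, int maxLength) {
        String window = input.length() > maxLength
                ? input.substring(0, maxLength)
                : input;
        Matcher m = Pattern.compile(regex).matcher(window);
        return m.find() ? m.group(1) : null;
    }
}
```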
[jira] [Resolved] (NIFI-3348) G1 Values Not staying in correct column on refresh. In Cluster UI JVM Tab.
[ https://issues.apache.org/jira/browse/NIFI-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pierre Villard resolved NIFI-3348. -- Resolution: Duplicate > G1 Values Not staying in correct column on refresh. In Cluster UI JVM Tab. > -- > > Key: NIFI-3348 > URL: https://issues.apache.org/jira/browse/NIFI-3348 > Project: Apache NiFi > Issue Type: Bug >Affects Versions: 1.1.1 >Reporter: Randy Bovay >Priority: Minor > > The Values in the G1 Old Generation and G1 Young Generation will flip back > and forth in each column for the node as you hit refresh from the UI. > These are in the Cluster UI, JVM Tab. -- This message was sent by Atlassian JIRA (v6.4.14#64029)