[jira] [Commented] (NIFI-5830) RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
[ https://issues.apache.org/jira/browse/NIFI-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690426#comment-16690426 ] ASF GitHub Bot commented on NIFI-5830: -- GitHub user javajefe opened a pull request:

    https://github.com/apache/nifi/pull/3176

    NIFI-5830 RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment

Thank you for submitting a contribution to Apache NiFi. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [Y] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message? NIFI-5830
- [Y] Does your PR title start with NIFI-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [Y] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [Y] Is your initial contribution a single, squashed commit?

### For code changes:
- [Y] Have you ensured that the full suite of tests is executed via mvn -Pcontrib-check clean install at the root nifi folder?
- [N] Have you written or updated unit tests to verify your changes?
- [N/A] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [N/A] If applicable, have you updated the LICENSE file, including the main LICENSE file under nifi-assembly?
- [N/A] If applicable, have you updated the NOTICE file, including the main NOTICE file found under nifi-assembly?
- [N/A] If adding new Properties, have you added .displayName in addition to .name (programmatic access) for each of the new properties?

### For documentation related changes:
- [Y] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/javajefe/nifi NIFI-5830

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi/pull/3176.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #3176

commit 979da56175d2943813c866afb52bca6704d91f35
Author: Alexander Bukarev
Date: 2018-11-17T06:39:52Z

    Fixed NIFI-5830 RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment

> RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
>
> Key: NIFI-5830
> URL: https://issues.apache.org/jira/browse/NIFI-5830
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.8.0
> Environment: Ubuntu 16 LTS, NiFi 1.8.0
> Reporter: Alexander Bukarev
> Priority: Major
[jira] [Updated] (NIFI-5830) RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
[ https://issues.apache.org/jira/browse/NIFI-5830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Bukarev updated NIFI-5830:

Status: Patch Available (was: Open)

I have implemented a PR to fix the issue: https://github.com/apache/nifi/pull/3176

> RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
>
> Key: NIFI-5830
> URL: https://issues.apache.org/jira/browse/NIFI-5830
> Project: Apache NiFi
> Issue Type: Bug
> Components: Core Framework
> Affects Versions: 1.8.0
> Environment: Ubuntu 16 LTS, NiFi 1.8.0
> Reporter: Alexander Bukarev
> Priority: Major

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5830) RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
Alexander Bukarev created NIFI-5830:
---

Summary: RedisConnectionPoolService does not work with Standalone Redis using non-localhost deployment
Key: NIFI-5830
URL: https://issues.apache.org/jira/browse/NIFI-5830
Project: Apache NiFi
Issue Type: Bug
Components: Core Framework
Affects Versions: 1.8.0
Environment: Ubuntu 16 LTS, NiFi 1.8.0
Reporter: Alexander Bukarev

The controller service {{RedisConnectionPoolService}} does not work with Standalone Redis when it is deployed on a host other than {{localhost}} (or when Redis uses a port other than {{6379}}). So the only way to use {{RedisConnectionPoolService}} is to deploy Redis on {{localhost}} and run it on the default port {{6379}}.

*Steps*:
Let's assume our Redis is deployed on host *redis* (not {{localhost}}) and it listens on port 6379 (I use Docker for that):
# Create a {{PutDistributedMapCache}} processor
# Configure the processor with {{RedisDistributedMapCacheClientService}}
# Create a new controller service: {{RedisConnectionPoolService}}
#* Choose *Standalone* Redis Mode
#* Use *redis:6379* as the Connection String
# Connect some incoming flow to the {{PutDistributedMapCache}} processor (I've used {{GetFile}} as a producer) and run the whole flow. Allow {{GetFile}} to consume some file (if you use {{GetFile}} to reproduce) and wait some time until {{PutDistributedMapCache}} is scheduled.

Result:
The processor fails to run, and we can see the error in the logs:

{panel}
2018-11-17 06:06:47,572 WARN [Timer-Driven Process Thread-2] o.a.n.controller.tasks.ConnectableTask Administratively Yielding PutDistributedMapCache[id=2041c81f-0167-1000-c82f-d7da2155dfb4] due to uncaught Exception: org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
	at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:281)
	at org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:464)
	at org.apache.nifi.redis.service.RedisConnectionPoolService.getConnection(RedisConnectionPoolService.java:89)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
	at com.sun.proxy.$Proxy128.getConnection(Unknown Source)
	at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.withConnection(RedisDistributedMapCacheClientService.java:343)
	at org.apache.nifi.redis.service.RedisDistributedMapCacheClientService.put(RedisDistributedMapCacheClientService.java:189)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
	at com.sun.proxy.$Proxy124.put(Unknown Source)
	at org.apache.nifi.processors.standard.PutDistributedMapCache.onTrigger(PutDistributedMapCache.java:202)
{panel}

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
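The stack trace above shows the Jedis factory failing to connect, which suggests the configured Connection String never reaches it and it falls back to its defaults (localhost:6379). The actual fix is in PR #3176; purely as an illustration of the kind of parsing involved (the class and method names below are hypothetical, not NiFi's API), a standalone connection string like `redis:6379` must be split into host and port before the connection factory is configured:

```java
// Hypothetical sketch only — NOT the code from PR #3176. It illustrates
// splitting a standalone connection string ("host" or "host:port") into the
// host/port pair a Jedis connection factory would need.
class ConnectionStringParser {

    // Returns {host, port}; defaults to Redis's standard port 6379 when omitted.
    static String[] parseHostPort(String connectionString) {
        final String trimmed = connectionString.trim();
        final int idx = trimmed.lastIndexOf(':');
        if (idx < 0) {
            return new String[] { trimmed, "6379" };
        }
        return new String[] { trimmed.substring(0, idx), trimmed.substring(idx + 1) };
    }
}
```

With the reporter's setup, `parseHostPort("redis:6379")` would yield host `redis` and port `6379` instead of the factory defaults.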
[jira] [Created] (NIFI-5829) Create Lookup Controller Services for RecordSetWriter and RecordReader
Peter Wicks created NIFI-5829: - Summary: Create Lookup Controller Services for RecordSetWriter and RecordReader Key: NIFI-5829 URL: https://issues.apache.org/jira/browse/NIFI-5829 Project: Apache NiFi Issue Type: Improvement Reporter: Peter Wicks Assignee: Peter Wicks In the same way as the new DBCP Lookup Service, create lookup services for RecordSetWriter and RecordReader. This will help to make flows much more generic, and allow for some very flexible processors. Example: A processor like PutDatabaseRecord will be able to push to any database in any readable record format from a single processor by configuring everything through the lookup services. Example: ConvertRecord will be able to convert from any number of various input formats to a constant output format by using the RecordReader Lookup, and a fixed RecordSetWriter. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
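The lookup idea in NIFI-5829 can be sketched as a simple registry that resolves a concrete reader from a runtime key (in NiFi this would be a flow-file attribute). The names below (`RecordReaderLookup`, `Reader`) are illustrative only and are not NiFi's controller-service API:

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch of a lookup service, assuming hypothetical names.
// A real NiFi lookup controller service would resolve a RecordReaderFactory
// per flow file; here the "key" stands in for a flow-file attribute value.
class RecordReaderLookup {

    interface Reader {
        String format();
    }

    private final Map<String, Reader> registry = new HashMap<>();

    void register(String key, Reader reader) {
        registry.put(key, reader);
    }

    // Resolves a concrete reader, e.g. key "csv" or "avro".
    Reader lookup(String key) {
        final Reader reader = registry.get(key);
        if (reader == null) {
            throw new IllegalArgumentException("No reader registered for: " + key);
        }
        return reader;
    }
}
```

This is what makes a ConvertRecord-style flow generic: the processor asks the lookup for a reader per incoming record set, while keeping a single fixed writer.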
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16690008#comment-16690008 ] ASF GitHub Bot commented on NIFI-4914: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2882 am building against master - is 1.9.0. changing it to 1.9.0-SNAPSHOT and ignoring last commit addresses it. travis fails for a similar reason i'd guess. Probably makes sense to ditch the merge commits, squash, rebase to latest. But in any case is building now i think > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689991#comment-16689991 ] ASF GitHub Bot commented on NIFI-4914: -- Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2882

As a sanity check I did the following on my local MacBook, and was able to build NiFi successfully from my branch:

    cd /tmp
    git clone g...@github.com:david-streamlio/nifi.git
    cd nifi/
    git checkout NIFI-4914
    mvn clean install -DskipTests
    ...
    [INFO]
    [INFO] BUILD SUCCESS
    [INFO]
    [INFO] Total time: 09:48 min
    [INFO] Finished at: 2018-11-16T12:45:01-08:00
    [INFO] Final Memory: 318M/1815M
    [INFO]

> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Affects Versions: 1.6.0
> Reporter: David Kjerrumgaard
> Priority: Minor
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689975#comment-16689975 ] ASF GitHub Bot commented on NIFI-4914: -- Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2882 @joewitt So the build doesn't work for you locally? I am able to build it locally. The latest commit was trying suggestion number 2 from the following wiki page that is given in the build output: https://cwiki.apache.org//confluence/display/MAVEN/ProjectBuildingException https://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException "You intent to use a parent POM from the local filesystem but Maven didn't use that. Please verify that the element in the child is properly set and that the POM at that location has actually the version you want to use." > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689959#comment-16689959 ] ASF GitHub Bot commented on NIFI-4914: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2882 the build doesn't work...at least for me or travis. also not with the new patch. i've not looked into why yet but we dont use relative paths anywhere that i know of so not sure what the latest change was for/doing > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689934#comment-16689934 ] ASF GitHub Bot commented on NIFI-4914: -- Github user david-streamlio commented on the issue: https://github.com/apache/nifi/pull/2882

Thanks @joewitt. I compared the nifi-pulsar-bundle/pom.xml against other bundles in nifi-nar-bundles, e.g. nifi-redis-bundle/pom.xml, nifi-rethinkdb-bundle/pom.xml, and nifi-parquet-bundle/pom.xml, and they all define their parent module as follows: org.apache.nifi nifi-parquet-bundle 1.8.0-SNAPSHOT. Which is what I have as the definition in the nifi-pulsar-bundle/pom.xml file.

> Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
>
> Key: NIFI-4914
> URL: https://issues.apache.org/jira/browse/NIFI-4914
> Project: Apache NiFi
> Issue Type: New Feature
> Components: Extensions
> Affects Versions: 1.6.0
> Reporter: David Kjerrumgaard
> Priority: Minor
> Original Estimate: 168h
> Remaining Estimate: 168h
>
> Create record-based processors for Apache Pulsar

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
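The pasted parent definition above lost its XML tags in the mail archive, leaving only the bare tokens. As a sketch of the shape being discussed (the parent `artifactId` shown here is an assumption, since the stripped tags make the exact value ambiguous), a NAR bundle's parent declaration typically looks like:

```xml
<!-- Hypothetical sketch of a bundle's parent declaration; the tokens visible
     in the comment (org.apache.nifi / 1.8.0-SNAPSHOT) suggest this shape.
     No <relativePath> override is present, matching the other bundles. -->
<parent>
    <groupId>org.apache.nifi</groupId>
    <artifactId>nifi-nar-bundles</artifactId>
    <version>1.8.0-SNAPSHOT</version>
</parent>
```

When `<relativePath>` is omitted, Maven defaults to `../pom.xml`, which is why a version mismatch between the child and the checked-out parent (1.8.0 vs 1.9.0-SNAPSHOT, as joewitt notes) breaks resolution.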
[jira] [Commented] (NIFI-4914) Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, PublishPulsarRecord
[ https://issues.apache.org/jira/browse/NIFI-4914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689931#comment-16689931 ] ASF GitHub Bot commented on NIFI-4914: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/2882 travis and local builds dont work due to some parent relative path issues. can you please confirm you're following the pattern that other components follow with regards to pom contents > Implement record model processor for Pulsar, i.e. ConsumePulsarRecord, > PublishPulsarRecord > -- > > Key: NIFI-4914 > URL: https://issues.apache.org/jira/browse/NIFI-4914 > Project: Apache NiFi > Issue Type: New Feature > Components: Extensions >Affects Versions: 1.6.0 >Reporter: David Kjerrumgaard >Priority: Minor > Original Estimate: 168h > Remaining Estimate: 168h > > Create record-based processors for Apache Pulsar -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-680) Remove XCode 7.3 from travis builds
[ https://issues.apache.org/jira/browse/MINIFICPP-680?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689905#comment-16689905 ] ASF GitHub Bot commented on MINIFICPP-680: -- GitHub user phrocker opened a pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/443

    MINIFICPP-680: Remove Xcode 7.3

Thank you for submitting a contribution to Apache NiFi - MiNiFi C++. In order to streamline the review of the contribution we ask you to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced in the commit message?
- [ ] Does your PR title start with MINIFICPP-XXXX where XXXX is the JIRA number you are trying to resolve? Pay particular attention to the hyphen "-" character.
- [ ] Has your PR been rebased against the latest commit within the target branch (typically master)?
- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
- [ ] If applicable, have you updated the LICENSE file?
- [ ] If applicable, have you updated the NOTICE file?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build issues and submit an update to your PR as soon as possible.
You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/phrocker/nifi-minifi-cpp MINIFICPP-680

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/nifi-minifi-cpp/pull/443.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #443

commit cff647d4b20931c2614614b5ef058544d138826b
Author: Marc Parisi
Date: 2018-11-16T18:29:29Z

    MINIFICPP-680: Remove Xcode 7.3

> Remove XCode 7.3 from travis builds
>
> Key: MINIFICPP-680
> URL: https://issues.apache.org/jira/browse/MINIFICPP-680
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Mr TheSegfault
> Assignee: Mr TheSegfault
> Priority: Major
>
> Per an email exchange with Travis support, it seems that we may not want to perform 7.3 builds in Travis. Not sure we need to, but it seems that the software we use for deps is dropping binary support.
> "Hey there, Marc! Thanks for writing in today.
> So it looks like what happened here is Homebrew no longer has a pre-compiled binary for that version of Xcode- it looks like it was discontinued due to age. Since it's not finding one, it's trying to compile a new version from the source, which is what's resulting in that timeout. I've gone ahead and increased your log silence timeout to 30 minutes, but even that might not be enough- since this was an upstream change that is unfortunately outside of our control, the only real option here to avoid it trying to compile a new binary from the source would be to switch to a newer version of Xcode.
> Sorry I'm not able to do more here, but I hope this at least helps. Just let me know if you have any other questions or concerns I can help you with!
> Best,
> Grant"

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail
[ https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689854#comment-16689854 ] ASF GitHub Bot commented on NIFI-5744: -- Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3107#discussion_r234310982

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestExecuteSQLRecord.java ---
@@ -350,6 +357,37 @@ public void invokeOnTriggerRecords(final Integer queryTimeout, final String quer
         assertEquals(durationTime, fetchTime + executionTime);
     }

+    @SuppressWarnings("unchecked")
+    @Test
+    public void testWithSqlExceptionErrorProcessingResultSet() throws Exception {
+        DBCPService dbcp = mock(DBCPService.class);
+        Connection conn = mock(Connection.class);
+        when(dbcp.getConnection(any(Map.class))).thenReturn(conn);
+        when(dbcp.getIdentifier()).thenReturn("mockdbcp");
+        PreparedStatement statement = mock(PreparedStatement.class);
+        when(conn.prepareStatement(anyString())).thenReturn(statement);
+        when(statement.execute()).thenReturn(true);
+        ResultSet rs = mock(ResultSet.class);
+        when(statement.getResultSet()).thenReturn(rs);
+        // Throw an exception the first time you access the ResultSet, this is after the flow file to hold the results has been created.
+        when(rs.getMetaData()).thenThrow(new SQLException("test execute statement failed"));
+
--- End diff --

I ran the tests, but this one failed because the required `RecordWriter` is missing.
I think you need:

```
MockRecordWriter recordWriter = new MockRecordWriter(null, true, -1);
runner.addControllerService("writer", recordWriter);
runner.setProperty(ExecuteSQLRecord.RECORD_WRITER_FACTORY, "writer");
runner.enableControllerService(recordWriter);
```

Here is the error text:

> java.lang.AssertionError: Processor has 1 validation failures:
> 'Record Writer' is invalid because Record Writer is required

> Put exception message to attribute while ExecuteSQL fail
>
> Key: NIFI-5744
> URL: https://issues.apache.org/jira/browse/NIFI-5744
> Project: Apache NiFi
> Issue Type: Improvement
> Affects Versions: 1.7.1
> Reporter: Deon Huang
> Assignee: Deon Huang
> Priority: Minor
>
> In some scenario, it would be great if we could have different behavior based on exception.
> Better error tracking afterwards in attribute format instead of tracking in log.
> For example, if it's connection refused exception due to wrong url.
> We won't want to retry and error message attribute would be helpful to keep track of.
> While it's other scenario that database temporary unavailable, we should retry it based on should retry exception.
> Should be a quick fix at AbstractExecuteSQL before transfer flowfile to failure relationship
> {code:java}
> session.transfer(fileToProcess, REL_FAILURE);
> {code}
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #3107: NIFI-5744: Put exception message to attribute while...
Github user patricker commented on a diff in the pull request: https://github.com/apache/nifi/pull/3107#discussion_r234310982

--- Diff: nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/test/java/org/apache/nifi/processors/standard/TestExecuteSQLRecord.java ---
@@ -350,6 +357,37 @@ public void invokeOnTriggerRecords(final Integer queryTimeout, final String quer
         assertEquals(durationTime, fetchTime + executionTime);
     }

+    @SuppressWarnings("unchecked")
+    @Test
+    public void testWithSqlExceptionErrorProcessingResultSet() throws Exception {
+        DBCPService dbcp = mock(DBCPService.class);
+        Connection conn = mock(Connection.class);
+        when(dbcp.getConnection(any(Map.class))).thenReturn(conn);
+        when(dbcp.getIdentifier()).thenReturn("mockdbcp");
+        PreparedStatement statement = mock(PreparedStatement.class);
+        when(conn.prepareStatement(anyString())).thenReturn(statement);
+        when(statement.execute()).thenReturn(true);
+        ResultSet rs = mock(ResultSet.class);
+        when(statement.getResultSet()).thenReturn(rs);
+        // Throw an exception the first time you access the ResultSet, this is after the flow file to hold the results has been created.
+        when(rs.getMetaData()).thenThrow(new SQLException("test execute statement failed"));
+
--- End diff --

I ran the tests, but this one failed because the required `RecordWriter` is missing. I think you need:

```
MockRecordWriter recordWriter = new MockRecordWriter(null, true, -1);
runner.addControllerService("writer", recordWriter);
runner.setProperty(ExecuteSQLRecord.RECORD_WRITER_FACTORY, "writer");
runner.enableControllerService(recordWriter);
```

Here is the error text:

> java.lang.AssertionError: Processor has 1 validation failures:
> 'Record Writer' is invalid because Record Writer is required
---
[jira] [Commented] (NIFI-5828) ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0
[ https://issues.apache.org/jira/browse/NIFI-5828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689837#comment-16689837 ] Colin Dean commented on NIFI-5828: -- I think more of the {{executesql.*}} attributes [set|https://github.com/apache/nifi/blob/102a5288efb2a22cd54815dd7331dfc5826aee91/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java#L280-L284] are similarly affected:

{code:java}
attributesToAdd.put(RESULT_ROW_COUNT, String.valueOf(nrOfRows.get()));
attributesToAdd.put(RESULT_QUERY_DURATION, String.valueOf(executionTimeElapsed + fetchTimeElapsed));
attributesToAdd.put(RESULT_QUERY_EXECUTION_TIME, String.valueOf(executionTimeElapsed));
attributesToAdd.put(RESULT_QUERY_FETCH_TIME, String.valueOf(fetchTimeElapsed));
attributesToAdd.put(RESULTSET_INDEX, String.valueOf(resultCount));
{code}

The row count, query duration, fetch time, and index will be what they are as of that flowfile, not for the full result set. Execution time will be for the full set.

> ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0
> ---
>
> Key: NIFI-5828
> URL: https://issues.apache.org/jira/browse/NIFI-5828
> Project: Apache NiFi
> Issue Type: Bug
> Components: Extensions
> Affects Versions: 1.8.0
> Environment: Linux, MSSQL 2016
> Reporter: Colin Dean
> Priority: Major
> Labels: regression
>
> When *Max Rows Per Flow File* ({{esql-max-rows}}) is set greater than 0 to enable it, the {{executesql.row.count}} attribute on the resulting FlowFiles is not the number of rows in the result set but rather the number of rows in the FlowFile.
> This is a deviation from documented behavior, which is "Contains the number of rows returned in the select query".
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5828) ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0
[ https://issues.apache.org/jira/browse/NIFI-5828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689821#comment-16689821 ] Colin Dean commented on NIFI-5828: -- Alternatively, {{executesql.row.count}}'s doc needs to reflect that the row count will not be the row count but rather the number of rows in this FlowFile's Avro content. > ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > > 0 > --- > > Key: NIFI-5828 > URL: https://issues.apache.org/jira/browse/NIFI-5828 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 > Environment: Linux, MSSQL 2016 >Reporter: Colin Dean >Priority: Major > Labels: regression > > When *Max Rows Per Flow File* ({{esql-max-rows}}) is set greater than 0 to > enable it, the {{executesql.row.count}} attribute on the resulting FlowFiles > is not the number of rows in the result set but rather the number of rows in > the FlowFile. > This is a deviation from documented behavior, which is "Contains the number > of rows returned in the select query". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5828) ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0
[ https://issues.apache.org/jira/browse/NIFI-5828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689819#comment-16689819 ] Colin Dean commented on NIFI-5828: -- I've tracked this down. The number of rows put into {{executesql.row.count}} comes from {{nrOfRows}}, an AtomicLong that's created early and set [here|https://github.com/apache/nifi/blob/102a5288efb2a22cd54815dd7331dfc5826aee91/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java#L270]:

{code:java}
nrOfRows.set(sqlWriter.writeResultSet(resultSet, out, getLogger(), null));
{code}

and the attribute is set [here|https://github.com/apache/nifi/blob/102a5288efb2a22cd54815dd7331dfc5826aee91/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java#L280] 10 lines later:

{code:java}
attributesToAdd.put(RESULT_ROW_COUNT, String.valueOf(nrOfRows.get()));
{code}

It's pretty clear that {{SqlWriter#writeResultSet}} returns the number of rows it wrote. Normally, this would be the whole set. The number of max rows is passed to the DefaultAvroSqlWriter [here|https://github.com/apache/nifi/blob/102a5288efb2a22cd54815dd7331dfc5826aee91/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/ExecuteSQL.java#L145]. I think that {{executesql.row.count}} should be in the same boat as the fragment count and added [near the end of onTrigger|https://github.com/apache/nifi/blob/102a5288efb2a22cd54815dd7331dfc5826aee91/nifi-nar-bundles/nifi-standard-bundle/nifi-standard-processors/src/main/java/org/apache/nifi/processors/standard/AbstractExecuteSQL.java#L334].
> ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > > 0 > --- > > Key: NIFI-5828 > URL: https://issues.apache.org/jira/browse/NIFI-5828 > Project: Apache NiFi > Issue Type: Bug > Components: Extensions >Affects Versions: 1.8.0 > Environment: Linux, MSSQL 2016 >Reporter: Colin Dean >Priority: Major > Labels: regression > > When *Max Rows Per Flow File* ({{esql-max-rows}}) is set greater than 0 to > enable it, the {{executesql.row.count}} attribute on the resulting FlowFiles > is not the number of rows in the result set but rather the number of rows in > the FlowFile. > This is a deviation from documented behavior, which is "Contains the number > of rows returned in the select query". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
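The fix proposed above — keep writing per-fragment counts while streaming, then stamp the full result-set total on every fragment once the whole set has been read at the end of onTrigger — can be sketched without any NiFi dependencies. All attribute keys and helper names below are illustrative, not NiFi's actual API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowCountFragments {
    // Splits `totalRows` result-set rows into fragments of at most
    // `maxRowsPerFlowFile` rows (0 means "no limit", i.e. one fragment),
    // mimicking ExecuteSQL's Max Rows Per Flow File behavior.
    static List<Map<String, String>> fragmentAttributes(long totalRows, long maxRowsPerFlowFile) {
        List<Map<String, String>> fragments = new ArrayList<>();
        long remaining = totalRows;
        int index = 0;
        do {
            long rowsInThisFragment = (maxRowsPerFlowFile > 0)
                    ? Math.min(remaining, maxRowsPerFlowFile)
                    : remaining;
            Map<String, String> attrs = new LinkedHashMap<>();
            // The per-fragment count, known while streaming each fragment.
            attrs.put("fragment.row.count", String.valueOf(rowsInThisFragment));
            attrs.put("fragment.index", String.valueOf(index++));
            fragments.add(attrs);
            remaining -= rowsInThisFragment;
        } while (remaining > 0);
        // The documented semantics: the full result-set count, which can only
        // be stamped on all fragments once the whole set has been consumed.
        for (Map<String, String> attrs : fragments) {
            attrs.put("executesql.row.count", String.valueOf(totalRows));
        }
        return fragments;
    }
}
```

The key point of the design is the second loop: the total is unknown until the last fragment has been written, so the attribute has to be added after the fact, just as the fragment count already is.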
[jira] [Updated] (MINIFICPP-679) Improve Const correctness in core, configurable component, and ID
[ https://issues.apache.org/jira/browse/MINIFICPP-679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mr TheSegfault updated MINIFICPP-679: - Summary: Improve Const correctness in core, configurable component, and ID (was: Improve Const correctness in core and ID) > Improve Const correctness in core, configurable component, and ID > - > > Key: MINIFICPP-679 > URL: https://issues.apache.org/jira/browse/MINIFICPP-679 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > Fix For: 0.6.0 > > > Improve Const correctness in core and ID as a way to kick off MINIFICPP-678 . > > This one scares me a tad. anytime we touch base classes , this will have an > impact on release cadence. So this one will require a lot of testing across > platforms, compilers, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (NIFI-5828) ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0
Colin Dean created NIFI-5828: Summary: ExecuteSQL executesql.row.count meaning changes when Max Rows Per Flow File > 0 Key: NIFI-5828 URL: https://issues.apache.org/jira/browse/NIFI-5828 Project: Apache NiFi Issue Type: Bug Components: Extensions Affects Versions: 1.8.0 Environment: Linux, MSSQL 2016 Reporter: Colin Dean When *Max Rows Per Flow File* ({{esql-max-rows}}) is set greater than 0 to enable it, the {{executesql.row.count}} attribute on the resulting FlowFiles is not the number of rows in the result set but rather the number of rows in the FlowFile. This is a deviation from documented behavior, which is "Contains the number of rows returned in the select query". -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C
[ https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mr TheSegfault resolved MINIFICPP-645. -- Resolution: Fixed > Move from new to malloc in CAPI to facilitate eventual change from C++ to C > --- > > Key: MINIFICPP-645 > URL: https://issues.apache.org/jira/browse/MINIFICPP-645 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Arpad Boda >Priority: Blocker > Labels: CAPI, nanofi > Fix For: 0.6.0 > > > As gradually move to C we should move out of libminifi and remove the linter. > Nothing that is returned via the API that is not an opaque pointer should use > new, and conversely nothing that is passed in as a non-opaque pointer should > be deleted versus freed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C
[ https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689765#comment-16689765 ] ASF GitHub Bot commented on MINIFICPP-645: -- Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/439 > Move from new to malloc in CAPI to facilitate eventual change from C++ to C > --- > > Key: MINIFICPP-645 > URL: https://issues.apache.org/jira/browse/MINIFICPP-645 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Arpad Boda >Priority: Blocker > Labels: CAPI, nanofi > Fix For: 0.6.0 > > > As gradually move to C we should move out of libminifi and remove the linter. > Nothing that is returned via the API that is not an opaque pointer should use > new, and conversely nothing that is passed in as a non-opaque pointer should > be deleted versus freed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp pull request #439: MINIFICPP-645 - Move from new to malloc i...
Github user asfgit closed the pull request at: https://github.com/apache/nifi-minifi-cpp/pull/439 ---
[jira] [Created] (MINIFICPP-680) Remove XCode 7.3 from travis builds
Mr TheSegfault created MINIFICPP-680: Summary: Remove XCode 7.3 from travis builds Key: MINIFICPP-680 URL: https://issues.apache.org/jira/browse/MINIFICPP-680 Project: NiFi MiNiFi C++ Issue Type: Improvement Reporter: Mr TheSegfault Per an email exchange with travis support it seems that we may not want to perform 7.3 builds in travis. Not sure we need to, but it seems that the software we use for deps is dropping binary support. "Hey there, Marc! Thanks for writing in today. So it looks like what happened here is Homebrew no longer has a pre-complied binary for that version of Xcode- it looks like it was discontinued due to age. Since it's not finding one, it's trying to compile a new version from the source, which is what's resulting in that timeout. I've gone ahead and increased your log silence timeout to 30 minutes, but even that might not be enough- since this was an upstream change that is unfortunately outside of our control, the only real option here to avoid it trying to compile a new binary from the source would be to switch to a newer version of Xcode. Sorry I"m not able to do more here, but I hope this at least helps. Just let me know if you have any other questions or concerns I can help you with! Best, Grant" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (MINIFICPP-680) Remove XCode 7.3 from travis builds
[ https://issues.apache.org/jira/browse/MINIFICPP-680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mr TheSegfault reassigned MINIFICPP-680: Assignee: Mr TheSegfault > Remove XCode 7.3 from travis builds > --- > > Key: MINIFICPP-680 > URL: https://issues.apache.org/jira/browse/MINIFICPP-680 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Mr TheSegfault >Priority: Major > > > Per an email exchange with travis support it seems that we may not want to > perform 7.3 builds in travis. Not sure we need to, but it seems that the > software we use for deps is dropping binary support. > "Hey there, Marc! Thanks for writing in today. > So it looks like what happened here is Homebrew no longer has a pre-complied > binary for that version of Xcode- it looks like it was discontinued due to > age. Since it's not finding one, it's trying to compile a new version from > the source, which is what's resulting in that timeout. I've gone ahead and > increased your log silence timeout to 30 minutes, but even that might not be > enough- since this was an upstream change that is unfortunately outside of > our control, the only real option here to avoid it trying to compile a new > binary from the source would be to switch to a newer version of Xcode. > Sorry I"m not able to do more here, but I hope this at least helps. Just let > me know if you have any other questions or concerns I can help you with! > Best, > Grant" -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5744) Put exception message to attribute while ExecuteSQL fail
[ https://issues.apache.org/jira/browse/NIFI-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689754#comment-16689754 ] ASF GitHub Bot commented on NIFI-5744: -- Github user yjhyjhyjh0 commented on the issue: https://github.com/apache/nifi/pull/3107 Thanks for the suggestion. Rebase against master, solve conflicts and squash into single commit. > Put exception message to attribute while ExecuteSQL fail > > > Key: NIFI-5744 > URL: https://issues.apache.org/jira/browse/NIFI-5744 > Project: Apache NiFi > Issue Type: Improvement >Affects Versions: 1.7.1 >Reporter: Deon Huang >Assignee: Deon Huang >Priority: Minor > > In some scenario, it would be great if we could have different behavior based > on exception. > Better error tracking afterwards in attribute format instead of tracking in > log. > For example, if it’s connection refused exception due to wrong url. > We won’t want to retry and error message attribute would be helpful to keep > track of. > While it’s other scenario that database temporary unavailable, we should > retry it based on should retry exception. > Should be a quick fix at AbstractExecuteSQL before transfer flowfile to > failure relationship > {code:java} > session.transfer(fileToProcess, REL_FAILURE); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #3107: NIFI-5744: Put exception message to attribute while Execut...
Github user yjhyjhyjh0 commented on the issue: https://github.com/apache/nifi/pull/3107 Thanks for the suggestion. Rebase against master, solve conflicts and squash into single commit. ---
[jira] [Commented] (NIFI-5820) NiFi built with Java 1.8 needs to run on Java 11
[ https://issues.apache.org/jira/browse/NIFI-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689614#comment-16689614 ] ASF GitHub Bot commented on NIFI-5820: -- Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3174 Thanks for taking a look at the PR, @joewitt. That warning is due to accessing, via reflection, the `pid` method on the Process API, which was added in Java 9. The code that does this was added by NIFI-5175, to allow NiFi built on Java 1.8 to run on Java 9. There's a comment in the code detailing why the use of reflection is necessary. Please see https://github.com/apache/nifi/blob/master/nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/util/OSUtils.java#L111. The warning is expected, and when we have a minimum requirement of Java 11, we can refactor OSUtils, or probably remove the class entirely, since the Process API (as of Java 9) provides a platform-independent way to get a PID. I don't think we'll need to have the methods in OSUtils for getting the PID based on the platform on which NiFi is running.

> NiFi built with Java 1.8 needs to run on Java 11
>
> Key: NIFI-5820
> URL: https://issues.apache.org/jira/browse/NIFI-5820
> Project: Apache NiFi
> Issue Type: Sub-task
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi issue #3174: [WIP] NIFI-5820 NiFi built on Java 1.8 can run on Java 9/1...
Github user jtstorck commented on the issue: https://github.com/apache/nifi/pull/3174 Thanks for taking a look at the PR, @joewitt. That warning is due to accessing, via reflection, the `pid` method on the Process API, which was added in Java 9. The code that does this was added by NIFI-5175, to allow NiFi built on Java 1.8 to run on Java 9. There's a comment in the code detailing why the use of reflection is necessary. Please see https://github.com/apache/nifi/blob/master/nifi-bootstrap/src/main/java/org/apache/nifi/bootstrap/util/OSUtils.java#L111. The warning is expected, and when we have a minimum requirement of Java 11, we can refactor OSUtils, or probably remove the class entirely, since the Process API (as of Java 9) provides a platform-independent way to get a PID. I don't think we'll need to have the methods in OSUtils for getting the PID based on the platform on which NiFi is running. ---
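A minimal sketch of the reflection trick being discussed: reflect on `Process.pid()` (added in Java 9) so the code still compiles on Java 1.8 and degrades gracefully on older JVMs. Note this sketch reflects on the public `Process` class, whereas NiFi's OSUtils reflects on the non-public `ProcessImpl`, which is what trips the illegal-reflective-access warning shown in the startup log; the `sleep` command here assumes a POSIX system.

```java
import java.lang.reflect.Method;

public class PidProbe {
    // Returns the child's PID via reflection so this compiles on Java 1.8,
    // where Process.pid() does not exist; returns -1 on a pre-Java-9 JVM.
    static long pidOf(Process process) {
        try {
            Method pid = Process.class.getMethod("pid"); // added in Java 9
            return (Long) pid.invoke(process);
        } catch (ReflectiveOperationException e) {
            return -1L; // pre-Java-9 runtime, or access denied
        }
    }

    public static void main(String[] args) throws Exception {
        Process p = new ProcessBuilder("sleep", "1").start();
        System.out.println("child pid: " + pidOf(p));
        p.destroyForcibly();
    }
}
```

Because the lookup goes through the public `Process` type rather than the concrete `ProcessImpl`, no `setAccessible` call is needed and no illegal-access warning is emitted on Java 9+.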
[jira] [Updated] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C
[ https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mr TheSegfault updated MINIFICPP-645: - Description: As gradually move to C we should move out of libminifi and remove the linter. Nothing that is returned via the API that is not an opaque pointer should use new, and conversely nothing that is passed in as a non-opaque pointer should be deleted versus freed was: As gradually move to C we should move out of libminifi and remove the linter. Nothing that is returned via the API that is not an opaque pointer should use new > Move from new to malloc in CAPI to facilitate eventual change from C++ to C > --- > > Key: MINIFICPP-645 > URL: https://issues.apache.org/jira/browse/MINIFICPP-645 > Project: NiFi MiNiFi C++ > Issue Type: Improvement >Reporter: Mr TheSegfault >Assignee: Arpad Boda >Priority: Blocker > Labels: CAPI, nanofi > Fix For: 0.6.0 > > > As gradually move to C we should move out of libminifi and remove the linter. > Nothing that is returned via the API that is not an opaque pointer should use > new, and conversely nothing that is passed in as a non-opaque pointer should > be deleted versus freed -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C
[ https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689490#comment-16689490 ] ASF GitHub Bot commented on MINIFICPP-645: -- Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/439

> There is no delete, it was wrong before, this PR just fixes:
>
> ```
> void free_flowfile(flow_file_record *ff) {
>   if (ff == nullptr) {
>     return;
>   }
>   auto content_repo_ptr = static_cast*>(ff->crp);
>   if (content_repo_ptr->get()) {
>     std::shared_ptr claim = std::make_shared(ff->contentLocation, *content_repo_ptr);
>     (*content_repo_ptr)->remove(claim);
>   }
>   if (ff->ffp == nullptr) {
>     auto map = static_cast(ff->attributes);
>     delete map;
>   }
>   free(ff->contentLocation);
>   free(ff);
> ```
> The last line is the one that frees.

Ah sorry, I was referencing the fact that over the course of PRs we've gone back and forth a little between malloc/new. There is a free_flow(flow *) that still uses delete. Happy to see a different PR if you prefer to do that, but it all falls under the guise of this ticket IMO. Would you prefer I merge this and then keep the ticket open as a blocker for the free? No real preference on my part.

> Move from new to malloc in CAPI to facilitate eventual change from C++ to C
> ---
>
> Key: MINIFICPP-645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-645
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Mr TheSegfault
> Assignee: Arpad Boda
> Priority: Blocker
> Labels: CAPI, nanofi
> Fix For: 0.6.0
>
> As gradually move to C we should move out of libminifi and remove the linter. Nothing that is returned via the API that is not an opaque pointer should use new, and conversely nothing that is passed in as a non-opaque pointer should be deleted versus freed
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi-minifi-cpp issue #439: MINIFICPP-645 - Move from new to malloc in CAPI ...
Github user phrocker commented on the issue: https://github.com/apache/nifi-minifi-cpp/pull/439

> There is no delete, it was wrong before, this PR just fixes:
>
> ```
> void free_flowfile(flow_file_record *ff) {
>   if (ff == nullptr) {
>     return;
>   }
>   auto content_repo_ptr = static_cast*>(ff->crp);
>   if (content_repo_ptr->get()) {
>     std::shared_ptr claim = std::make_shared(ff->contentLocation, *content_repo_ptr);
>     (*content_repo_ptr)->remove(claim);
>   }
>   if (ff->ffp == nullptr) {
>     auto map = static_cast(ff->attributes);
>     delete map;
>   }
>   free(ff->contentLocation);
>   free(ff);
> ```
> The last line is the one that frees.

Ah sorry, I was referencing the fact that over the course of PRs we've gone back and forth a little between malloc/new. There is a free_flow(flow *) that still uses delete. Happy to see a different PR if you prefer to do that, but it all falls under the guise of this ticket IMO. Would you prefer I merge this and then keep the ticket open as a blocker for the free? No real preference on my part.
---
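The rule the ticket describes — anything handed across the C API as a non-opaque pointer must be allocated with malloc and released with free, never new/delete — can be sketched with a simplified stand-in for the flow_file_record struct. Field and function names here are illustrative only; the real definitions live in the nanofi/CAPI headers:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for the C API's flow_file_record (illustration only). */
typedef struct {
    char *contentLocation; /* heap string owned by the record */
    void *attributes;      /* opaque payload owned elsewhere */
} flow_file_record;

/* Everything reachable from a non-opaque pointer is allocated with malloc... */
flow_file_record *create_flowfile(const char *location) {
    flow_file_record *ff = (flow_file_record *)malloc(sizeof(*ff));
    if (ff == NULL) {
        return NULL;
    }
    ff->contentLocation = (char *)malloc(strlen(location) + 1);
    if (ff->contentLocation == NULL) {
        free(ff);
        return NULL;
    }
    strcpy(ff->contentLocation, location);
    ff->attributes = NULL;
    return ff;
}

/* ...so a plain C caller can release it with free(), mirroring free_flowfile(). */
void destroy_flowfile(flow_file_record *ff) {
    if (ff == NULL) {
        return;
    }
    free(ff->contentLocation);
    free(ff);
}
```

Mixing allocators is undefined behavior in C++ and a hard error for a pure C consumer, which is why the remaining delete in free_flow(flow *) matters.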
[GitHub] nifi issue #3174: [WIP] NIFI-5820 NiFi built on Java 1.8 can run on Java 9/1...
Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/3174

it built with java 8. switched to 11. nifi starts up and appears to be working great. I did notice this on startup

nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /Users/joe/development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT
Bootstrap Config File: /../development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT/conf/bootstrap.conf

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.nifi.bootstrap.util.OSUtils (file:/../development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT/lib/bootstrap/nifi-bootstrap-1.9.0-SNAPSHOT.jar) to method java.lang.ProcessImpl.pid()
WARNING: Please consider reporting this to the maintainers of org.apache.nifi.bootstrap.util.OSUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
---
[jira] [Commented] (NIFI-5820) NiFi built with Java 1.8 needs to run on Java 11
[ https://issues.apache.org/jira/browse/NIFI-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689402#comment-16689402 ] ASF GitHub Bot commented on NIFI-5820: -- Github user joewitt commented on the issue: https://github.com/apache/nifi/pull/3174

it built with java 8. switched to 11. nifi starts up and appears to be working great. I did notice this on startup

nifi.sh: JAVA_HOME not set; results may vary
Java home:
NiFi home: /Users/joe/development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT
Bootstrap Config File: /../development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT/conf/bootstrap.conf

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.nifi.bootstrap.util.OSUtils (file:/../development/nifi.git/nifi-assembly/target/nifi-1.9.0-SNAPSHOT-bin/nifi-1.9.0-SNAPSHOT/lib/bootstrap/nifi-bootstrap-1.9.0-SNAPSHOT.jar) to method java.lang.ProcessImpl.pid()
WARNING: Please consider reporting this to the maintainers of org.apache.nifi.bootstrap.util.OSUtils
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

> NiFi built with Java 1.8 needs to run on Java 11
>
> Key: NIFI-5820
> URL: https://issues.apache.org/jira/browse/NIFI-5820
> Project: Apache NiFi
> Issue Type: Sub-task
> Reporter: Jeff Storck
> Assignee: Jeff Storck
> Priority: Major
>
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (NIFI-5662) AvroTypeUtil Decimal support using Fixed Error
[ https://issues.apache.org/jira/browse/NIFI-5662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689387#comment-16689387 ] ASF GitHub Bot commented on NIFI-5662: -- GitHub user gideonkorir opened a pull request: https://github.com/apache/nifi/pull/3175 Support for generic fixed when using decimal logical type Fix for [Avro decimal conversion](https://jira.apache.org/jira/browse/NIFI-5662) You can merge this pull request into a Git repository by running: $ git pull https://github.com/gideonkorir/nifi nifi-5662 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3175.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3175 commit b52a3a240fcfbbec288deb37a0ef43acd79e42d2 Author: gkkorir Date: 2018-11-16T12:49:24Z Support for generic fixed when using decimal logical type

> AvroTypeUtil Decimal support using Fixed Error
> --
>
> Key: NIFI-5662
> URL: https://issues.apache.org/jira/browse/NIFI-5662
> Project: Apache NiFi
> Issue Type: Improvement
> Components: Core Framework
> Affects Versions: 1.7.1
> Environment: RHEL 7.5
> JDK 1.8.182
> Reporter: Gideon Korir
> Priority: Major
>
> When the decimal is specified as fixed in the Avro schema, AvroTypeUtils converts the decimal into a ByteBuffer instead of a GenericFixed.
> The code:
> {code:java}
> return new Conversions.DecimalConversion().toBytes(decimal, fieldSchema, logicalType)
> {code}
> Should be:
> {code:java}
> return fieldSchema.getType() == Type.BYTES
>     ? new Conversions.DecimalConversion().toBytes(decimal, fieldSchema, logicalType)
>     : new Conversions.DecimalConversion().toFixed(decimal, fieldSchema, logicalType);
> {code}
> The former causes the AvroRecordSetWriter to fail with the error:
> _org.apache.avro.file.DataFileWriter$AppendWriteException: java.lang.ClassCastException: java.nio.HeapByteBuffer cannot be cast to org.apache.avro.generic.GenericFixed_
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] nifi pull request #3175: Support for generic fixed when using decimal logica...
GitHub user gideonkorir opened a pull request: https://github.com/apache/nifi/pull/3175 Support for generic fixed when using decimal logical type Fix for [Avro decimal conversion](https://jira.apache.org/jira/browse/NIFI-5662) You can merge this pull request into a Git repository by running: $ git pull https://github.com/gideonkorir/nifi nifi-5662 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/nifi/pull/3175.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #3175 commit b52a3a240fcfbbec288deb37a0ef43acd79e42d2 Author: gkkorir Date: 2018-11-16T12:49:24Z Support for generic fixed when using decimal logical type ---
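For context on why `toFixed` differs from `toBytes`: a decimal backed by an Avro `fixed` type must be encoded as the unscaled value, sign-extended to exactly the fixed size, in big-endian two's complement. The following self-contained sketch reproduces that byte layout using only the JDK; it is not the Avro API itself, and real code should call `Conversions.DecimalConversion#toFixed` as the patch does:

```java
import java.math.BigDecimal;
import java.util.Arrays;

public class DecimalToFixed {
    // Encodes the unscaled value of `decimal` (at the given scale) as a
    // sign-extended, big-endian two's-complement array of exactly `size`
    // bytes -- the layout Avro expects for a decimal stored in a fixed type.
    static byte[] toFixedBytes(BigDecimal decimal, int scale, int size) {
        byte[] unscaled = decimal.setScale(scale).unscaledValue().toByteArray();
        if (unscaled.length > size) {
            throw new IllegalArgumentException("value does not fit in " + size + " bytes");
        }
        byte[] out = new byte[size];
        // Sign-extend: fill the leading bytes with 0x00 (positive) or 0xFF (negative).
        byte pad = (byte) (unscaled[0] < 0 ? 0xFF : 0x00);
        Arrays.fill(out, 0, size - unscaled.length, pad);
        System.arraycopy(unscaled, 0, out, size - unscaled.length, unscaled.length);
        return out;
    }
}
```

This is also why the ClassCastException appears only for fixed-backed decimals: `toBytes` yields a variable-length ByteBuffer, while the writer expects a GenericFixed wrapping a buffer of exactly the declared size.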
[jira] [Created] (NIFI-5827) CaptureChangeMySQL fails when table has been dropped before log processing
Fabien Sarcel created NIFI-5827:
---
Summary: CaptureChangeMySQL fails when table has been dropped before log processing
Key: NIFI-5827
URL: https://issues.apache.org/jira/browse/NIFI-5827
Project: Apache NiFi
Issue Type: Bug
Affects Versions: 1.8.0
Environment: MySQL 5.7.16
Reporter: Fabien Sarcel
Attachments: nifi-app.log

If I create a MySQL table, then drop it and launch a new flow with a CaptureChangeMySQL processor using the "Distributed Map Cache Client" parameter, the flow runs into an error. I think CaptureChangeMySQL is executing the "loadTableInfo" function for a table that is no longer in the database, and this causes a NiFi error. Worse, after this error the flow loses the database connection and keeps creating new connections, which can disturb the MySQL database.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Commented] (MINIFICPP-676) Cleanup and fix serializable interface implementation
[ https://issues.apache.org/jira/browse/MINIFICPP-676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689313#comment-16689313 ] ASF GitHub Bot commented on MINIFICPP-676:
--
Github user arpadboda commented on a diff in the pull request:

    https://github.com/apache/nifi-minifi-cpp/pull/440#discussion_r234176325

--- Diff: libminifi/include/io/Serializable.h ---
@@ -22,11 +22,36 @@
 #include
 #include "EndianCheck.h"
 #include "DataStream.h"
+
+namespace {
+ template

> Cleanup and fix serializable interface implementation
> -----------------------------------------------------
>
> Key: MINIFICPP-676
> URL: https://issues.apache.org/jira/browse/MINIFICPP-676
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Arpad Boda
> Assignee: Arpad Boda
> Priority: Minor
>
> The Serializable interface contains a couple of issues:
> * Type-unsafe template functions
> * Code duplication
> * Needless functions
> The goal of this ticket is to make the interface cleaner and simpler.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
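One way to address the "type-unsafe template functions" and "code duplication" the ticket mentions is to constrain a single templated writer to integral types, so one function replaces per-width duplicates. This is only a sketch of that idea (a hypothetical `write_be` helper, not the actual Serializable.h code):

```cpp
#include <cstdint>
#include <type_traits>
#include <vector>

// One templated big-endian writer instead of duplicated write8/write16/...
// functions. The enable_if constraint rejects non-integral types at compile
// time, making the interface type-safe.
template <typename T,
          typename = std::enable_if_t<std::is_integral<T>::value>>
void write_be(std::vector<uint8_t>& out, T value) {
  // Emit bytes most-significant first.
  for (int shift = (sizeof(T) - 1) * 8; shift >= 0; shift -= 8) {
    out.push_back(static_cast<uint8_t>((value >> shift) & 0xFF));
  }
}
```

Calling `write_be<uint16_t>(out, 0x1234)` appends `0x12` then `0x34`; passing a non-integral type fails to compile rather than silently serializing garbage.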
[jira] [Commented] (MINIFICPP-645) Move from new to malloc in CAPI to facilitate eventual change from C++ to C
[ https://issues.apache.org/jira/browse/MINIFICPP-645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16689301#comment-16689301 ] ASF GitHub Bot commented on MINIFICPP-645:
--
Github user arpadboda commented on the issue:

    https://github.com/apache/nifi-minifi-cpp/pull/439

There is no delete; it was wrong before:
```
void free_flowfile(flow_file_record *ff) {
  if (ff == nullptr) {
    return;
  }
  auto content_repo_ptr = static_cast*>(ff->crp);
  if (content_repo_ptr->get()) {
    std::shared_ptr claim = std::make_shared(ff->contentLocation, *content_repo_ptr);
    (*content_repo_ptr)->remove(claim);
  }
  if (ff->ffp == nullptr) {
    auto map = static_cast(ff->attributes);
    delete map;
  }
  free(ff->contentLocation);
  free(ff);
}
```
The last line is the one that frees.

> Move from new to malloc in CAPI to facilitate eventual change from C++ to C
> ---------------------------------------------------------------------------
>
> Key: MINIFICPP-645
> URL: https://issues.apache.org/jira/browse/MINIFICPP-645
> Project: NiFi MiNiFi C++
> Issue Type: Improvement
> Reporter: Mr TheSegfault
> Assignee: Arpad Boda
> Priority: Blocker
> Labels: CAPI, nanofi
> Fix For: 0.6.0
>
> As we gradually move to C, we should move out of libminifi and remove the linter.
> Nothing returned via the API, other than an opaque pointer, should use new

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
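The rule behind the ticket is that memory handed across a C API must be releasable with `free()`, so it must be allocated with `malloc()` and never with `new` (mixing the two allocator pairs is undefined behavior). A minimal sketch of a matched allocate/release pair under that assumption (hypothetical `create_ff`/`free_ff` names, simplified from the real `flow_file_record`):

```cpp
#include <cstdlib>
#include <cstring>
#include <cassert>

// Hypothetical C-API record: every field a C caller can see is malloc'd,
// so a plain C program can release it without C++ runtime support.
typedef struct flow_file_record {
  char* contentLocation;
} flow_file_record;

flow_file_record* create_ff(const char* path) {
  auto* ff = static_cast<flow_file_record*>(malloc(sizeof(flow_file_record)));
  if (ff == nullptr) return nullptr;
  ff->contentLocation = static_cast<char*>(malloc(strlen(path) + 1));
  strcpy(ff->contentLocation, path);
  return ff;
}

void free_ff(flow_file_record* ff) {
  if (ff == nullptr) return;
  free(ff->contentLocation);  // pairs with malloc above; delete would be UB
  free(ff);
}
```

Internal C++-only state (like the attribute map in the snippet above, which is still `new`'d) keeps `delete`, which is why `free_flowfile` mixes `delete map` with `free(ff)`: each release matches its own allocation.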