[jira] [Commented] (NIFI-11791) PutBigQuery processor lacks functionality found in PutBigQueryBatch

2023-07-12 Thread Marcio Sugar (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17742514#comment-17742514
 ] 

Marcio Sugar commented on NIFI-11791:
-

I agree. It might also be a good idea to check if PutBigQueryStreaming is in a 
similar situation.

> PutBigQuery processor lacks functionality found in PutBigQueryBatch
> ---
>
> Key: NIFI-11791
> URL: https://issues.apache.org/jira/browse/NIFI-11791
> Project: Apache NiFi
>  Issue Type: Improvement
>  Components: Extensions
>Affects Versions: 2.0.0, 1.22.0
>Reporter: Marcio Sugar
>Priority: Major
>
> Before PutBigQuery, we had PutBigQueryBatch and PutBigQueryStreaming, both now 
> deprecated. I'm not sure whether PutBigQuery was designed to completely replace 
> its older brothers, but it cannot do that yet because of some missing features. 
> For example, we can't use PutBigQuery alone to create snapshot tables, 
> something that was easy to do with PutBigQueryBatch. 
> A snapshot table is a recent copy of a table (or of a subset of its 
> rows/columns) from a database. It is used to dynamically replicate data between 
> distributed databases. Using PutBigQueryBatch, we can achieve that by setting 
> the following properties:
>  * Create Disposition = CREATE_IF_NEEDED
>  * Write Disposition = WRITE_TRUNCATE
> I understand that PutBigQuery uses the newer [BigQuery Storage Write 
> API|https://cloud.google.com/bigquery/docs/write-api], so adding the missing 
> functionality might not be possible. 
> But please note the older BigQuery (core) API (the one I believe 
> PutBigQueryBatch uses) allows the user to submit jobs to load data into 
> BigQuery in a very convenient way. That is something I'd like to see 
> preserved in future versions of NiFi.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-11791) PutBigQuery processor lacks functionality found in PutBigQueryBatch

2023-07-10 Thread Marcio Sugar (Jira)
Marcio Sugar created NIFI-11791:
---

 Summary: PutBigQuery processor lacks functionality found in 
PutBigQueryBatch
 Key: NIFI-11791
 URL: https://issues.apache.org/jira/browse/NIFI-11791
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.22.0, 2.0.0
Reporter: Marcio Sugar


Before PutBigQuery, we had PutBigQueryBatch and PutBigQueryStreaming, both now 
deprecated. I'm not sure whether PutBigQuery was designed to completely replace 
its older brothers, but it cannot do that yet because of some missing features. 
For example, we can't use PutBigQuery alone to create snapshot tables, something 
that was easy to do with PutBigQueryBatch. 

A snapshot table is a recent copy of a table (or of a subset of its 
rows/columns) from a database. It is used to dynamically replicate data between 
distributed databases. Using PutBigQueryBatch, we can achieve that by setting 
the following properties:
 * Create Disposition = CREATE_IF_NEEDED
 * Write Disposition = WRITE_TRUNCATE

I understand that PutBigQuery uses the newer [BigQuery Storage Write 
API|https://cloud.google.com/bigquery/docs/write-api], so adding the missing 
functionality might not be possible. 

But please note the older BigQuery (core) API (the one I believe 
PutBigQueryBatch uses) allows the user to submit jobs to load data into 
BigQuery in a very convenient way. That is something I'd like to see preserved 
in future versions of NiFi.
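For reference, here is a minimal sketch of the kind of load job that can produce such a snapshot table, assuming the standard google-cloud-bigquery Java client; the dataset, table and bucket names are illustrative. It uses the same two dispositions listed above:

{noformat}
import com.google.cloud.bigquery.*;

public class SnapshotLoadSketch {
    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Load a CSV file from GCS into mydataset.snapshot_table, creating the
        // table if needed and truncating any previous contents, i.e. the
        // CREATE_IF_NEEDED + WRITE_TRUNCATE combination described above.
        LoadJobConfiguration config = LoadJobConfiguration
                .newBuilder(TableId.of("mydataset", "snapshot_table"),
                        "gs://my-bucket/export.csv")   // illustrative names
                .setFormatOptions(FormatOptions.csv())
                .setCreateDisposition(JobInfo.CreateDisposition.CREATE_IF_NEEDED)
                .setWriteDisposition(JobInfo.WriteDisposition.WRITE_TRUNCATE)
                .build();

        Job job = bigquery.create(JobInfo.of(config)).waitFor();
        if (job.getStatus().getError() != null) {
            throw new RuntimeException(job.getStatus().getError().toString());
        }
    }
}
{noformat}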



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (NIFI-10712) External Account Credentials (Workload Identity Federation) support for GCP credential controller service

2022-10-27 Thread Marcio Sugar (Jira)
Marcio Sugar created NIFI-10712:
---

 Summary: External Account Credentials (Workload Identity 
Federation) support for GCP credential controller service
 Key: NIFI-10712
 URL: https://issues.apache.org/jira/browse/NIFI-10712
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Reporter: Marcio Sugar


So far with NiFi (1.18.0 is the latest release at the time of writing), we have 
been able to use only [service account 
keys|https://cloud.google.com/iam/docs/service-accounts#service_account_keys] 
as credentials when setting up a GCPCredentialsControllerService. 

Unfortunately, service account keys are powerful credentials, and can represent 
a security risk if they are not managed correctly.

To avoid this security risk, organizations that use Google Cloud are 
starting to move away from sharing service account keys with vendors and other 
external parties, and demanding that [Workload Identity 
Federation|https://cloud.google.com/iam/docs/using-workload-identity-federation]
 be used instead.

Using Workload Identity Federation, one can access Google Cloud resources from 
Amazon Web Services (AWS), Microsoft Azure or any identity provider that 
supports OpenID Connect (OIDC) or SAML 2.0.

The goal of this improvement is to allow all GCP processors in NiFi to work 
with Workload Identity Federation. That will most likely require changes to the 
{{GCPCredentialsControllerService}}, or maybe even the creation of a new, 
more specialized credentials controller service. 
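As a rough sketch of what such a service would do under the hood (assuming the google-auth-library-java that the GCP SDK already pulls in; the file path and scope are illustrative), recent versions of the library can load a Workload Identity Federation credential configuration file through the same entry point used for service account key files:

{noformat}
import com.google.auth.oauth2.GoogleCredentials;
import java.io.FileInputStream;
import java.util.Collections;

public class WifCredentialsSketch {
    public static void main(String[] args) throws Exception {
        // Illustrative path to a credential configuration file generated with
        // "gcloud iam workload-identity-pools create-cred-config ..."
        String configPath = "/opt/nifi/conf/wif-credential-config.json";

        try (FileInputStream in = new FileInputStream(configPath)) {
            // fromStream() inspects the "type" field of the JSON and returns
            // external account credentials for a Workload Identity Federation
            // configuration, just as it returns service account credentials
            // for a service account key file.
            GoogleCredentials credentials = GoogleCredentials.fromStream(in)
                    .createScoped(Collections.singletonList(
                            "https://www.googleapis.com/auth/cloud-platform"));
            credentials.refresh();
            System.out.println("Access token: "
                    + credentials.getAccessToken().getTokenValue());
        }
    }
}
{noformat}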

Note there is another ticket open for a similar improvement: NIFI-8332, 
although that one doesn't mention Workload Identity Federation, so they might 
not overlap entirely.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (NIFI-6879) Variable Update Error when trying to change outside variable used inside a Process Group

2020-01-23 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Status: Resolved  (was: Closed)

> Variable Update Error when trying to change outside variable used inside a 
> Process Group
> 
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.0.
> Whenever I try to change the value of a variable that's defined outside the 
> Process Group where it's used, NiFi fails during the "Applying Updates" step 
> with the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}", 
> and hit "Apply". NiFi starts the steps to make the change, but fails during 
> the "Applying Updates" step with the above error message. In the log, the 
> following error message appears:
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  
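The IllegalStateException above reflects the guard NiFi applies: a variable cannot be changed while components that reference it are running. A minimal sketch of that kind of precondition check (hypothetical, not NiFi's actual code):

{noformat}
public class VariableUpdateGuardSketch {

    // A variable may only be updated once every component that references it
    // has been stopped; otherwise the update is rejected.
    static void verifyCanUpdate(String name, int runningReferencingComponents) {
        if (runningReferencingComponents > 0) {
            throw new IllegalStateException("Cannot update variable '" + name
                    + "' because it is referenced by " + runningReferencingComponents
                    + " components that are currently running.");
        }
    }

    public static void main(String[] args) {
        verifyCanUpdate("myvar", 1); // mirrors the failing case in the steps above
    }
}
{noformat}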



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (NIFI-6879) Variable Update Error when trying to change outside variable used inside a Process Group

2020-01-23 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar closed NIFI-6879.
--

Thank you!

> Variable Update Error when trying to change outside variable used inside a 
> Process Group
> 
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Assignee: Mark Payne
>Priority: Major
> Fix For: 1.11.0
>
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.0.
> Whenever I try to change the value of a variable that's defined outside the 
> Process Group where it's used, NiFi fails during the "Applying Updates" step 
> with the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}", 
> and hit "Apply". NiFi starts the steps to make the change, but fails during 
> the "Applying Updates" step with the above error message. In the log, the 
> following error message appears:
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-23 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar resolved NIFI-7061.

Resolution: Duplicate

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04, Java 1.8.0_201
>Reporter: Marcio Sugar
>Priority: Critical
> Fix For: 1.11.0
>
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I found the bug:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> except the 2nd, which is set to null. So the {{keyStorePasswords}} argument is 
> always null.
> - In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from the {{keyStorePasswords}} 
> Supplier; since that Supplier is null in this case, an NPE is thrown.
> - The NPE is not caught by any of the above-mentioned methods, so no 
> {{CommandLineParseException}} is ever created and the NPE is eventually wrapped 
> in an {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.
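To make the failure chain described above concrete, here is a minimal standalone sketch (not the toolkit's actual code; the class and method names are invented) of the two problems: calling {{get()}} on a Supplier reference that is null, and handler code that dereferences a cause that may itself be null:

{noformat}
import java.util.function.Supplier;

public class NpeChainSketch {

    // Stand-in for createDefinition(): the passwords Supplier is null in the
    // failing case, so calling get() on it throws the first NPE.
    static String firstPassword(Supplier<String> keyStorePasswords) {
        return keyStorePasswords.get();
    }

    public static void main(String[] args) {
        try {
            firstPassword(null);
        } catch (Exception e) {
            // A bare NPE has no cause, so e.getCause() returns null and the
            // getMessage() call below throws a second NPE, mirroring the last
            // bullet above: not all exceptions have a cause.
            System.out.println("Service client error: " + e.getCause().getMessage());
        }
    }
}
{noformat}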



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-23 Thread Marcio Sugar (Jira)


[ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17022686#comment-17022686
 ] 

Marcio Sugar commented on NIFI-7061:


The problem was fixed in version 1.11.0, released today. 

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04, Java 1.8.0_201
>Reporter: Marcio Sugar
>Priority: Critical
> Fix For: 1.11.0
>
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I found the bug:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
> -  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
> Supplier that doesn't exist in this case, causing the NPE to be thrown.  
> - The NPE is not caught by any of the above mentioned methods, so no 
> CommandLineParseException is ever created and it's eventually wrapped in an 
> {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-23 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Fix Version/s: 1.11.0
 Priority: Critical  (was: Major)

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04, Java 1.8.0_201
>Reporter: Marcio Sugar
>Priority: Critical
> Fix For: 1.11.0
>
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I found the bug:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
> -  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
> Supplier that doesn't exist in this case, causing the NPE to be thrown.  
> - The NPE is not caught by any of the above mentioned methods, so no 
> CommandLineParseException is ever created and it's eventually wrapped in an 
> {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Description: 
Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
{noformat}

*Update:*

I found the bug:
- In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
except the 2nd, which is set to null. So the {{keyStorePasswords}} argument is 
always null.
- In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
{{createDefinition}}, which tries to get a value from the {{keyStorePasswords}} 
Supplier; since that Supplier is null in this case, an NPE is thrown.
- The NPE is not caught by any of the above-mentioned methods, so no 
{{CommandLineParseException}} is ever created and the NPE is eventually wrapped 
in an {{InvocationTargetException}}.
- In the method {{doMain(String[])}}, the code after 
{{catch(InvocationTargetException)}} throws another NPE because 
{{e.getCause()}} returns null. Not all exceptions have a cause.

  was:
Running the TLS Tookit 1.10.0 client with the {{–subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{–subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 

[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Description: 
Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
{noformat}

*Update:*

I've tried to debug {{TlsToolkitMain}} and found out that:
- In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
except the 2nd, which is set to null. So the {{keyStorePasswords}} argument is 
always null.
- In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
{{createDefinition}}, which tries to get a value from the {{keyStorePasswords}} 
Supplier; since that Supplier is null in this case, an NPE is thrown.
- The NPE is not caught by any of the above-mentioned methods, so no 
{{CommandLineParseException}} is ever created and the NPE is eventually wrapped 
in an {{InvocationTargetException}}.
- In the method {{doMain(String[])}}, the code after 
{{catch(InvocationTargetException)}} throws another NPE because 
{{e.getCause()}} returns null. Not all exceptions have a cause.

  was:
Running the TLS Tookit 1.10.0 client with the {{–subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{–subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ 

[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Environment: Ubuntu 16.04, Java 1.8.0_201  (was: Ubuntu 16.04, Java 1.8)

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04, Java 1.8.0_201
>Reporter: Marcio Sugar
>Priority: Major
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I've tried to debug {{TlsToolkitMain}} and found out that:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
> -  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
> Supplier that doesn't exist in this case, causing the NPE to be thrown.  
> - The NPE is not caught by any of the above mentioned methods, so no 
> CommandLineParseException is ever created and it's eventually wrapped in an 
> {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Environment: Ubuntu 16.04, Java 1.8  (was: Ubuntu 16.04)

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04, Java 1.8
>Reporter: Marcio Sugar
>Priority: Major
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I've tried to debug {{TlsToolkitMain}} and found out that:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
> -  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
> Supplier that doesn't exist in this case, causing the NPE to be thrown.  
> - The NPE is not caught by any of the above mentioned methods, so no 
> CommandLineParseException is ever created and it's eventually wrapped in an 
> {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7061) TLS Toolkit in client mode errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Summary: TLS Toolkit in client mode errors out when 
--subjectAlternativeNames option is set  (was: TLS Toolkit errors out when 
--subjectAlternativeNames option is set)

> TLS Toolkit in client mode errors out when --subjectAlternativeNames option 
> is set
> --
>
> Key: NIFI-7061
> URL: https://issues.apache.org/jira/browse/NIFI-7061
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Tools and Build
>Affects Versions: 1.10.0
> Environment: Ubuntu 16.04
>Reporter: Marcio Sugar
>Priority: Major
>
> Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
> option set gives an error:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
> --subjectAlternativeNames nifi.mydomain.com
> Service client error: null
> Usage: tls-toolkit service [-h] [args]
> Services:
>standalone: Creates certificates and config files for nifi cluster.
>server: Acts as a Certificate Authority that can be used by clients to get 
> Certificates
>client: Generates a private key and gets it signed by the certificate 
> authority.
>status: Checks the status of an HTTPS endpoint by making a GET request 
> using a supplied keystore and truststore.
> {noformat}
> But the same command works fine with the TLS Toolkit 1.7.1 client:
> {noformat}
> $ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> --subjectAlternativeNames nifi.mydomain.com
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:16:57 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:16:58 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client 
> runs with no issues:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
> nifi.mydomain.com
> 2020/01/22 13:22:47 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
> Requesting new certificate from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
> 2020/01/22 13:22:48 INFO [main] 
> org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
>  Got certificate with dn CN=msugar, OU=NIFI
> {noformat}
> Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
> the same machine (msugar) as the clients:
> {noformat}
> $ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
> {noformat}
> *Update:*
> I've tried to debug {{TlsToolkitMain}} and found out that:
> - In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
> method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
> but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
> -  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
> {{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
> Supplier that doesn't exist in this case, causing the NPE to be thrown.  
> - The NPE is not caught by any of the above mentioned methods, so no 
> CommandLineParseException is ever created and it's eventually wrapped in an 
> {{InvocationTargetException}}.
> - In the method {{doMain(String[])}}, the code after 
> {{catch(InvocationTargetException)}} throws another NPE because 
> {{e.getCause()}} returns null. Not all exceptions have a cause.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-7061) TLS Toolkit errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Description: 
Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
{noformat}

*Update:*

I've tried to debug {{TlsToolkitMain}} and found out that:
- In the {{TlsCertificateAuthorityClientCommandLine}} class, the {{doParse}} 
method calls {{InstanceDefinition.createDefinitions}} with all the arguments 
but the 2nd set to null. So the {{keyStorePasswords}} argument is always null.
-  In the {{InstanceDefinition}} class, {{createDefinitions}} then calls 
{{createDefinition}}, which tries to get a value from a {{keyStorePasswords}} 
Supplier that doesn't exist in this case, causing the NPE to be thrown.  
- The NPE is not caught by any of the above mentioned methods, so no 
CommandLineParseException is ever created and it's eventually wrapped in an 
{{InvocationTargetException}}.
- In the method {{doMain(String[])}}, the code after 
{{catch(InvocationTargetException)}} throws another NPE because 
{{e.getCause()}} returns null. Not all exceptions have a cause.

  was:
Running the TLS Tookit 1.10.0 client with the {{–subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{–subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ 

[jira] [Updated] (NIFI-7061) TLS Toolkit errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-7061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-7061:
---
Description: 
Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames nifi.mydomain.com
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -t 0123456789abcdef -p 1
{noformat}


  was:
Running the TLS Tookit 1.10.0 client with the {{–subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames "nifi.mydomain.com"
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{–subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Tookit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -0123456789abcdef -p 1
{noformat}



> TLS Toolkit errors out when --subjectAlternativeNames option is set
> 

[jira] [Created] (NIFI-7061) TLS Toolkit errors out when --subjectAlternativeNames option is set

2020-01-22 Thread Marcio Sugar (Jira)
Marcio Sugar created NIFI-7061:
--

 Summary: TLS Toolkit errors out when --subjectAlternativeNames 
option is set
 Key: NIFI-7061
 URL: https://issues.apache.org/jira/browse/NIFI-7061
 Project: Apache NiFi
  Issue Type: Bug
  Components: Tools and Build
Affects Versions: 1.10.0
 Environment: Ubuntu 16.04
Reporter: Marcio Sugar


Running the TLS Toolkit 1.10.0 client with the {{--subjectAlternativeNames}} 
option set gives an error:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client  -t 0123456789abcdef -p 1 
--subjectAlternativeNames "nifi.mydomain.com"
Service client error: null

Usage: tls-toolkit service [-h] [args]

Services:
   standalone: Creates certificates and config files for nifi cluster.
   server: Acts as a Certificate Authority that can be used by clients to get 
Certificates
   client: Generates a private key and gets it signed by the certificate 
authority.
   status: Checks the status of an HTTPS endpoint by making a GET request using 
a supplied keystore and truststore.
{noformat}

But the same command works fine with the TLS Toolkit 1.7.1 client:
{noformat}
$ nifi-toolkit-1.7.1/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
--subjectAlternativeNames nifi.mydomain.com
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:16:57 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:16:58 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

When the {{--subjectAlternativeNames}} option is not set, the 1.10.0 client runs 
with no issues:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh client -t 0123456789abcdef -p 1  
nifi.mydomain.com
2020/01/22 13:22:47 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateAuthorityClient: 
Requesting new certificate from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Requesting certificate with dn CN=msugar,OU=NIFI from localhost:1
2020/01/22 13:22:48 INFO [main] 
org.apache.nifi.toolkit.tls.service.client.TlsCertificateSigningRequestPerformer:
 Got certificate with dn CN=msugar, OU=NIFI
{noformat}

Note that, in all cases, the server is a TLS Toolkit 1.10.0 process running on 
the same machine (msugar) as the clients:
{noformat}
$ nifi-toolkit-1.10.0/bin/tls-toolkit.sh server -0123456789abcdef -p 1
{noformat}
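
As a side note for triage: whether the requested SAN actually made it into an issued certificate can be checked by reading the generated keystore back and printing the extension. A minimal standalone sketch, assuming the client run produced a JKS keystore whose path and password are passed on the command line (this is not part of the toolkit itself):
{code:java}
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

public class PrintSans {
    public static void main(String[] args) throws Exception {
        // args[0] = path to the keystore written by the toolkit client,
        // args[1] = its password (both placeholders for whatever a run produced).
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(args[0])) {
            ks.load(in, args[1].toCharArray());
        }
        String alias = ks.aliases().nextElement();
        X509Certificate cert = (X509Certificate) ks.getCertificate(alias);
        System.out.println("Subject: " + cert.getSubjectX500Principal());
        // Returns null when the certificate carries no SAN extension at all.
        System.out.println("SANs:    " + cert.getSubjectAlternativeNames());
    }
}
{code}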




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6879) Variable Update Error when trying to change outside variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Summary: Variable Update Error when trying to change outside variable used 
inside a Process Group  (was: Variable Updater Error when trying to change 
variable used inside a Process Group)

> Variable Update Error when trying to change outside variable used inside a 
> Process Group
> 
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}", 
> and hit "Apply". NiFi starts the steps to make the change, but fails during 
> the "Applying Updates" step with the above error message. In the log, the 
> following error message appears:
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Description: 
This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.

Whenever I try to change the value of a variable that's defined outside a 
Process Group where it's used, NiFi fails during the Applying Updates step with the 
following message (image also attached):
{noformat}
Variable Update Error
Unable to complete variable update request: Failed to update Variable Registry 
because failed while performing step: Applying updates to Variable 
Registry.{noformat}
To reproduce the problem:
 # On the top-level canvas, create a variable named "myvar", and set it to 
"{{blah}}".
 # Create a Process Group named "mypg".  Enter the group. 
 # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 day" 
to get just one flow file when it's started.
 # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} " 
and its "Log message" to "{{My message is: '${myvar}'.}}"
 # Start both processors and see a message like this appear in the 
application's log:  {{MYLOG: My message is: 'blah'.}}
 # Now leave the "mypg" Process Group and go back to the top-level canvas. Try 
to set the "myvar" variable to a different value, like "{{blah-blah}}", and hit 
"Apply". NiFi starts the steps to make the change, but fails during the 
"Applying Updates" step with the above error message. In the log, the following 
error message appears:

{noformat}
ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
Failed to update variable registry for Process Group with ID 
7f16c8da-016e-1000-aeb0-e65ea5e5f889
java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
referenced by 1 components that are currently running. {noformat}
Images and log file attached. The log file has the full exception trace.

 

  was:
This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.

Whenever I try to change the value of a variable that's defined outside a 
Process Group where it's used, NiFi fails during the Applying Updates with the 
following message (image also attached):
{noformat}
Variable Update Error
Unable to complete variable update request: Failed to update Variable Registry 
because failed while performing step: Applying updates to Variable 
Registry.{noformat}
To reproduce the problem:
 # On the top-level canvas, create a variable named "myvar", and set it to 
"{{blah}}".
 # Create a Process Group named "mypg".  Enter the group. 
 # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 day" 
to get just one flow file when it's started.
 # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} " 
and its "Log message" to "{{My message is: '${myvar}'.}}"
 # Start both processors and see a message like this appear in the 
application's log:  {{MYLOG: My message is: 'blah'.}}
 # Now leave the "mypg" Process Group and go back to the top-level canvas. Try 
to set the "myvar" variable to a different value, like "{{blah-blah}}". NiFi 
starts the steps to make the change, but fails during the "Applying Updates" 
step with the above error message. In the log, the following error message 
appears: 

{noformat}
ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
Failed to update variable registry for Process Group with ID 
7f16c8da-016e-1000-aeb0-e65ea5e5f889
java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
referenced by 1 components that are currently running. {noformat}
Images and log file attached. The log file has the full exception trace.

 


> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set 

[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Attachment: 1_Variable_Update_Error.png

> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}". 
> NiFi starts the steps to make the change, but fails during the "Applying 
> Updates" step with the above error message. In the log, the following error 
> message appears: 
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Attachment: nifi-app.log

> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}". 
> NiFi starts the steps to make the change, but fails during the "Applying 
> Updates" step with the above error message. In the log, the following error 
> message appears: 
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Attachment: 2_Variables.png

> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
> Attachments: 1_Variable_Update_Error.png, 2_Variables.png, 
> nifi-app.log
>
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}". 
> NiFi starts the steps to make the change, but fails during the "Applying 
> Updates" step with the above error message. In the log, the following error 
> message appears: 
> {noformat}
> ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
> Failed to update variable registry for Process Group with ID 
> 7f16c8da-016e-1000-aeb0-e65ea5e5f889
> java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
> referenced by 1 components that are currently running. {noformat}
> Images and log file attached. The log file has the full exception trace.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Description: 
This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.

Whenever I try to change the value of a variable that's defined outside a 
Process Group where it's used, NiFi fails during the Applying Updates step with the 
following message (image also attached):
{noformat}
Variable Update Error
Unable to complete variable update request: Failed to update Variable Registry 
because failed while performing step: Applying updates to Variable 
Registry.{noformat}
To reproduce the problem:
 # On the top-level canvas, create a variable named "myvar", and set it to 
"{{blah}}".
 # Create a Process Group named "mypg".  Enter the group. 
 # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 day" 
to get just one flow file when it's started.
 # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} " 
and its "Log message" to "{{My message is: '${myvar}'.}}"
 # Start both processors and see a message like this appear in the 
application's log:  {{MYLOG: My message is: 'blah'.}}
 # Now leave the "mypg" Process Group and go back to the top-level canvas. Try 
to set the "myvar" variable to a different value, like "{{blah-blah}}". NiFi 
starts the steps to make the change, but fails during the "Applying Updates" 
step with the above error message. In the log, the following error message 
appears: 

{noformat}
ERROR [Variable Registry Update Thread] o.a.nifi.web.api.ProcessGroupResource 
Failed to update variable registry for Process Group with ID 
7f16c8da-016e-1000-aeb0-e65ea5e5f889
java.lang.IllegalStateException: Cannot update variable 'myvar' because it is 
referenced by 1 components that are currently running. {noformat}
Images and log file attached. The log file has the full exception trace.

 

  was:
This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.

Whenever I try to change the value of a variable that's defined outside a 
Process Group where it's used, NiFi fails during the Applying Updates with the 
following message (image also attached):
{noformat}
Variable Update Error
Unable to complete variable update request: Failed to update Variable Registry 
because failed while performing step: Applying updates to Variable 
Registry.{noformat}
To reproduce the problem:
 # On the top-level canvas, create a variable named "myvar", and set it to 
"{{blah}}".
 # Create a Process Group named "mypg".  Enter the group. 
 # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 day" 
to get just one flow file when it's started.
 # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} " 
and its "Log message" to "{{My message is: '${myvar}'.}}"
 # Start both processors and see a message like this appear in the 
application's log:  {{MYLOG: My message is: 'blah'.}}
 # Now leave the "mypg" Process Group and go back to the top-level canvas. Try 
to set the "myvar" variable to a different value, like "{{blah-blah}}". NiFi 
starts the steps to make the change, but fails during the "Applying Updates" 
step with the above error message.

Images and log file attached.

 


> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go 

[jira] [Updated] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)


 [ 
https://issues.apache.org/jira/browse/NIFI-6879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-6879:
---
Environment: 
Host OS: Ubuntu 16.04
Docker version 19.03.4, build 9013bf583a
Docker Image: apache/nifi 1.10.0 4310dad3312f

  was:Ubuntu 16.04


> Variable Updater Error when trying to change variable used inside a Process 
> Group
> -
>
> Key: NIFI-6879
> URL: https://issues.apache.org/jira/browse/NIFI-6879
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Core Framework
>Affects Versions: 1.10.0
> Environment: Host OS: Ubuntu 16.04
> Docker version 19.03.4, build 9013bf583a
> Docker Image: apache/nifi 1.10.0 4310dad3312f
>Reporter: Marcio Sugar
>Priority: Major
>
> This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.
> Whenever I try to change the value of a variable that's defined outside a 
> Process Group where it's used, NiFi fails during the Applying Updates with 
> the following message (image also attached):
> {noformat}
> Variable Update Error
> Unable to complete variable update request: Failed to update Variable 
> Registry because failed while performing step: Applying updates to Variable 
> Registry.{noformat}
> To reproduce the problem:
>  # On the top-level canvas, create a variable named "myvar", and set it to 
> "{{blah}}".
>  # Create a Process Group named "mypg".  Enter the group. 
>  # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 
> day" to get just one flow file when it's started.
>  # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} 
> " and its "Log message" to "{{My message is: '${myvar}'.}}"
>  # Start both processors and see a message like this appear in the 
> application's log:  {{MYLOG: My message is: 'blah'.}}
>  # Now leave the "mypg" Process Group and go back to the top-level canvas. 
> Try to set the "myvar" variable to a different value, like "{{blah-blah}}". 
> NiFi starts the steps to make the change, but fails during the "Applying 
> Updates" step with the above error message.
> Images and log file attached.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-6879) Variable Updater Error when trying to change variable used inside a Process Group

2019-11-18 Thread Marcio Sugar (Jira)
Marcio Sugar created NIFI-6879:
--

 Summary: Variable Updater Error when trying to change variable 
used inside a Process Group
 Key: NIFI-6879
 URL: https://issues.apache.org/jira/browse/NIFI-6879
 Project: Apache NiFi
  Issue Type: Bug
  Components: Core Framework
Affects Versions: 1.10.0
 Environment: Ubuntu 16.04
Reporter: Marcio Sugar


This works fine in NiFi 1.7.1 but fails in NiFi 1.10.1.

Whenever I try to change the value of a variable that's defined outside a 
Process Group where it's used, NiFi fails during the Applying Updates step with the 
following message (image also attached):
{noformat}
Variable Update Error
Unable to complete variable update request: Failed to update Variable Registry 
because failed while performing step: Applying updates to Variable 
Registry.{noformat}
To reproduce the problem:
 # On the top-level canvas, create a variable named "myvar", and set it to 
"{{blah}}".
 # Create a Process Group named "mypg".  Enter the group. 
 # Inside "mypg", add a GenerateFlowFile and set its "Run Schedule" to "1 day" 
to get just one flow file when it's started.
 # Still inside "mypg", add a LogMessage. Set its "Log prefix" to "{{MYLOG:}} " 
and its "Log message" to "{{My message is: '${myvar}'.}}"
 # Start both processors and see a message like this appear in the 
application's log:  {{MYLOG: My message is: 'blah'.}}
 # Now leave the "mypg" Process Group and go back to the top-level canvas. Try 
to set the "myvar" variable to a different value, like "{{blah-blah}}". NiFi 
starts the steps to make the change, but fails during the "Applying Updates" 
step with the above error message.

Images and log file attached.
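
The log message suggests the update is rejected only while components that reference the variable are running, so stopping the enclosing group first (and restarting it after the change) looks like a plausible workaround. A hedged sketch of doing that over the REST API, assuming the NiFi 1.x endpoint for scheduling all components of a process group; host, port and the group id (taken from the log above) are placeholders, and a secured instance would also need authentication:
{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class StopGroupBeforeVariableUpdate {
    public static void main(String[] args) throws Exception {
        String nifi = "http://localhost:8080/nifi-api";          // placeholder
        String groupId = "7f16c8da-016e-1000-aeb0-e65ea5e5f889"; // group id from the log above
        String body = "{\"id\":\"" + groupId + "\",\"state\":\"STOPPED\"}";

        // PUT /flow/process-groups/{id} asks NiFi to stop every component in the group.
        HttpRequest stop = HttpRequest.newBuilder()
                .uri(URI.create(nifi + "/flow/process-groups/" + groupId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(stop, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
        // Once the variable update has been applied, the same request with
        // "state":"RUNNING" starts the group again.
    }
}
{code}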

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (NIFI-5971) ExecuteSQL Avro schema: all fields are nullable

2019-01-23 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-5971:
--

 Summary: ExecuteSQL Avro schema: all fields are nullable
 Key: NIFI-5971
 URL: https://issues.apache.org/jira/browse/NIFI-5971
 Project: Apache NiFi
  Issue Type: Improvement
  Components: Extensions
Affects Versions: 1.7.1
 Environment: Ubuntu 16.04
Apache NiFi 1.7.1
IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8
IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 
/ v10.5 FP6, 4.19.72 / v10.5 FP9
Reporter: Marcio Sugar


JdbcCommon#createSchema creates an Avro schema with nullable types for all 
fields. It should check with java.sql.ResultSetMetaData#isNullable instead.

It's the same issue discussed on dev list 
[here|https://lists.apache.org/list.html?d...@nifi.apache.org:2018-12] a few 
months ago. A workaround exists, but it's inconvenient when you have lots of 
tables to deal with. I like the solution proposed by Matt Burgess, the "Honor 
Non-Nullable Fields" property.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-5312) QueryDatabaseTable updates state when an SQLException is thrown

2018-07-11 Thread Marcio Sugar (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-5312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540943#comment-16540943
 ] 

Marcio Sugar commented on NIFI-5312:


Hi [~patricker], Yes, I tested the same scenario with the 1.7.0 version and 
it's working now.

Thank you.

> QueryDatabaseTable updates state when an SQLException is thrown
> ---
>
> Key: NIFI-5312
> URL: https://issues.apache.org/jira/browse/NIFI-5312
> Project: Apache NiFi
>  Issue Type: Bug
> Environment: Ubuntu 16.04 
> Apache NiFi 1.5.0, 1.6.0 
> IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1) 
> IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 
> 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2) 
> Notes: 
> (1) SELECT * FROM SYSIBMADM.ENV_INST_INFO 
> (2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version
>Reporter: Marcio Sugar
>Assignee: Peter Wicks
>Priority: Major
> Fix For: 1.8.0
>
>
> I noticed that when an SQLException is thrown, at least in the situation 
> described by NIFI-4926, QueryDatabaseTable still updates the state of the 
> Maximum-value Columns. It means that when something goes wrong, a potentially 
> big number of rows will be skipped pretty much silently. (Well, it will 
> appear in the Bulletin Board, but when the message disappears from the 
> Bulletin Board there will be no indication of the problem left. The processor 
> has no "terminate relationship" other than "Success".)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-07-11 Thread Marcio Sugar (JIRA)


[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16540938#comment-16540938
 ] 

Marcio Sugar commented on NIFI-4926:


Hi Matt, Yes, I tested the same scenario with the 1.7.0 version and it's 
working now. No need to set allowNextOnExhaustedResultSet, 
resultSetHoldability or downgradeHoldCursorsUnderXa, either. Please feel free 
to close this ticket.

Thank you.
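
For anyone who still hits the original exception on an older NiFi or driver: these are ordinary IBM JCC connection properties, and the JCC driver accepts them appended to the JDBC URL after the database name, introduced by a colon and each terminated by a semicolon. A hedged example with placeholder host, port, database and credentials; in NiFi the URL would typically go into the DBCPConnectionPool service's Database Connection URL property:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;

public class Db2UrlPropertyExample {
    public static void main(String[] args) throws Exception {
        // Requires db2jcc4.jar on the classpath; host, port, database and
        // credentials below are placeholders.
        String url = "jdbc:db2://db2host:50000/MYDB"
                + ":allowNextOnExhaustedResultSet=1;";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected with URL: " + conn.getMetaData().getURL());
        }
    }
}
{code}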

> QueryDatabaseTable throws SqlException after reading from DB2 table
> ---
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0, 1.6.0
> Environment: Ubuntu 16.04
> Apache NiFi 1.5.0, 1.6.0
> IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1)
> IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 
> 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2)
> Notes:
> (1) SELECT * FROM SYSIBMADM.ENV_INST_INFO
> (2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
> Invalid 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670] and 
Matt Burgess' 
[reply|https://community.hortonworks.com/questions/154948/connecting-apache-nifi-and-querying-tables-to-db2.html],
 this particular exception could be avoided by adding this setting (semicolon 

[jira] [Created] (NIFI-5312) QueryDatabaseTable updates state when an SQLException is thrown

2018-06-14 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-5312:
--

 Summary: QueryDatabaseTable updates state when an SQLException is 
thrown
 Key: NIFI-5312
 URL: https://issues.apache.org/jira/browse/NIFI-5312
 Project: Apache NiFi
  Issue Type: Bug
Affects Versions: 1.6.0, 1.5.0
 Environment: Ubuntu 16.04 
Apache NiFi 1.5.0, 1.6.0 
IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1) 
IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 
/ v10.5 FP6, 4.19.72 / v10.5 FP9 (2) 

Notes: 
(1) SELECT * FROM SYSIBMADM.ENV_INST_INFO 
(2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version
Reporter: Marcio Sugar


I noticed that when an SQLException is thrown, at least in the situation 
described by NIFI-4926, QueryDatabaseTable still updates the state of the 
Maximum-value Columns. It means that when something goes wrong, a potentially 
big number of rows will be skipped pretty much silently. (Well, it will appear 
in the Bulletin Board, but when the message disappears from the Bulletin Board 
there will be no indication of the problem left. The processor has no 
"terminate relationship" other than "Success".)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670] and 
Matt Burgess' 
[reply|https://community.hortonworks.com/questions/154948/connecting-apache-nifi-and-querying-tables-to-db2.html],
 this particular exception could be avoided by adding this setting (semicolon 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670] and 
Matt Burgess' 
[reply|https://community.hortonworks.com/questions/154948/connecting-apache-nifi-and-querying-tables-to-db2.html],
 this particular exception could be avoided by adding this setting (semicolon 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670] and 
Matt Burgess' 
[reply|https://community.hortonworks.com/questions/154948/connecting-apache-nifi-and-querying-tables-to-db2.html],
 this particular exception could be avoided by adding this setting (semicolon 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this 
particular exception could be avoided by adding this setting (semicolon 
included) to the JDBC connection URL:
{code:java}
allowNextOnExhaustedResultSet=1;{code}
But it didn't make a difference. I believe the reason 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-06-14 Thread Marcio Sugar (JIRA)


 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Affects Version/s: 1.6.0
  Environment: 
Ubuntu 16.04
Apache NiFi 1.5.0, 1.6.0
IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1)
IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 
/ v10.5 FP6, 4.19.72 / v10.5 FP9 (2)

Notes:
(1) SELECT * FROM SYSIBMADM.ENV_INST_INFO
(2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version

  was:
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6

  Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)

[jira] [Commented] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-03-13 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16397339#comment-16397339
 ] 

Marcio Sugar commented on NIFI-4926:


To easily reproduce the bug, besides setting the connection pooling service, 
table name, and maximum value column, also set:
 * Fetch Size = 1
 * Max Rows per Flow File = 2
 * Maximum Number of Fragments = 1

Make sure the table has at least 1 row in it.
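
For illustration only, a small JDBC sketch that creates such a one-row table 
(the table name FXSCHEMA.REPRO_TEST, host, and credentials are placeholders, 
not part of the original report):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateReproTable {
    public static void main(String[] args) throws Exception {
        // Point the URL and credentials at the DB2 instance under test.
        String url = "jdbc:db2://db2host:50000/MYDB";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE FXSCHEMA.REPRO_TEST (ID INTEGER NOT NULL, NAME VARCHAR(32))");
            // One row is enough to trigger the error with the settings above.
            stmt.executeUpdate("INSERT INTO FXSCHEMA.REPRO_TEST (ID, NAME) VALUES (1, 'repro')");
        }
    }
}
{code}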

> QueryDatabaseTable throws SqlException after reading from DB2 table
> ---
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
> Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading from DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Summary: QueryDatabaseTable throws SqlException after reading from DB2 
table  (was: QueryDatabaseTable throws SqlException after reading entire DB2 
table)

> QueryDatabaseTable throws SqlException after reading from DB2 table
> ---
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
> at 
> org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
> at 
> org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
> Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.kd.a(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
> at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
> at 

[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this 
particular exception could be avoided by adding this setting (semicolon 
included) to the JDBC connection URL:
{code:java}
allowNextOnExhaustedResultSet=1;{code}
But it didn't make a difference in my case.

I also 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/3/18 12:13 AM:
-

In {{QueryDatabaseTable.java}}, method {{onTrigger}}, line 278, a JDBC result 
set is created but not used to control the _while_ loop two lines below. In 
fact, the {{resultSet}} is handed off to another method, 
{{JdbcCommon.convertToAvroStream}}, which does its job and returns the number 
of rows it used to populate the output Avro file.

Because only the number of rows is returned, {{QueryDatabaseTable}} doesn't 
know that the last {{rs.next()}} returned false, which means (at least for the 
DB2 JDBC Driver) the result set is now closed. Instead of breaking out of the 
while loop, the processor once again calls {{convertToAvroStream}} and a 
{{SqlException}} is thrown soon after.
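
Below is a minimal, hypothetical sketch of the loop in question and of one way 
the caller could infer exhaustion from the row count alone; the helper 
{{writeFragment}} stands in for {{JdbcCommon.convertToAvroStream}}, and all 
names and signatures are illustrative, not the actual NiFi source:
{code:java}
import java.sql.ResultSet;
import java.sql.SQLException;

public class FragmentLoopSketch {

    // Stand-in for JdbcCommon.convertToAvroStream: writes up to maxRows rows and
    // returns only how many it wrote, with no signal that rs.next() has already
    // returned false (at which point the DB2 JCC driver closes the result set).
    static long writeFragment(ResultSet rs, int maxRows) throws SQLException {
        long written = 0;
        while (written < maxRows && rs.next()) {
            written++; // the real method would serialize the row to Avro here
        }
        return written;
    }

    static long drain(ResultSet rs, int maxRowsPerFragment) throws SQLException {
        long total = 0;
        while (true) {
            long rows = writeFragment(rs, maxRowsPerFragment);
            total += rows;
            // A short or empty fragment is the only hint the caller gets that the
            // result set is exhausted. Without a check like this, the loop calls
            // writeFragment again on an already-closed result set, which is the
            // failure reported in this issue.
            if (rows < maxRowsPerFragment) {
                return total;
            }
        }
    }
}
{code}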

_Notes:_
 * Perhaps this logic works fine for other databases, but considering 
{{resultSet}} was not created with try-with-resources and I couldn't find any 
explicit {{resultSet.close()}}, it stands to reason we could also have a 
resource leak here.
 * Using {{resultSet.isAfterLast()}} may not be a good idea because support 
for that method is optional for {{TYPE_FORWARD_ONLY}} result sets.
 * Checking whether the {{resultSet}} is closed right after entering 
{{JdbcCommon.convertToAvroStream}} could work as a quick fix, but it would make 
the whole thing even harder to understand and maintain. Maybe some refactoring 
would be in order?


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file.

Since no other indication that the last resultSet.next() returned false (and, 
as a consequence, that the resultSet is now closed) is passed back to 
QueryDatabaseTable, it cannot know it should break out of the while loop and 
once again calls convertToAvroStream. This time the method throws an exception 
when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, 
line 256).

Perhaps this logic works fine with other databases, but considering the 
resultSet was created without using try-with-resources and I couldn't find any 
explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:36 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file.

Since no other indication that the last resultSet.next() returned false (and, 
as a consequence, that the resultSet is now closed) is passed back to 
QueryDatabaseTable, it cannot know it should break out of the while loop and 
once again calls convertToAvroStream. This time the method throws an exception 
when trying to create the Schema (first line of JdbcCommon.convertToAvroStream, 
line 256).

Perhaps this logic works fine with other databases, but considering the 
resultSet was created without using try-with-resources and I couldn't find any 
explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file.

Since no other indication that the last resultSet.next() returned false (and 
that it's now closed) is passed back to QueryDatabaseTable, it cannot know it 
should break out of the while loop and once again calls convertToAvroStream. 
This time the method throws an exception when trying to create the Schema 
(first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the 
resultSet was created without using try-with-resources and I couldn't find any 
explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:35 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file.

Since no other indication that the last resultSet.next() returned false (and 
that it's now closed) is passed back to QueryDatabaseTable, it cannot know it 
should break out of the while loop and once again calls convertToAvroStream. 
This time the method throws an exception when trying to create the Schema 
(first line of JdbcCommon.convertToAvroStream, line 256).

Perhaps this logic works fine with other databases, but considering the 
resultSet was created without using try-with-resources and I couldn't find any 
explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file, 
but no other indication that resultSet.next() returned false. It means 
QueryDatabaseTable cannot know it should break out of the while loop and once 
again calls convertToAvroStream. This time the method throws an exception when 
trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 
256), which makes sense considering that the last rs.next() returned false.

Perhaps this logic works fine with other databases, but considering the 
resultSet was created without using try-with-resources and I couldn't find any 
explicit resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:31 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file, 
but no other indication that resultSet.next() returned false. It means 
QueryDatabaseTable cannot know it should break out of the while loop and once 
again calls convertToAvroStream. This time the method throws an exception when 
trying to create the Schema (first line of JdbcCommon.convertToAvroStream, line 
256), which makes sense considering that the last rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file. 
QueryDatabaseTable doesn't use that number to decide when it should break out 
of the while loop and once again calls convertToAvroStream. This time the 
method throws an exception when trying to create the Schema (first line of 
JdbcCommon.convertToAvroStream, line 256), which makes sense considering that 
the last rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:14 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which does its 
job and returns the number of rows it used to populate the output Avro file. 
QueryDatabaseTable doesn't use that number to decide when it should break out 
of the while loop and once again calls convertToAvroStream. This time the 
method throws an exception when trying to create the Schema (first line of 
JdbcCommon.convertToAvroStream, line 256), which makes sense considering that 
the last rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop and once 
again convertToAvroStream is called. This time the latter throws an exception 
when trying to create the Schema (first line of 
JdbcCommon.convertToAvroStream), which makes sense considering that the last 
rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 11:11 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop and once 
again convertToAvroStream is called. This time the latter throws an exception 
when trying to create the Schema (first line of 
JdbcCommon.convertToAvroStream), which makes sense considering that the last 
rs.next() returned false.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> 

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 10:57 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

Note: Using resultSet.isAfterLast() may not be a good idea. It may be driver-, 
database-, or result-set-type dependent.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
handed off to another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it used to populate the output Avro file. QueryDatabaseTable doesn't 
use that number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)

[jira] [Comment Edited] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16384277#comment-16384277
 ] 

Marcio Sugar edited comment on NIFI-4926 at 3/2/18 10:54 PM:
-

In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is handed 
off to another method, JdbcCommon.convertToAvroStream, which is called inside a 
lambda. convertToAvroStream does its job but returns only the number of rows it 
used to populate the output Avro file. QueryDatabaseTable doesn't use that 
number to decide when it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether it would be left open or not.


was (Author: msugar):
In QueryDatabaseTable.java, method onTrigger, line 278, a resultSet is created 
but not used to control the while loop two lines below. The resultSet is 
consumed by another method, JdbcCommon.convertToAvroStream, which is called 
inside a lambda. convertToAvroStream does its job but returns only the number 
of rows it consumed to populate the Avro file; QueryDatabaseTable doesn't use 
that number to decide whether it should break out of the while loop.

Perhaps this logic works fine with other databases, but since the resultSet was 
created without using try-with-resources and I couldn't find any explicit 
resultSet.close(), I'm wondering whether the resultSet would be left open or not.

> QueryDatabaseTable throws SqlException after reading entire DB2 table
> -
>
> Key: NIFI-4926
> URL: https://issues.apache.org/jira/browse/NIFI-4926
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to replicate a table from one database to another using NiFi. My 
> flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The 
> former fails with this SQLException after reading the whole table: 
> {code:java}
> 2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
> o.a.n.c.s.StandardProcessScheduler Starting 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
> 2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
> o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
> threads
> 2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
> State: StandardStateMap[version=54, values={}]
> 2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
> SELECT * FROM FXSCHEMA.USER
> 2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
> o.a.nifi.controller.StandardFlowService Saved flow controller 
> org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
> false
> 2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
> StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
> section=4], offset=0, 
> length=222061615],offset=0,name=264583001281149,size=222061615] contains 
> 652026 Avro records; transferring to 'success'
> 2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
> o.a.n.p.standard.QueryDatabaseTable 
> QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
> SQL select query SELECT * FROM FXSCHEMA.USER due to 
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.: {}
> org.apache.nifi.processor.exception.ProcessException: Error during database 
> query or conversion of records to Avro.
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
> at 
> org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
> at 
> org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
> at 
> org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
> at 
> 

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Environment: 
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6

  was:
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more 
> than one table with the same name on the database (in different schemas)
> ---
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
> JDBC driver db2jcc4-10.5.0.6
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4926:
---
Description: 
I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table: 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
According to [DB2's 
documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this 
particular exception could be avoided by adding this setting (semicolon 
included) to the JDBC connection URL:
{code:java}
allowNextOnExhaustedResultSet=1;{code}
But it didn't make a difference.
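
For reference, a property like this would typically be appended to the 
DBCPConnectionPool's Database Connection URL as shown in this sketch (dbhost, 
50000, and MYDB are placeholders):
{code:java}
// Hypothetical example only -- dbhost, 50000 and MYDB are placeholders.
// With the IBM JDBC driver, custom properties follow the database name after a
// colon, and every property (including the last) ends with a semicolon.
String url = "jdbc:db2://dbhost:50000/MYDB:allowNextOnExhaustedResultSet=1;";
{code}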

I also tried to set 

[jira] [Created] (NIFI-4926) QueryDatabaseTable throws SqlException after reading entire DB2 table

2018-03-02 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-4926:
--

 Summary: QueryDatabaseTable throws SqlException after reading 
entire DB2 table
 Key: NIFI-4926
 URL: https://issues.apache.org/jira/browse/NIFI-4926
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
 Environment: ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6
Reporter: Marcio Sugar


I'm trying to replicate a table from one database to another using NiFi. My 
flow is just a  QueryDatabaseTable connected to a PutDatabaseRecord. The former 
fails with this SQLException after reading the whole table:

 

 
{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] 
o.a.n.c.s.StandardProcessScheduler Starting 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] 
o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 
threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER 
State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query 
SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] 
o.a.nifi.controller.StandardFlowService Saved flow controller 
org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = 
false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] 
StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, 
section=4], offset=0, 
length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 
Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] 
o.a.n.p.standard.QueryDatabaseTable 
QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute 
SQL select query SELECT * FROM FXSCHEMA.USER due to 
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database 
query or conversion of records to Avro.
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
at 
org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
at 
org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
at 
org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
at 
org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] 
Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.kd.a(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
at 
org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
at 
org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
... 13 common frames omitted
{code}
 


[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Summary: PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there 
is more than one table with the same name on the database (in different 
schemas)  (was: PutDatabaseRecord throws ArrayIndexOutOfBoundsException where 
there is more than one table with the same name on the database (in different 
schemas))

> PutDatabaseRecord throws ArrayIndexOutOfBoundsException when there is more 
> than one table with the same name on the database (in different schemas)
> ---
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

[ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16383989#comment-16383989
 ] 

Marcio Sugar edited comment on NIFI-4924 at 3/2/18 7:14 PM:


Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

However, even after fixing the typo I'm still getting a failure, so it seems 
this is not the root cause of my problem.
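
To make the difference concrete, here's a small standalone sketch (connection 
URL, credentials, and names are just placeholders) that prints what 
getPrimaryKeys returns with and without the schema filter; with null as the 
schema pattern, primary keys for a USER table in every other schema would come 
back as well:
{code:java}
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PrimaryKeysDemo {
    public static void main(final String[] args) throws SQLException {
        // Placeholders -- point this at a database that has USER tables in more than one schema.
        try (Connection con = DriverManager.getConnection("jdbc:db2://dbhost:50000/MYDB", "user", "pass")) {
            final DatabaseMetaData dmd = con.getMetaData();

            // As PutDatabaseRecord does today: null schema pattern, so every schema matches.
            try (ResultSet pkrs = dmd.getPrimaryKeys(null, null, "USER")) {
                print(pkrs);
            }

            // Proposed fix: restrict the lookup to the schema actually being written to.
            try (ResultSet pkrs = dmd.getPrimaryKeys(null, "FXSCHEMA", "USER")) {
                print(pkrs);
            }
        }
    }

    private static void print(final ResultSet pkrs) throws SQLException {
        while (pkrs.next()) {
            System.out.println(pkrs.getString("TABLE_SCHEM") + "." + pkrs.getString("TABLE_NAME")
                    + " -> " + pkrs.getString("COLUMN_NAME"));
        }
    }
}
{code}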


was (Author: msugar):
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

However, even after fixing the typo I'm still getting the 
ArrayIndexOutOfBoundsException, so it seems this is not the root cause of my 
problem.

> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

I get errors like this when my AvroReader is set to use the 'Schema Text' 
property: 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> I get errors like this when my AvroReader is set to use the 'Schema Text' 
> property: 
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  failed to process due to 
> org.apache.nifi.processor.exception.ProcessException: Failed to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> length=174960],offset=0,name=255478043043373,size=174960] due to 
> java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed 
> to process 
> StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
>  [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
> section=2], offset=175623, 
> 

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>Reporter: Marcio Sugar
>Priority: Major
>
> I'm trying to copy data from one table on DB2 database "A" to the same table 
> on another DB2 database "B". Schemas are identical.
> My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
> PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
> Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
> different instances.
> When I set the AvroReader to use the 'Schema Text' property, I get errors 
> like this:
> {code:java}
> PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
> org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
>  

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

 

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>

[jira] [Updated] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)

 [ 
https://issues.apache.org/jira/browse/NIFI-4924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcio Sugar updated NIFI-4924:
---
Description: 
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
 

Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter: 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

 

The proper call, I think, should set the schema name:
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.

  was:
I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:

 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 

So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.


> PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more 
> than one table with the same name on the database (in different schemas)
> 
>
> Key: NIFI-4924
> URL: https://issues.apache.org/jira/browse/NIFI-4924
> Project: Apache NiFi
>  Issue Type: Bug
>  Components: Extensions
>Affects Versions: 1.5.0
> Environment: ubuntu 16.04
> nifi 1.5.0
> db2 v10.5.0.7
>  

[jira] [Created] (NIFI-4924) PutDatabaseRecord throws ArrayIndexOutOfBoundsException where there is more than one table with the same name on the database (in different schemas)

2018-03-02 Thread Marcio Sugar (JIRA)
Marcio Sugar created NIFI-4924:
--

 Summary: PutDatabaseRecord throws ArrayIndexOutOfBoundsException 
where there is more than one table with the same name on the database (in 
different schemas)
 Key: NIFI-4924
 URL: https://issues.apache.org/jira/browse/NIFI-4924
 Project: Apache NiFi
  Issue Type: Bug
  Components: Extensions
Affects Versions: 1.5.0
 Environment: ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
Reporter: Marcio Sugar


I'm trying to copy data from one table on DB2 database "A" to the same table on 
another DB2 database "B". Schemas are identical.

My flow is simply a QueryDatabaseTable reading from "A" and connected to a 
PutDatabaseRecord writing to "B". The latter uses an AvroReader controller. 
Each processor uses its own DBCPConnectionPool since "A" and "B" are on 
different instances.

When I set the AvroReader to use the 'Schema Text' property, I get errors like 
this:

 
{code:java}
PutDatabaseRecord[id=e3905091-0161-1000-028c-982c192bf16f] 
org.apache.nifi.processors.standard.PutDatabaseRecord$$Lambda$438/1366813796@deb42
 failed to process due to org.apache.nifi.processor.exception.ProcessException: 
Failed to process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40; rolling back session: Failed to 
process 
StandardFlowFileRecord[uuid=58ce220f-2b43-4875-bdb4-c704b093f9d7,claim=StandardContentClaim
 [resourceClaim=StandardResourceClaim[id=1519955790257-2, container=default, 
section=2], offset=175623, 
length=174960],offset=0,name=255478043043373,size=174960] due to 
java.lang.ArrayIndexOutOfBoundsException: -40 
{code}
Debugging the code I found what I believe to be a typo. In 
PutDatabaseRecord.java, line 1031, the program is calling the getPrimaryKeys 
method but passing a null as the 2nd. parameter:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, null, tableName)) 
{{code}
 

So in fact it's getting the primary keys for all the tables with the same name 
found across all the schemas, which is wrong.

The proper call, I think, should set the schema name:

 
{code:java}
try (final ResultSet pkrs = dmd.getPrimaryKeys(catalog, schema, tableName)) 
{{code}
 

This is a subtle bug that can go unnoticed if the database doesn't have tables 
with the same name in different schemas.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)