[ https://issues.apache.org/jira/browse/NIFI-4926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Marcio Sugar updated NIFI-4926:
-------------------------------
    Affects Version/s: 1.6.0
          Environment: Ubuntu 16.04
Apache NiFi 1.5.0, 1.6.0
IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1)
IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2)
Notes:
(1) SELECT * FROM SYSIBMADM.ENV_INST_INFO
(2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version

was:
ubuntu 16.04
nifi 1.5.0
db2 v10.5.0.7
JDBC driver db2jcc4-10.5.0.6

Description:
I'm trying to replicate a table from one database to another using NiFi. My flow is just a QueryDatabaseTable connected to a PutDatabaseRecord. The former fails with this SQLException after reading the whole table:

{code:java}
2018-03-02 15:20:44,688 INFO [NiFi Web Server-2017] o.a.n.c.s.StandardProcessScheduler Starting QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1]
2018-03-02 15:20:44,692 INFO [StandardProcessScheduler Thread-2] o.a.n.c.s.TimerDrivenSchedulingAgent Scheduled QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] to run with 1 threads
2018-03-02 15:20:44,692 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Returning CLUSTER State: StandardStateMap[version=54, values={}]
2018-03-02 15:20:44,693 DEBUG [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Executing query SELECT * FROM FXSCHEMA.USER
2018-03-02 15:20:45,159 INFO [Flow Service Tasks Thread-1] o.a.nifi.controller.StandardFlowService Saved flow controller org.apache.nifi.controller.FlowController@77b729c4 // Another save pending = false
2018-03-02 15:21:41,577 INFO [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] StandardFlowFileRecord[uuid=fc5e66c0-14ef-4ed5-8d84-7c4d582000b7,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1520022044698-4, container=default, section=4], offset=0, length=222061615],offset=0,name=264583001281149,size=222061615] contains 652026 Avro records; transferring to 'success'
2018-03-02 15:21:41,578 ERROR [Timer-Driven Process Thread-2] o.a.n.p.standard.QueryDatabaseTable QueryDatabaseTable[id=e83d9370-0161-1000-d7d6-702ae791aaf1] Unable to execute SQL select query SELECT * FROM FXSCHEMA.USER due to org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro.: {}
org.apache.nifi.processor.exception.ProcessException: Error during database query or conversion of records to Avro.
	at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:291)
	at org.apache.nifi.controller.repository.StandardProcessSession.write(StandardProcessSession.java:2571)
	at org.apache.nifi.processors.standard.QueryDatabaseTable.onTrigger(QueryDatabaseTable.java:285)
	at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1122)
	at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:147)
	at org.apache.nifi.controller.tasks.ContinuallyRunProcessorTask.call(ContinuallyRunProcessorTask.java:47)
	at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:128)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
Caused by: com.ibm.db2.jcc.am.SqlException: [jcc][t4][10120][10898][4.19.26] Invalid operation: result set is closed. ERRORCODE=-4470, SQLSTATE=null
	at com.ibm.db2.jcc.am.kd.a(Unknown Source)
	at com.ibm.db2.jcc.am.kd.a(Unknown Source)
	at com.ibm.db2.jcc.am.kd.a(Unknown Source)
	at com.ibm.db2.jcc.am.ResultSet.checkForClosedResultSet(Unknown Source)
	at com.ibm.db2.jcc.am.ResultSet.getMetaDataX(Unknown Source)
	at com.ibm.db2.jcc.am.ResultSet.getMetaData(Unknown Source)
	at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
	at org.apache.commons.dbcp.DelegatingResultSet.getMetaData(DelegatingResultSet.java:322)
	at org.apache.nifi.processors.standard.util.JdbcCommon.createSchema(JdbcCommon.java:452)
	at org.apache.nifi.processors.standard.util.JdbcCommon.convertToAvroStream(JdbcCommon.java:256)
	at org.apache.nifi.processors.standard.QueryDatabaseTable.lambda$onTrigger$0(QueryDatabaseTable.java:289)
	... 13 common frames omitted
{code}

According to [DB2's documentation|http://www-01.ibm.com/support/docview.wss?uid=swg21461670], this particular exception can be avoided by adding this setting (semicolon included) to the JDBC connection URL:

{code:java}
allowNextOnExhaustedResultSet=1;{code}

But it made no difference in my case. I believe the reason is that the ResultSet is not defined as TYPE_FORWARD_ONLY, in which case the above property has no effect.

I also tried setting 'Fetch Size' and 'Max Rows Per Flow File' to different values, to no avail. Setting the 'Additional WHERE clause' to fetch just a small number of rows didn't help either.

It looks like the connection is being prematurely closed by the processor.
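For reference, a minimal sketch of where that property goes in a DB2 JCC connection URL, and of how a statement would have to be created for the property to matter (host, port, database name, and the {{connection}} variable are placeholders, not values from this flow):

{code:java}
// Hypothetical URL: JCC driver properties follow the database name after a
// colon, and each property must be terminated by a semicolon.
String url = "jdbc:db2://dbhost:50000/MYDB:allowNextOnExhaustedResultSet=1;";

// Per the IBM technote, the property only changes the behavior of next() on
// exhausted forward-only cursors, i.e. statements created like this:
Statement stmt = connection.createStatement(
        ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
{code}

If the processor's result set is scrollable rather than TYPE_FORWARD_ONLY, this setting would be ignored, which would be consistent with the workaround having no effect here.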
> QueryDatabaseTable throws SqlException after reading from DB2 table
> -------------------------------------------------------------------
>
>                 Key: NIFI-4926
>                 URL: https://issues.apache.org/jira/browse/NIFI-4926
>             Project: Apache NiFi
>          Issue Type: Bug
>          Components: Extensions
>    Affects Versions: 1.5.0, 1.6.0
>         Environment: Ubuntu 16.04
> Apache NiFi 1.5.0, 1.6.0
> IBM DB2 for Linux, UNIX and Windows 10.5.0.7, 10.5.0.8 (1)
> IBM Data Server Driver for JDBC and SQLJ, JDBC 4.0 Driver (db2jcc4.jar) 4.19.26 / v10.5 FP6, 4.19.72 / v10.5 FP9 (2)
> Notes:
> (1) SELECT * FROM SYSIBMADM.ENV_INST_INFO
> (2) java -cp ./db2jcc4.jar com.ibm.db2.jcc.DB2Jcc -version
>            Reporter: Marcio Sugar
>            Priority: Major

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)