Ken Geis created FLINK-7221:
-------------------------------

             Summary: JDBCOutputFormat swallows errors on last batch
                 Key: FLINK-7221
                 URL: https://issues.apache.org/jira/browse/FLINK-7221
             Project: Flink
          Issue Type: Bug
          Components: Batch Connectors and Input/Output Formats
    Affects Versions: 1.3.1
         Environment: Java 1.8.0_131, PostgreSQL driver 42.1.3
            Reporter: Ken Geis


I have a data set with ~17000 rows that I was trying to write to a PostgreSQL 
table that I did not (yet) have permission on. No data was loaded, and Flink 
did not report any problem outputting the data set. The only indication I found 
of my problem was in the PostgreSQL log.

With the default parallelism (8) and the default batch interval (5000), each 
subtask's batch was ~2000 rows, so the threshold was never reached and the batch 
was never executed in {{JDBCOutputFormat.writeRecord(..)}}. {{JDBCOutputFormat.close()}} 
makes a final call to {{upload.executeBatch()}}, but if that call fails, the 
exception is logged at INFO level and not rethrown.
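The arithmetic can be sketched as follows (a minimal simulation of the batching counter; the class and variable names are illustrative, not Flink's actual fields):

```java
public class BatchIntervalDemo {
    public static void main(String[] args) {
        int batchInterval = 5000;        // Flink's default batch interval
        int rowsPerSubtask = 17000 / 8;  // ~2125 rows per subtask at parallelism 8
        int batchCount = 0;
        int executions = 0;

        for (int i = 0; i < rowsPerSubtask; i++) {
            batchCount++;
            if (batchCount >= batchInterval) {
                executions++;            // would correspond to upload.executeBatch()
                batchCount = 0;
            }
        }

        // writeRecord(..) never flushes; all rows are left pending for close()
        System.out.println("executions=" + executions
                + " pendingRows=" + batchCount);
    }
}
```

Because the per-subtask row count never reaches the interval, the only {{executeBatch()}} call happens in {{close()}}, where the failure is swallowed.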

If I decrease the batch interval to 100 or 1000, then an error is properly 
reported.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
