Github user divyabhargov commented on a diff in the pull request:
https://github.com/apache/hawq/pull/1353#discussion_r215019806
--- Diff: pxf/pxf-jdbc/src/main/java/org/apache/hawq/pxf/plugins/jdbc/writercallable/BatchWriterCallable.java ---
@@ -88,24 +88,41 @@ public SQLException call() throws IOException, SQLException, ClassNotFoundExcept
     /**
      * Construct a new batch writer
      */
-    BatchWriterCallable(JdbcPlugin plugin, String query, PreparedStatement statement, int maxRowsCount) {
-        if ((plugin == null) || (query == null)) {
+    BatchWriterCallable(JdbcPlugin plugin, String query, PreparedStatement statement, int batchSize) {
+        if (plugin == null || query == null) {
             throw new IllegalArgumentException("The provided JdbcPlugin or SQL query is null");
         }
+
+        try {
+            if (!plugin.getConnection().getMetaData().supportsBatchUpdates()) {
+                throw new IllegalArgumentException("The external database does not support batch updates");
+            }
+        }
--- End diff --
Since batching is the default, it is risky to let this failure surface this late. We should decide to use SimpleWriterCallable well before reaching this constructor. In other words, the decision to batch should depend on both the batch size (whether specified explicitly or implied by the default) and whether the external database actually supports batch updates.
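The decision logic suggested above could be hoisted into the point where the writer callable is chosen. The sketch below is only illustrative: `WriterCallableFactory`, `shouldUseBatching`, and `chooseWriter` are hypothetical names, not the actual PXF API; it assumes the metadata check happens once, up front, with a fallback to `SimpleWriterCallable` instead of a constructor-time failure.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public class WriterCallableFactory {
    // Hypothetical decision helper: batch only when a batch size greater
    // than 1 was requested (or implied by the default) AND the external
    // database reports support for batch updates.
    static boolean shouldUseBatching(int batchSize, boolean supportsBatchUpdates) {
        return batchSize > 1 && supportsBatchUpdates;
    }

    // Hypothetical factory entry point: consults DatabaseMetaData once and
    // falls back to the simple (non-batched) writer rather than throwing
    // later inside BatchWriterCallable's constructor.
    static String chooseWriter(Connection connection, int batchSize) throws SQLException {
        DatabaseMetaData metaData = connection.getMetaData();
        return shouldUseBatching(batchSize, metaData.supportsBatchUpdates())
                ? "BatchWriterCallable"
                : "SimpleWriterCallable";
    }
}
```

This way an unsupported database silently degrades to row-at-a-time writes instead of failing a default configuration.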
---