jredzepovic commented on code in PR #33:
URL: https://github.com/apache/flink-connector-jdbc/pull/33#discussion_r1178198860


##########
flink-connector-jdbc/src/main/java/org/apache/flink/connector/jdbc/dialect/vertica/VerticaDialect.java:
##########
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.connector.jdbc.dialect.vertica;
+
+import org.apache.flink.connector.jdbc.converter.JdbcRowConverter;
+import org.apache.flink.connector.jdbc.dialect.AbstractDialect;
+import org.apache.flink.connector.jdbc.internal.converter.VerticaRowConverter;
+import org.apache.flink.table.types.logical.LogicalTypeRoot;
+import org.apache.flink.table.types.logical.RowType;
+
+import java.util.EnumSet;
+import java.util.Optional;
+import java.util.Set;
+
+/** JDBC dialect for Vertica. */
+public class VerticaDialect extends AbstractDialect {
+
+    private static final long serialVersionUID = 1L;
+
+    // Define MAX/MIN precision of TIMESTAMP type according to Vertica docs:
+    // https://www.vertica.com/docs/12.0.x/HTML/Content/Authoring/SQLReferenceManual/DataTypes/Date-Time/DateTimeDataTypes.htm
+    private static final int MAX_TIMESTAMP_PRECISION = 6;
+    private static final int MIN_TIMESTAMP_PRECISION = 0;
+
+    // Define MAX/MIN precision of DECIMAL type according to Vertica docs:
+    // https://www.vertica.com/docs/12.0.x/HTML/Content/Authoring/SQLReferenceManual/DataTypes/Numeric/NUMERIC.htm
+    private static final int MAX_DECIMAL_PRECISION = 1024;
+    private static final int MIN_DECIMAL_PRECISION = 1;
+
+    @Override
+    public Set<LogicalTypeRoot> supportedTypes() {
+        // List of Vertica data types:
+        // https://www.vertica.com/docs/12.0.x/HTML/Content/Authoring/SQLReferenceManual/DataTypes/SQLDataTypes.htm
+        // https://www.vertica.com/docs/12.0.x/HTML/Content/Authoring/ConnectingToVertica/ClientJDBC/JDBCDataTypes.htm
+
+        return EnumSet.of(
+                LogicalTypeRoot.BINARY,
+                LogicalTypeRoot.VARBINARY,
+                LogicalTypeRoot.BOOLEAN,
+                LogicalTypeRoot.CHAR,
+                LogicalTypeRoot.VARCHAR,
+                LogicalTypeRoot.DATE,
+                LogicalTypeRoot.TIME_WITHOUT_TIME_ZONE,
+                LogicalTypeRoot.TIMESTAMP_WITH_TIME_ZONE,
+                LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE,
+                LogicalTypeRoot.DOUBLE,
+                LogicalTypeRoot.DECIMAL,
+                LogicalTypeRoot.BIGINT,
+                LogicalTypeRoot.ARRAY);
+    }
+
+    @Override
+    public String dialectName() {
+        return "Vertica";
+    }
+
+    @Override
+    public JdbcRowConverter getRowConverter(RowType rowType) {
+        return new VerticaRowConverter(rowType);
+    }
+
+    @Override
+    public String getLimitClause(long limit) {
+        return "LIMIT " + limit;
+    }
+
+    @Override
+    public Optional<String> defaultDriverName() {
+        return Optional.of("com.vertica.jdbc.Driver");
+    }
+
+    @Override
+    public String quoteIdentifier(String identifier) {
+        return "\"" + identifier + "\"";
+    }
+
+    @Override
+    public Optional<Range> decimalPrecisionRange() {
+        return Optional.of(Range.of(MIN_DECIMAL_PRECISION, MAX_DECIMAL_PRECISION));
+    }
+
+    @Override
+    public Optional<Range> timestampPrecisionRange() {
+        return Optional.of(Range.of(MIN_TIMESTAMP_PRECISION, MAX_TIMESTAMP_PRECISION));
+    }
+
+    @Override
+    public Optional<String> getUpsertStatement(
+            String tableName, String[] fieldNames, String[] uniqueKeyFields) {
+        throw new UnsupportedOperationException(

Review Comment:
   Even though Vertica does support the MERGE statement, I wasn't able to get it working with Flink. This is the implementation I originally wrote, based on the OracleDialect example:
   ```java
   // Note: requires java.util.Arrays, java.util.Set and
   // java.util.stream.Collectors imports in VerticaDialect.
   @Override
   public Optional<String> getUpsertStatement(
           String tableName, String[] fieldNames, String[] uniqueKeyFields) {

       String valuesBinding =
               Arrays.stream(fieldNames)
                       .map(f -> ":" + f + " " + quoteIdentifier(f))
                       .collect(Collectors.joining(", "));

       String usingClause = String.format("SELECT %s", valuesBinding);

       String onClause =
               Arrays.stream(uniqueKeyFields)
                       .map(f -> "t." + quoteIdentifier(f) + "=s." + quoteIdentifier(f))
                       .collect(Collectors.joining(" AND "));

       final Set<String> uniqueKeyFieldsSet =
               Arrays.stream(uniqueKeyFields).collect(Collectors.toSet());
       String updateClause =
               Arrays.stream(fieldNames)
                       .filter(f -> !uniqueKeyFieldsSet.contains(f))
                       .map(f -> quoteIdentifier(f) + " = s." + quoteIdentifier(f))
                       .collect(Collectors.joining(", "));

       String insertFields =
               Arrays.stream(fieldNames)
                       .map(this::quoteIdentifier)
                       .collect(Collectors.joining(", "));

       String valuesClause =
               Arrays.stream(fieldNames)
                       .map(f -> "s." + quoteIdentifier(f))
                       .collect(Collectors.joining(", "));

       String mergeQuery =
               "MERGE INTO %s t USING (%s) s "
                       + "ON %s "
                       + "WHEN MATCHED THEN UPDATE SET %s "
                       + "WHEN NOT MATCHED THEN INSERT (%s) "
                       + "VALUES (%s)";

       return Optional.of(
               String.format(
                       mergeQuery,
                       quoteIdentifier(tableName),
                       usingClause,
                       onClause,
                       updateClause,
                       insertFields,
                       valuesClause));
   }
   ```
   The constructed MERGE query was syntactically correct, which I validated by running it directly on a Vertica database instance (after replacing the :placeholders with actual values).
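For reference (not part of the PR), here is a self-contained sketch that reproduces the string-building above for a made-up table `orders(id, amount)` with unique key `id`, so the generated MERGE can be inspected without a Flink runtime. The table and column names are examples only, and `quote` stands in for the dialect's `quoteIdentifier`:

```java
import java.util.Arrays;
import java.util.Set;
import java.util.stream.Collectors;

// Standalone sketch of the MERGE string construction for a hypothetical
// table "orders"(id, amount); mirrors the logic in getUpsertStatement.
public class MergeSketch {

    static String quote(String id) {
        return "\"" + id + "\"";
    }

    static String buildMerge(String table, String[] fields, String[] keys) {
        // :field placeholders aliased to the quoted column names
        String valuesBinding = Arrays.stream(fields)
                .map(f -> ":" + f + " " + quote(f))
                .collect(Collectors.joining(", "));
        // join condition on the unique key columns
        String onClause = Arrays.stream(keys)
                .map(f -> "t." + quote(f) + "=s." + quote(f))
                .collect(Collectors.joining(" AND "));
        Set<String> keySet = Arrays.stream(keys).collect(Collectors.toSet());
        // update every non-key column from the source row
        String updateClause = Arrays.stream(fields)
                .filter(f -> !keySet.contains(f))
                .map(f -> quote(f) + " = s." + quote(f))
                .collect(Collectors.joining(", "));
        String insertFields = Arrays.stream(fields)
                .map(MergeSketch::quote)
                .collect(Collectors.joining(", "));
        String valuesClause = Arrays.stream(fields)
                .map(f -> "s." + quote(f))
                .collect(Collectors.joining(", "));
        return String.format(
                "MERGE INTO %s t USING (SELECT %s) s ON %s "
                        + "WHEN MATCHED THEN UPDATE SET %s "
                        + "WHEN NOT MATCHED THEN INSERT (%s) VALUES (%s)",
                quote(table), valuesBinding, onClause,
                updateClause, insertFields, valuesClause);
    }

    public static void main(String[] args) {
        // Prints the MERGE statement with the :field placeholders still in place
        System.out.println(buildMerge(
                "orders",
                new String[] {"id", "amount"},
                new String[] {"id"}));
    }
}
```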
   When running the query within Flink (as the upsert statement), I got this exception, originating from the Vertica JDBC driver:
   `java.sql.SQLException: [Vertica][VJDBC](3376) ERROR: Failed to find conversion function from unknown to int`
   While looking into the reason for the exception, I stumbled upon this StackOverflow post:
   https://stackoverflow.com/questions/18073901/failed-to-find-conversion-function-from-unknown-to-text

   My guess is that a parameterized MERGE statement is not supported by the Vertica JDBC driver, which could also be why the Kafka Connect JDBC connector supports only insert (not upsert) with Vertica.
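Following the StackOverflow suggestion, one possible (untested) workaround would be to add explicit casts to the placeholders in the USING subquery, so the driver does not bind them as type `unknown`. A minimal sketch, assuming a hypothetical field-to-SQL-type mapping; in a real dialect the types would have to be derived from the Flink `RowType`:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: wrap each :field placeholder in an explicit CAST so
// the USING subquery columns have known SQL types. Whether this actually
// resolves the Vertica driver error has not been verified.
public class CastBinding {

    static String quote(String id) {
        return "\"" + id + "\"";
    }

    // sqlTypes maps field name -> SQL type name; a made-up stand-in for
    // type information that would come from the Flink RowType
    static String castedBinding(String[] fields, Map<String, String> sqlTypes) {
        return Arrays.stream(fields)
                .map(f -> "CAST(:" + f + " AS " + sqlTypes.get(f) + ") " + quote(f))
                .collect(Collectors.joining(", "));
    }

    public static void main(String[] args) {
        Map<String, String> types = Map.of("id", "INT", "amount", "NUMERIC(10,2)");
        System.out.println(
                "SELECT " + castedBinding(new String[] {"id", "amount"}, types));
        // e.g. SELECT CAST(:id AS INT) "id", CAST(:amount AS NUMERIC(10,2)) "amount"
    }
}
```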
   


