[ https://issues.apache.org/jira/browse/PHOENIX-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17799271#comment-17799271 ]

ASF GitHub Bot commented on PHOENIX-7014:
-----------------------------------------

haridsv commented on code in PR #1766:
URL: https://github.com/apache/phoenix/pull/1766#discussion_r1433516465


##########
phoenix-core/src/it/java/org/apache/phoenix/end2end/CDCMiscIT.java:
##########
@@ -269,4 +273,109 @@ public void testDropCDCIndex () throws SQLException {
         }
     }
 
+    private void assertResultSet(ResultSet rs) throws Exception{
+        Gson gson = new Gson();
+        assertEquals(true, rs.next());
+        assertEquals(1, rs.getInt(2));
+        assertEquals(new HashMap(){{put("V1", 100d);}}, gson.fromJson(rs.getString(3),
+                HashMap.class));
+        assertEquals(true, rs.next());
+        assertEquals(2, rs.getInt(2));
+        assertEquals(new HashMap(){{put("V1", 200d);}}, gson.fromJson(rs.getString(3),
+                HashMap.class));
+        assertEquals(true, rs.next());
+        assertEquals(1, rs.getInt(2));
+        assertEquals(new HashMap(){{put("V1", 101d);}}, gson.fromJson(rs.getString(3),
+                HashMap.class));
+        assertEquals(false, rs.next());
+        rs.close();
+    }
+
+    @Test
+    public void testSelectCDC() throws Exception {
+        Properties props = new Properties();
+        props.put(QueryServices.TASK_HANDLING_INTERVAL_MS_ATTRIB, Long.toString(Long.MAX_VALUE));
+        props.put("hbase.client.scanner.timeout.period", "6000000");
+        props.put("phoenix.query.timeoutMs", "6000000");
+        props.put("zookeeper.session.timeout", "6000000");
+        props.put("hbase.rpc.timeout", "6000000");
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String tableName = generateUniqueName();
+        conn.createStatement().execute(
+                "CREATE TABLE  " + tableName + " ( k INTEGER PRIMARY KEY," + " 
v1 INTEGER)");
+        conn.createStatement().execute("UPSERT INTO " + tableName + " (k, v1) 
VALUES (1, 100)");
+        conn.createStatement().execute("UPSERT INTO " + tableName + " (k, v1) 
VALUES (2, 200)");
+        conn.commit();
+        Thread.sleep(1000);
+        conn.createStatement().execute("UPSERT INTO " + tableName + " (k, v1) VALUES (1, 101)");
+        conn.commit();
+        String cdcName = generateUniqueName();
+        String cdc_sql = "CREATE CDC " + cdcName

Review Comment:
   > Every time we run CREATE CDC, do we need to pass PHOENIX_ROW_TIMESTAMP? If
   that is something mandatory, we shouldn't make it part of the interface; the
   system should do it implicitly under the hood.
   
   The feature allows any timestamp-like column to be used instead of
   PHOENIX_ROW_TIMESTAMP, which is why this flexibility is being allowed.
   However, I am not 100% sure about the use cases, so I will have an offline
   discussion on this, thank you!
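   
   For illustration only, here is a rough sketch of the two variants being
   discussed, assuming the in-progress CREATE CDC grammar from this PR (cdcName
   and tableName as in the test above; created_ts is a hypothetical TIMESTAMP
   column on the data table; the exact syntax may differ):
   
       // Assumed grammar, for illustration: PHOENIX_ROW_TIMESTAMP() as the time index.
       conn.createStatement().execute(
               "CREATE CDC " + cdcName + " ON " + tableName + " (PHOENIX_ROW_TIMESTAMP())");
       // The flexibility mentioned above: a user-defined timestamp-like column
       // (created_ts) used instead of PHOENIX_ROW_TIMESTAMP().
       conn.createStatement().execute(
               "CREATE CDC " + cdcName + " ON " + tableName + " (created_ts)");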





> CDC query compiler and optimizer
> --------------------------------
>
>                 Key: PHOENIX-7014
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7014
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Viraj Jasani
>            Assignee: Hari Krishna Dara
>            Priority: Major
>
> For CDC table type, the query optimizer should be able to query from the 
> uncovered global index table along with the data table associated with the 
> given CDC table.
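
For context, a minimal sketch (not from the PR) of the kind of query this
sub-task targets, assuming a CDC object cdcName created on tableName as in the
test above; the optimizer is expected to serve it from the uncovered global CDC
index together with the associated data table:

    // Hypothetical illustration: a plain SELECT against the CDC object.
    try (ResultSet rs = conn.createStatement().executeQuery(
            "SELECT * FROM " + cdcName)) {
        while (rs.next()) {
            // Each change row is expected to carry the change timestamp, the row
            // key, and a JSON-encoded change image (see assertResultSet above).
            System.out.println(rs.getString(3));
        }
    }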



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
