[ https://issues.apache.org/jira/browse/PHOENIX-2925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15295822#comment-15295822 ]

Sergey Soldatov commented on PHOENIX-2925:
------------------------------------------

[~rajesh23] That's part of the new patch. Empty KVs are now passed between the 
mapper and the reducer instead of being generated automatically in the reducer. 
I will revise PHOENIX-1734 as well. Testing is quite slow, since I'm 
double-checking that the number of records and their values in the HBase tables 
are what they are supposed to be for all possible combinations. But I really 
hope to finish it today.

> CsvBulkloadTool not working properly if there are multiple local indexes to 
> the same table(After PHOENIX-1973)
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-2925
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2925
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>            Reporter: Rajeshbabu Chintaguntla
>            Assignee: Rajeshbabu Chintaguntla
>             Fix For: 4.8.0
>
>         Attachments: PHOENIX-2925.patch, PHOENIX-2925.patch, 
> PHOENIX-2925_v2.patch
>
>
> When there are multiple local indexes, data is generated properly only for 
> the first index and the other indexes don't have any data. Changing the 
> testImportWithLocalIndex test as below makes it fail. ping [~sergey.soldatov]?
> {noformat}
>     @Test
>     public void testImportWithLocalIndex() throws Exception {
>         Statement stmt = conn.createStatement();
>         stmt.execute("CREATE TABLE TABLE6 (ID INTEGER NOT NULL PRIMARY KEY, "
>                 + "FIRST_NAME VARCHAR, LAST_NAME VARCHAR) SPLIT ON (1,2)");
>         String ddl = "CREATE LOCAL INDEX TABLE6_IDX ON TABLE6 "
>                 + " (FIRST_NAME ASC)";
>         stmt.execute(ddl);
>         ddl = "CREATE LOCAL INDEX TABLE6_IDX2 ON TABLE6 " + " (LAST_NAME ASC)";
>         stmt.execute(ddl);
>         FileSystem fs = FileSystem.get(getUtility().getConfiguration());
>         FSDataOutputStream outputStream = fs.create(new Path("/tmp/input3.csv"));
>         PrintWriter printWriter = new PrintWriter(outputStream);
>         printWriter.println("1,FirstName 1,LastName 1");
>         printWriter.println("2,FirstName 2,LastName 2");
>         printWriter.close();
>         CsvBulkLoadTool csvBulkLoadTool = new CsvBulkLoadTool();
>         csvBulkLoadTool.setConf(getUtility().getConfiguration());
>         int exitCode = csvBulkLoadTool.run(new String[] {
>                 "--input", "/tmp/input3.csv",
>                 "--table", "table6",
>                 "--zookeeper", zkQuorum});
>         assertEquals(0, exitCode);
>         ResultSet rs = stmt.executeQuery("SELECT id, FIRST_NAME FROM TABLE6 "
>                 + "where first_name='FirstName 2'");
>         assertTrue(rs.next());
>         assertEquals(2, rs.getInt(1));
>         assertEquals("FirstName 2", rs.getString(2));
>         rs = stmt.executeQuery("SELECT LAST_NAME FROM TABLE6 "
>                 + "where last_name='LastName 1'");
>         assertTrue(rs.next());
>         assertEquals("LastName 1", rs.getString(1));
>         rs.close();
>         stmt.close();
>     }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
