[ https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15213759#comment-15213759 ]
Hadoop QA commented on PHOENIX-2783:
------------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12795565/PHOENIX-2783-3.patch
against master branch at commit cd8e86ca7170876a30771fcc16c027f8dc8dd386.

ATTACHMENT ID: 12795565

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:red}-1 tests included{color}. The patch doesn't appear to include any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:red}-1 javadoc{color}. The javadoc tool appears to have generated 20 warning messages.

{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
    + public CreateStatementErrorException(String message, String schemaName, String tableName, String familyName, String columnName) {
    + throw new CreateStatementErrorException(message, schemaName, tblName, familyName, columnName);

{color:red}-1 core tests{color}. The patch failed these unit tests:
     org.apache.phoenix.compile.QueryCompilerTest

{color:red}-1 core zombie tests{color}.
There are 1 zombie test(s):
     at org.apache.hadoop.hdfs.server.namenode.ha.TestPendingCorruptDnMessages.testChangedStorageId(TestPendingCorruptDnMessages.java:102)

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/286//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/286//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/286//artifact/patchprocess/patchJavadocWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/286//console

This message is automatically generated.

> Creating secondary index with duplicated columns makes the catalog corrupted
> ----------------------------------------------------------------------------
>
>                 Key: PHOENIX-2783
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-2783
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.7.0
>            Reporter: Sergey Soldatov
>            Assignee: Sergey Soldatov
>         Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch, PHOENIX-2783-3.patch, PHOENIX-2783-INIT.patch
>
> A simple example:
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> causes an exception reporting that a duplicated column was detected, but the client
> updates the catalog before throwing it and leaves the catalog unusable. All subsequent
> attempts to use table x cause an ArrayIndexOutOfBounds exception. This problem
> was discussed on the user list recently.
> The cause of the problem is that the check for duplicated columns happens in
> PTableImpl after MetaDataClient has completed the server-side createTable.
> The simple fix is to add a similar check in MetaDataClient before
> createTable is called.
> Possibly someone can suggest a more elegant way to fix it?

-- 
This message was sent by Atlassian JIRA
(v6.3.4#6332)
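The client-side validation described above (rejecting duplicated columns before the server-side createTable call mutates the catalog) can be sketched roughly as follows. This is a minimal standalone illustration, not Phoenix's actual MetaDataClient code; the class and method names here are hypothetical, and the real patch would throw a Phoenix SQLException rather than print a message.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateColumnCheck {

    // Hypothetical helper: returns the first column name that appears more
    // than once in the index definition, or null if all names are unique.
    static String findDuplicate(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            // Set.add returns false when the element is already present.
            if (!seen.add(name)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Mirrors the failing example: include (t1, t3, t3) repeats t3.
        List<String> included = Arrays.asList("T1", "T3", "T3");
        String dup = findDuplicate(included);
        if (dup != null) {
            // In the actual fix this would be raised as an exception on the
            // client, before any catalog rows are written on the server.
            System.out.println("Duplicate column detected: " + dup);
        }
    }
}
```

The point of running the check this early is ordering: the exception fires before the client issues createTable, so the catalog is never touched for an invalid statement.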