[
https://issues.apache.org/jira/browse/PHOENIX-2783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15208939#comment-15208939
]
Sergey Soldatov commented on PHOENIX-2783:
------------------------------------------
Actually, I was thinking about using a HashSet, but I don't like the string
concatenation. I did some simple benchmarking for 100,000 entries:
{noformat}
Benchmark                     Mode  Cnt         Score         Error  Units
MyBenchmark.hashSetTest       avgt   15  21776823.378 ± 3728502.403  ns/op
MyBenchmark.listMultimapTest  avgt   15  20121794.022 ± 2185643.518  ns/op
{noformat}
It seems that ListMultimap is still a bit better.
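For context, here is a minimal sketch of what such a JMH harness might look like. The original benchmark code is not attached to the issue, so the keying scheme, names, and setup below are assumptions for illustration only:
{noformat}
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;

import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ListMultimap;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MyBenchmark {
    private static final int ENTRIES = 100_000;

    @Benchmark
    public Set<String> hashSetTest() {
        // A HashSet needs a single key per column, so the family and
        // column name have to be concatenated into one string per entry.
        Set<String> columns = new HashSet<>();
        for (int i = 0; i < ENTRIES; i++) {
            columns.add("family" + ":" + "col" + i);
        }
        return columns;
    }

    @Benchmark
    public ListMultimap<String, String> listMultimapTest() {
        // A ListMultimap keys on the column name directly and keeps the
        // family as the value, so no string concatenation is needed.
        ListMultimap<String, String> columns = ArrayListMultimap.create();
        for (int i = 0; i < ENTRIES; i++) {
            columns.put("col" + i, "family");
        }
        return columns;
    }
}
{noformat}
Returning the populated collections keeps JMH from dead-code-eliminating the loops, so both variants measure the full build cost.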
> Creating secondary index with duplicated columns makes the catalog corrupted
> ----------------------------------------------------------------------------
>
> Key: PHOENIX-2783
> URL: https://issues.apache.org/jira/browse/PHOENIX-2783
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.7.0
> Reporter: Sergey Soldatov
> Assignee: Sergey Soldatov
> Attachments: PHOENIX-2783-1.patch, PHOENIX-2783-2.patch
>
>
> Simple example
> {noformat}
> create table x (t1 varchar primary key, t2 varchar, t3 varchar);
> create index idx on x (t2) include (t1,t3,t3);
> {noformat}
> causes an exception that a duplicated column was detected, but the client
> updates the catalog before throwing it, leaving the catalog unusable. Every
> following attempt to use table x causes an ArrayIndexOutOfBoundsException.
> This problem was discussed on the user list recently.
> The cause of the problem is that the check for duplicated columns happens in
> PTableImpl after MetaDataClient completes the server-side createTable.
> The simple way to fix it is to add a similar check in MetaDataClient before
> createTable is called.
> Possibly someone can suggest a more elegant way to fix it?
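As a rough illustration of the proposed fix, here is a standalone sketch of such a client-side check. The class and method names are hypothetical stand-ins, not Phoenix's actual MetaDataClient code:
{noformat}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateColumnCheck {
    // Hypothetical stand-in for the validation MetaDataClient would run
    // before the createTable call ever reaches the server.
    static void checkNoDuplicateColumns(List<String> columnNames) {
        Set<String> seen = new HashSet<>();
        for (String name : columnNames) {
            if (!seen.add(name)) {
                throw new IllegalArgumentException(
                        "Duplicate column in definition: " + name);
            }
        }
    }

    public static void main(String[] args) {
        // The include list from the repro above: t3 appears twice,
        // so this throws before anything touches the catalog.
        checkNoDuplicateColumns(Arrays.asList("t1", "t3", "t3"));
    }
}
{noformat}
Failing on the client this way means the server never sees the invalid definition, so the catalog is never left half-updated.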