[jira] [Updated] (PHOENIX-5607) Client-server backward compatibility tests
[ https://issues.apache.org/jira/browse/PHOENIX-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Guggilam updated PHOENIX-5607:
--------------------------------------
Attachment: PHOENIX-5607.4.x-HBase-1.3.v3.patch

> Client-server backward compatibility tests
> ------------------------------------------
>
> Key: PHOENIX-5607
> URL: https://issues.apache.org/jira/browse/PHOENIX-5607
> Project: Phoenix
> Issue Type: Test
> Affects Versions: 4.15.0
> Reporter: Lars Hofhansl
> Assignee: Sandeep Guggilam
> Priority: Blocker
> Labels: phoenix-hardening
> Fix For: 4.16.0
> Attachments: PHOENIX-5607.4.x-HBase-1.3.v1.patch, PHOENIX-5607.4.x-HBase-1.3.v2.patch, PHOENIX-5607.4.x-HBase-1.3.v3.patch, PHOENIX-5607.4.x-HBase-1.3.v3.patch
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Filing this as a blocker for 4.16.0.
> As we've seen with the various failed attempts to release 4.15.0, Phoenix's backward compatibility story is weak and lacks tests; in fact, there are no tests.
> We should not allow 4.16.0 to ship without improving that and without adding tests.
> [~ckulkarni], [~gjacoby], FYI, this is what we discussed.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5607) Client-server backward compatibility tests
[ https://issues.apache.org/jira/browse/PHOENIX-5607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Guggilam updated PHOENIX-5607:
--------------------------------------
Attachment: (was: PHOENIX-5607.4.x-HBase-1.3.v3.patch)

> Client-server backward compatibility tests
> ------------------------------------------
>
> Key: PHOENIX-5607
> URL: https://issues.apache.org/jira/browse/PHOENIX-5607
> Project: Phoenix
> Issue Type: Test
> Affects Versions: 4.15.0
> Reporter: Lars Hofhansl
> Assignee: Sandeep Guggilam
> Priority: Blocker
> Labels: phoenix-hardening
> Fix For: 4.16.0
> Attachments: PHOENIX-5607.4.x-HBase-1.3.v1.patch, PHOENIX-5607.4.x-HBase-1.3.v2.patch, PHOENIX-5607.4.x-HBase-1.3.v3.patch
> Time Spent: 10m
> Remaining Estimate: 0h
>
> Filing this as a blocker for 4.16.0.
> As we've seen with the various failed attempts to release 4.15.0, Phoenix's backward compatibility story is weak and lacks tests; in fact, there are no tests.
> We should not allow 4.16.0 to ship without improving that and without adding tests.
> [~ckulkarni], [~gjacoby], FYI, this is what we discussed.
[jira] [Updated] (PHOENIX-5736) Mutable global index rebuilds are incorrect after PHOENIX-5494
[ https://issues.apache.org/jira/browse/PHOENIX-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kadir OZDEMIR updated PHOENIX-5736:
-----------------------------------
Affects Version/s: (was: 4.14.3) (was: 4.15.0)

> Mutable global index rebuilds are incorrect after PHOENIX-5494
> --------------------------------------------------------------
>
> Key: PHOENIX-5736
> URL: https://issues.apache.org/jira/browse/PHOENIX-5736
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.0.0
> Reporter: Kadir OZDEMIR
> Priority: Critical
> Attachments: skipScanTest.txt
>
> PHOENIX-5494 uses skip scans to improve write performance for tables with indexes. Before this jira, a separate scanner was opened for each data table mutation to read all versions and delete markers for the row to be mutated. With this jira, a single scanner is opened using a raw scan with a skip scan filter to read all versions and delete markers of all the rows in a batch. Reading existing data table rows is required to generate index updates.
> However, I have discovered that a raw scan with a skip scan filter does not return all raw versions. This means that after PHOENIX-5494, index rebuilds for global indexes will not be correct.
[jira] [Created] (PHOENIX-5738) Local index DDL failure leaves behind mutations
Siddhi Mehta created PHOENIX-5738:
----------------------------------
Summary: Local index DDL failure leaves behind mutations
Key: PHOENIX-5738
URL: https://issues.apache.org/jira/browse/PHOENIX-5738
Project: Phoenix
Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Siddhi Mehta

Steps to reproduce:
# create table example (id integer not null, fn varchar, "ln" varchar constraint pk primary key(id)) DEFAULT_COLUMN_FAMILY='F'
# create local index my_idx on example (fn) DEFAULT_COLUMN_FAMILY='F'
# The index creation fails with 'Default column family not allowed on VIEW or shared INDEX.' If you then do a conn.commit() and look at the mutations on the connection, you will see mutations left behind.
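The leftover-mutation symptom can be modeled without a Phoenix cluster. The sketch below is a toy, Phoenix-free simulation (all class and method names are invented for illustration, not the Phoenix client API): a connection buffers mutations, a failing DDL leaves its partial bookkeeping in the buffer, and a later commit() flushes it anyway.

```python
# Toy simulation of PHOENIX-5738: a failed DDL leaves mutations buffered on
# the connection. Names here are invented; this is not the Phoenix client API.

class ToyConnection:
    def __init__(self):
        self.pending_mutations = []  # uncommitted client-side mutations

    def execute_ddl(self, statement):
        # Queue bookkeeping state first, then validate -- mimicking a code
        # path that mutates client state before the server rejects the DDL.
        self.pending_mutations.append(statement)
        if "DEFAULT_COLUMN_FAMILY" in statement and "LOCAL INDEX" in statement.upper():
            raise ValueError(
                "Default column family not allowed on VIEW or shared INDEX.")

    def commit(self):
        flushed, self.pending_mutations = self.pending_mutations, []
        return flushed

conn = ToyConnection()
try:
    conn.execute_ddl(
        "CREATE LOCAL INDEX my_idx ON example (fn) DEFAULT_COLUMN_FAMILY='F'")
except ValueError:
    pass

# The failed DDL left a mutation behind; a later commit() would flush it.
leftover = len(conn.pending_mutations)
```

The fix direction implied by the report is to roll the buffered state back when the DDL fails, so a subsequent commit() has nothing stale to send.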
[jira] [Assigned] (PHOENIX-5317) Upserting rows into child views with pk fails when the base view has an index on it.
[ https://issues.apache.org/jira/browse/PHOENIX-5317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Guggilam reassigned PHOENIX-5317:
-----------------------------------------
Assignee: Sandeep Guggilam

> Upserting rows into child views with pk fails when the base view has an index on it.
> ------------------------------------------------------------------------------------
>
> Key: PHOENIX-5317
> URL: https://issues.apache.org/jira/browse/PHOENIX-5317
> Project: Phoenix
> Issue Type: Sub-task
> Affects Versions: 4.13.0, 4.14.1
> Reporter: Jacob Isaac
> Assignee: Sandeep Guggilam
> Priority: Major
> Attachments: PHOENIX-5137-TestFailure.txt
>
> Steps to reproduce -
> 1. Create the base table, base/global view, and index using a non-tenanted connection:
> CREATE TABLE IF NOT EXISTS TEST.BASETABLE (
>     TENANT_ID CHAR(15) NOT NULL,
>     KEY_PREFIX CHAR(3) NOT NULL,
>     CREATED_DATE DATE,
>     CREATED_BY CHAR(15),
>     SYSTEM_MODSTAMP DATE
>     CONSTRAINT PK PRIMARY KEY (
>         TENANT_ID,
>         KEY_PREFIX
>     )
> ) VERSIONS=1, MULTI_TENANT=true, IMMUTABLE_ROWS=TRUE, REPLICATION_SCOPE=1;
> CREATE VIEW IF NOT EXISTS TEST.MY_GLOBAL_VIEW (
>     TEXT1 VARCHAR NOT NULL,
>     INT1 BIGINT NOT NULL,
>     DOUBLE1 DECIMAL(12, 3),
>     IS_BOOLEAN BOOLEAN,
>     RELATIONSHIP_ID CHAR(15),
>     TEXT_READ_ONLY VARCHAR,
>     DATE_TIME1 DATE,
>     JSON1 VARCHAR,
>     IP_START_ADDRESS VARCHAR
>     CONSTRAINT PKVIEW PRIMARY KEY (
>         TEXT1, INT1
>     )
> )
> AS SELECT * FROM TEST.BASETABLE WHERE KEY_PREFIX = '0CY';
> CREATE INDEX IF NOT EXISTS TEST_MY_GLOBAL_VIEW_SEC_INDEX
>     ON TEST.MY_GLOBAL_VIEW (TEXT1, INT1)
>     INCLUDE (CREATED_BY, RELATIONSHIP_ID, JSON1, DOUBLE1, IS_BOOLEAN, IP_START_ADDRESS, CREATED_DATE, SYSTEM_MODSTAMP, TEXT_READ_ONLY);
> 2. Create a child view using a tenant-owned connection:
> CREATE VIEW IF NOT EXISTS TEST."z01" (COL1 VARCHAR, COL2 VARCHAR, COL3 VARCHAR, COL4 VARCHAR CONSTRAINT PK PRIMARY KEY (COL1, COL2, COL3, COL4)) AS SELECT * FROM TEST.MY_GLOBAL_VIEW;
> 3. Upsert into the child view:
> UPSERT INTO TEST."z01" (DATE_TIME1, INT1, TEXT1, COL1, COL2, COL3, COL4) VALUES (TO_DATE('2017-10-16 22:00:00', 'yyyy-MM-dd HH:mm:ss'), 10, 'z', '8', 'z', 'z', 'z');
> The following exception is thrown -
> java.lang.IllegalArgumentException
>     at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
>     at com.google.common.collect.Lists.computeArrayListCapacity(Lists.java:105)
>     at com.google.common.collect.Lists.newArrayListWithExpectedSize(Lists.java:195)
>     at org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:424)
>     at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
>     at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1176)
>     at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:303)
>     at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:519)
>     at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:501)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:941)
>     at org.apache.phoenix.execute.MutationState.send(MutationState.java:1387)
>     at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1228)
>     at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:666)
>     at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:662)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:662)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:399)
>     at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>     at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1775)
>     at sqlline.Commands.execute(Commands.java:822)
>     at sqlline.Commands.sql(Commands.java:732)
>     at sqlline.SqlLine.dispatch(SqlLine.java:807)
>     at sqlline.SqlLine.begin(SqlLine.java:681)
>     at sqlline.SqlLine.start(SqlLine.java:398)
>     at sqlline.SqlLine.main(SqlLine.java:292)
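One plausible reading of the trace above (hedged: the actual root cause lives in IndexMaintainer's constructor and is not shown here): Guava's Lists.newArrayListWithExpectedSize rejects a negative expected size via Preconditions.checkArgument, so IndexMaintainer appears to compute a negative size for this child-view/index combination. A Python sketch mirroring Guava's capacity check, for reference:

```python
def compute_array_list_capacity(arr_size: int) -> int:
    # Mirrors Guava's Lists.computeArrayListCapacity: a negative estimate
    # is rejected (the IllegalArgumentException in the stack trace above);
    # otherwise the estimate is padded (5 + size + size/10) and capped.
    if arr_size < 0:
        raise ValueError(f"arr_size cannot be negative but was: {arr_size}")
    return min(5 + arr_size + arr_size // 10, 2**31 - 1)

# A valid estimate is padded...
padded = compute_array_list_capacity(10)   # 5 + 10 + 1
```

Passing any negative value, e.g. compute_array_list_capacity(-1), raises immediately, which is what surfaces as IllegalArgumentException on line 424 of IndexMaintainer.java.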
[jira] [Created] (PHOENIX-5737) Hadoop QA run says no tests even though there are added IT tests
Sandeep Guggilam created PHOENIX-5737:
--------------------------------------
Summary: Hadoop QA run says no tests even though there are added IT tests
Key: PHOENIX-5737
URL: https://issues.apache.org/jira/browse/PHOENIX-5737
Project: Phoenix
Issue Type: Bug
Affects Versions: 5.0.0
Reporter: Sandeep Guggilam
Assignee: Sandeep Guggilam
Fix For: 5.1.0

Even though there are ITs added in the patch, the Hadoop QA run complains that no new tests were added.
[jira] [Updated] (PHOENIX-4521) Allow Pherf scenario to define per table max allowed query duration after which thread is interrupted
[ https://issues.apache.org/jira/browse/PHOENIX-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christine Feng updated PHOENIX-4521:
------------------------------------
Description:
Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
PLAN
# Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
# Implement a timeout for the query execution stage based on each table's timeoutDuration
## Serial execution: each thread should be interrupted after exceeding timeoutDuration
## Parallel execution: all threads should be interrupted after one thread exceeds timeoutDuration
# Test

was:
Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
PLAN
# Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
# Implement a timeout for the query execution stage based on each table's timeoutDuration
# Test

> Allow Pherf scenario to define per table max allowed query duration after which thread is interrupted
> -----------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-4521
> URL: https://issues.apache.org/jira/browse/PHOENIX-4521
> Project: Phoenix
> Issue Type: Improvement
> Reporter: James R. Taylor
> Assignee: Christine Feng
> Priority: Major
> Labels: phoenix-hardening
>
> Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
> PLAN
> # Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
> # Implement a timeout for the query execution stage based on each table's timeoutDuration
> ## Serial execution: each thread should be interrupted after exceeding timeoutDuration
> ## Parallel execution: all threads should be interrupted after one thread exceeds timeoutDuration
> # Test
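The serial-execution interruption in the plan above can be sketched as follows. This is illustrative only: Pherf itself is Java, its scenario/XML plumbing is not shown, and all names except timeoutDuration are invented; it mimics "interrupt the thread after the per-table timeout" by abandoning the worker once the deadline passes.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_query_with_timeout(query_fn, timeout_duration_s):
    """Run one query on a worker thread; give up once timeout_duration_s
    elapses, mimicking a client interrupting a long-running query thread."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(query_fn)
    try:
        return ("completed", future.result(timeout=timeout_duration_s))
    except FutureTimeout:
        return ("timed_out", None)
    finally:
        # Don't block on an abandoned worker; let it drain in the background.
        pool.shutdown(wait=False)

# A fast query finishes well inside its table's timeoutDuration...
fast = run_query_with_timeout(lambda: "42 rows", timeout_duration_s=2.0)
# ...while a slow one is abandoned once the timeout elapses.
slow = run_query_with_timeout(lambda: time.sleep(0.5), timeout_duration_s=0.05)
```

Note that Python (like Java) cannot forcibly kill the worker; the caller stops waiting, which is also all that Thread.interrupt() guarantees for a non-cooperative JDBC call.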
[jira] [Updated] (PHOENIX-5736) Mutable global index rebuilds are incorrect after PHOENIX-5494
[ https://issues.apache.org/jira/browse/PHOENIX-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kadir OZDEMIR updated PHOENIX-5736:
-----------------------------------
Attachment: skipScanTest.txt

> Mutable global index rebuilds are incorrect after PHOENIX-5494
> --------------------------------------------------------------
>
> Key: PHOENIX-5736
> URL: https://issues.apache.org/jira/browse/PHOENIX-5736
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 5.0.0, 4.15.0, 4.14.3
> Reporter: Kadir OZDEMIR
> Priority: Critical
> Attachments: skipScanTest.txt
>
> PHOENIX-5494 uses skip scans to improve write performance for tables with indexes. Before this jira, a separate scanner was opened for each data table mutation to read all versions and delete markers for the row to be mutated. With this jira, a single scanner is opened using a raw scan with a skip scan filter to read all versions and delete markers of all the rows in a batch. Reading existing data table rows is required to generate index updates.
> However, I have discovered that a raw scan with a skip scan filter does not return all raw versions. This means that after PHOENIX-5494, index rebuilds for global indexes will not be correct.
[jira] [Updated] (PHOENIX-5735) IndexTool's inline verification should not verify rows beyond max lookback age
[ https://issues.apache.org/jira/browse/PHOENIX-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-5735:
------------------------------------
Fix Version/s: 4.15.1, 5.1.0

> IndexTool's inline verification should not verify rows beyond max lookback age
> ------------------------------------------------------------------------------
>
> Key: PHOENIX-5735
> URL: https://issues.apache.org/jira/browse/PHOENIX-5735
> Project: Phoenix
> Issue Type: Improvement
> Reporter: Swaroopa Kadam
> Priority: Major
> Fix For: 5.1.0, 4.15.1
>
> IndexTool's inline verification should not verify rows beyond max lookback age.
> Similar to PHOENIX-5734.
[jira] [Created] (PHOENIX-5736) Mutable global index rebuilds are incorrect after PHOENIX-5494
Kadir OZDEMIR created PHOENIX-5736:
-----------------------------------
Summary: Mutable global index rebuilds are incorrect after PHOENIX-5494
Key: PHOENIX-5736
URL: https://issues.apache.org/jira/browse/PHOENIX-5736
Project: Phoenix
Issue Type: Bug
Affects Versions: 4.14.3, 5.0.0, 4.15.0
Reporter: Kadir OZDEMIR

PHOENIX-5494 uses skip scans to improve write performance for tables with indexes. Before this jira, a separate scanner was opened for each data table mutation to read all versions and delete markers for the row to be mutated. With this jira, a single scanner is opened using a raw scan with a skip scan filter to read all versions and delete markers of all the rows in a batch. Reading existing data table rows is required to generate index updates. However, I have discovered that a raw scan with a skip scan filter does not return all raw versions. This means that after PHOENIX-5494, index rebuilds for global indexes will not be correct.
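Why missing raw versions breaks a rebuild can be shown with a toy MVCC store (purely illustrative Python; HBase's actual raw-scan and SkipScanFilter semantics are far richer than this): an index rebuild needs every version and delete marker of a row, so a scan path that silently returns only the newest version per row yields incorrect index updates.

```python
# Toy MVCC store: row key -> list of (timestamp, value) cells, newest first.
# "DELETE" stands in for an HBase delete marker.
store = {
    "row1": [(300, "c"), (200, "DELETE"), (100, "a")],
    "row2": [(150, "x")],
}

def raw_scan(rows):
    # Expected behavior: a raw scan returns all versions and delete markers.
    return {r: list(store[r]) for r in rows if r in store}

def buggy_skip_scan(rows):
    # Reported behavior: combined with a skip scan filter, only the newest
    # version of each row comes back, losing history and delete markers.
    return {r: store[r][:1] for r in rows if r in store}

full = raw_scan(["row1", "row2"])
partial = buggy_skip_scan(["row1", "row2"])
# A rebuild driven by `partial` never sees row1's delete marker at ts=200,
# so it cannot emit the corresponding index delete.
```

The attached skipScanTest.txt presumably demonstrates the real scanner-level version of this divergence.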
[jira] [Created] (PHOENIX-5735) IndexTool's inline verification should not verify rows beyond max lookback age
Swaroopa Kadam created PHOENIX-5735:
------------------------------------
Summary: IndexTool's inline verification should not verify rows beyond max lookback age
Key: PHOENIX-5735
URL: https://issues.apache.org/jira/browse/PHOENIX-5735
Project: Phoenix
Issue Type: Improvement
Reporter: Swaroopa Kadam

IndexTool's inline verification should not verify rows beyond max lookback age.
Similar to PHOENIX-5734.
[jira] [Created] (PHOENIX-5734) IndexScrutinyTool should not report rows beyond maxLookBack age
Swaroopa Kadam created PHOENIX-5734:
------------------------------------
Summary: IndexScrutinyTool should not report rows beyond maxLookBack age
Key: PHOENIX-5734
URL: https://issues.apache.org/jira/browse/PHOENIX-5734
Project: Phoenix
Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam

The Index Scrutiny tool should not report a row mismatch if the row is rewritten during the run and the last remaining version is beyond the max lookback age, since that version will then be removed by compaction.
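The proposed change amounts to a timestamp check before reporting a mismatch. A minimal sketch (the real tool works on HBase cell timestamps and the cluster's configured max lookback age; the function and variable names here are illustrative):

```python
def should_report_mismatch(last_version_ts, now, max_lookback_s):
    """Report a mismatch only if the row's last version is still inside the
    max lookback window; anything older may simply have been compacted away,
    so a mismatch there is expected rather than a scrutiny failure."""
    return (now - last_version_ts) <= max_lookback_s

now = 1_000_000
# (row key, timestamp of the row's last surviving version)
mismatches = [("rowA", 999_950), ("rowB", 990_000)]
reported = [row for row, ts in mismatches
            if should_report_mismatch(ts, now, max_lookback_s=100)]
# rowB's last version is 10,000s old, far beyond the 100s window: suppressed.
```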
[jira] [Updated] (PHOENIX-4521) Allow Pherf scenario to define per table max allowed query duration after which thread is interrupted
[ https://issues.apache.org/jira/browse/PHOENIX-4521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christine Feng updated PHOENIX-4521:
------------------------------------
Description:
Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
PLAN
# Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
# Implement a timeout for the query execution stage based on each table's timeoutDuration
# Test

was:
Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
PLAN
# Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
# Implement a timeout based on each table's timeoutDuration
** Timeout each individual job? Loading schema, executing queries, etc.
** General timeout for all jobs?
# Test

> Allow Pherf scenario to define per table max allowed query duration after which thread is interrupted
> -----------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-4521
> URL: https://issues.apache.org/jira/browse/PHOENIX-4521
> Project: Phoenix
> Issue Type: Improvement
> Reporter: James R. Taylor
> Assignee: Christine Feng
> Priority: Major
> Labels: phoenix-hardening
>
> Some clients interrupt the client thread if it doesn't complete in a required amount of time. It would be good if Pherf supported setting this up so we mimic client behavior more closely, as we're theorizing this may be causing some issues.
> PLAN
> # Make necessary changes so the new timeoutDuration property is recognized and parsed correctly from the scenario .xml file (completed)
> # Implement a timeout for the query execution stage based on each table's timeoutDuration
> # Test
[jira] [Updated] (PHOENIX-5496) Ensure that we handle all server-side mutation codes on the client
[ https://issues.apache.org/jira/browse/PHOENIX-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Neha Gupta updated PHOENIX-5496:
--------------------------------
Attachment: PHOENIX-5496.v3.patch

> Ensure that we handle all server-side mutation codes on the client
> ------------------------------------------------------------------
>
> Key: PHOENIX-5496
> URL: https://issues.apache.org/jira/browse/PHOENIX-5496
> Project: Phoenix
> Issue Type: Bug
> Affects Versions: 4.15.0, 5.1.0
> Reporter: Chinmay Kulkarni
> Assignee: Neha Gupta
> Priority: Major
> Fix For: 4.15.1, 5.1.1
> Attachments: PHOENIX-5496.patch, PHOENIX-5496.v1.patch, PHOENIX-5496.v2.patch, PHOENIX-5496.v3.patch
> Time Spent: 3h
> Remaining Estimate: 0h
>
> There are many instances throughout the code where we set a certain error mutation code in the RPC callback; however, we do not handle these mutation codes on the client.
> For example:
> If the metadata rows for a tableKey are no longer in that SYSCAT region, checkTableKeyInRegion() fails, the metadata for this table is not written to SYSCAT, and [the TABLE_NOT_IN_REGION mutation code is set|https://github.com/apache/phoenix/blob/11997d48d1957cf613526f01c5ccbe2812cf095d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1785-L1790].
> This is handled for 1 retry inside [CQSI.metaDataCoprocessorExec|https://github.com/apache/phoenix/blob/11997d48d1957cf613526f01c5ccbe2812cf095d/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L1568-L1570], but if this happens again, it is returned to the client, where it falls into the default case and succeeds.
> Apart from the fact that partial metadata updates are possible, leading to orphan metadata rows in system tables, this also wrongly returns success to clients even though there is no record of that table/view being created inside Phoenix's system tables.
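The failure mode described above, an unhandled code falling through a default case into "success", suggests exhaustive, fail-closed dispatch on the client. A schematic sketch (the mutation code names come from the issue; the handler structure and exception type are invented for illustration and are not Phoenix's actual client code):

```python
class MetaDataClientError(Exception):
    """Illustrative stand-in for surfacing a server-side mutation failure."""

def handle_mutation_code(code):
    # Explicitly handle every server-side mutation code; an unrecognized
    # code must fail loudly instead of falling through to "success".
    if code == "SUCCESS":
        return "ok"
    if code == "TABLE_NOT_IN_REGION":
        # Retried once in metaDataCoprocessorExec; past that, surface it.
        raise MetaDataClientError("metadata rows not in region; retry exhausted")
    if code == "TABLE_ALREADY_EXISTS":
        raise MetaDataClientError("table already exists")
    # Fail closed: no silent default-success path for new/unknown codes.
    raise MetaDataClientError(f"unhandled mutation code: {code}")
```

The key design point is the last branch: when the server grows a new mutation code, an old client errors out rather than reporting a table as created.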
[jira] [Created] (PHOENIX-5733) new index on a table with old coproc should also use old design unless upgraded
Swaroopa Kadam created PHOENIX-5733:
------------------------------------
Summary: new index on a table with old coproc should also use old design unless upgraded
Key: PHOENIX-5733
URL: https://issues.apache.org/jira/browse/PHOENIX-5733
Project: Phoenix
Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam

Currently, if a table uses the old-design Indexer coprocessor, creating an index with the new design enabled (not recommended) will end up loading the new coprocessor on the index. This is error-prone and incorrect by design. Hence, we should create the index with the old coprocessor if the table has not yet been upgraded, even though the new design flag is enabled. Later, when ready, the pair can be upgraded together using IndexUpgradeTool.
[jira] [Created] (PHOENIX-5732) Implement starttime, endtime in IndexTool for rebuild and verification
Swaroopa Kadam created PHOENIX-5732:
------------------------------------
Summary: Implement starttime, endtime in IndexTool for rebuild and verification
Key: PHOENIX-5732
URL: https://issues.apache.org/jira/browse/PHOENIX-5732
Project: Phoenix
Issue Type: New Feature
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam
Fix For: 5.1.0, 4.15.1

IndexTool's inline verification and rebuild should be able to perform their logic for a specified time range given by the starttime and endtime parameters.
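Functionally, the new parameters bound the rebuild/verification scan by cell timestamp. A minimal sketch of the intended filtering (illustrative only; the real implementation would set the time range on the HBase Scan, whose TimeRange is a half-open interval):

```python
def cells_in_time_range(cells, starttime, endtime):
    """Keep only cells whose timestamp ts satisfies starttime <= ts < endtime,
    matching HBase's half-open scan time-range convention."""
    return [c for c in cells if starttime <= c[0] < endtime]

cells = [(100, "v1"), (250, "v2"), (400, "v3")]  # (timestamp, value)
# Only the cell at ts=250 falls in [200, 400); ts=400 is excluded.
selected = cells_in_time_range(cells, starttime=200, endtime=400)
```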
[jira] [Updated] (PHOENIX-5731) Loading bulkload hfiles should not be blocked if the upsert select is happening for a different table.
[ https://issues.apache.org/jira/browse/PHOENIX-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajeshbabu Chintaguntla updated PHOENIX-5731:
---------------------------------------------
Attachment: PHOENIX-5731.patch

> Loading bulkload hfiles should not be blocked if the upsert select is happening for a different table.
> ------------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-5731
> URL: https://issues.apache.org/jira/browse/PHOENIX-5731
> Project: Phoenix
> Issue Type: Bug
> Reporter: Rajeshbabu Chintaguntla
> Assignee: Rajeshbabu Chintaguntla
> Priority: Major
> Fix For: 5.1.0, 4.16.0
> Attachments: PHOENIX-5731.patch
>
> Currently we do not allow loading hfiles after a bulkload, to avoid deadlock cases when an upsert select is happening on the same table; but when the upsert select targets a different table, we should not block the load incremental hfiles step.
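The proposed change reduces to a per-table check: only block LoadIncrementalHFiles when an in-flight upsert select is writing into the very table being bulkloaded. A toy sketch of that check (function and variable names invented for illustration):

```python
def may_load_hfiles(bulkload_table, tables_with_active_upsert_select):
    # Block only when an upsert select is writing into the very table being
    # bulkloaded (the deadlock-prone case); other tables need not be blocked.
    return bulkload_table not in tables_with_active_upsert_select

active = {"T1"}  # an upsert select is currently writing into T1
blocked_same = may_load_hfiles("T1", active)    # same table: must wait
allowed_other = may_load_hfiles("T2", active)   # different table: proceed
```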