[ 
https://issues.apache.org/jira/browse/PHOENIX-7280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rushabh Shah updated PHOENIX-7280:
----------------------------------
    Description: 
Test failure: 
https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1778/56/testReport/junit/org.apache.phoenix.end2end/ViewMetadataIT/testViewAndTableAndDropCascadeWithIndexes/
The test does the following (a rough DDL sketch of these steps appears after the list):
1. Create a data table.
2. Create 2 views on the data table.
3. Create 3 indexes: one on the data table and one on each of the 2 views.
4. Drop the data table with the CASCADE option.
5. Run the drop child views task.
6. Validate that view1 and view2 no longer exist.
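
For context, here is a rough sketch of the DDL sequence in Phoenix SQL over JDBC. The table, view, and index names and the connection URL are hypothetical; they are not the identifiers ViewMetadataIT actually uses.

{code}
// Rough sketch of the scenario above; names and the JDBC URL are hypothetical.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DropCascadeScenarioSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE T (ID INTEGER PRIMARY KEY, V1 VARCHAR)");            // step 1
            stmt.execute("CREATE VIEW VIEW1 (V2 VARCHAR) AS SELECT * FROM T WHERE V1 = 'a'"); // step 2
            stmt.execute("CREATE VIEW VIEW2 (V2 VARCHAR) AS SELECT * FROM T WHERE V1 = 'b'");
            stmt.execute("CREATE INDEX IDX_T ON T (V1)");                                   // step 3
            stmt.execute("CREATE INDEX IDX_VIEW1 ON VIEW1 (V2)");
            stmt.execute("CREATE INDEX IDX_VIEW2 ON VIEW2 (V2)");
            stmt.execute("DROP TABLE T CASCADE");                                           // step 4
            // step 5: the drop child views task runs server-side and drops VIEW1/VIEW2
            // step 6: the test asserts that VIEW1 and VIEW2 no longer exist
        }
    }
}
{code}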

The test fails in step 5 while dropping view2.

It fails with the following error while making a getTable call on the base table:
{noformat}
2024-03-15T11:58:51,682 ERROR [RpcServer.Metadata.Fifo.handler=3,queue=0,port=61097] coprocessor.MetaDataEndpointImpl(715): getTable failed
java.lang.IllegalArgumentException: offset (0) must be < array length (0)
        at org.apache.hbase.thirdparty.com.google.common.base.Preconditions.checkArgument(Preconditions.java:302) ~[hbase-shaded-miscellaneous-4.1.5.jar:4.1.5]
        at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:408) ~[hbase-common-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.hadoop.hbase.TableName.valueOf(TableName.java:395) ~[hbase-common-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:675) ~[classes/:?]
        at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17524) ~[classes/:?]
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7930) ~[hbase-server-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2535) ~[hbase-server-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2509) ~[hbase-server-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45014) ~[hbase-protocol-shaded-2.5.7-hadoop3.jar:2.5.7-hadoop3]
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415) ~[hbase-server-2.5.7-hadoop3.jar:?]
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) ~[hbase-server-2.5.7-hadoop3.jar:?]
        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102) ~[hbase-server-2.5.7-hadoop3.jar:?]
        at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82) ~[hbase-server-2.5.7-hadoop3.jar:?]
{noformat}

The base table was already dropped in step 4, and MetaDataEndpointImpl (MDEI) caches a Deleted Table Marker for it. See [here|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2829-L2831] for more details.

{code}
long currentTime = MetaDataUtil.getClientTimeStamp(tableMetadata);
for (ImmutableBytesPtr ckey : invalidateList) {
    metaDataCache.put(ckey, newDeletedTableMarker(currentTime));
}
{code}
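
As a generic illustration (plain Java, not Phoenix code) of what this means for later lookups: the cache entry for the dropped table is not removed but replaced with a marker object, so a subsequent lookup of the dropped parent returns something non-null.

{code}
// Generic illustration only; this is not Phoenix code. It shows why a later
// lookup of the dropped parent returns a non-null "deleted" marker instead of null.
import java.util.HashMap;
import java.util.Map;

public class DeletedMarkerSketch {
    private static final Object DELETED_TABLE_MARKER = new Object();

    public static void main(String[] args) {
        Map<String, Object> metaDataCache = new HashMap<>();
        metaDataCache.put("SCHEMA.T", new Object()); // cached base table entry

        // DROP TABLE ... CASCADE: every invalidated key gets the marker, not a removal.
        for (String key : new String[] {"SCHEMA.T", "SCHEMA.VIEW1", "SCHEMA.VIEW2"}) {
            metaDataCache.put(key, DELETED_TABLE_MARKER);
        }

        // A later lookup of the parent is non-null, so "parentTable == null" checks are skipped.
        System.out.println(metaDataCache.get("SCHEMA.T") != null); // true
    }
}
{code}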

DeletedTableMarker is an empty PTable object. See 
[here|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1870-L1881]
 for the definition.

{code}
    private static PTable newDeletedTableMarker(long timestamp) {
        try {
            return new PTableImpl.Builder()
                    .setType(PTableType.TABLE)
                    .setTimeStamp(timestamp)
                    .setPkColumns(Collections.<PColumn>emptyList())
                    .setAllColumns(Collections.<PColumn>emptyList())
                    .setFamilyAttributes(Collections.<PColumnFamily>emptyList())
                    .setRowKeySchema(RowKeySchema.EMPTY_SCHEMA)
                    .setIndexes(Collections.<PTable>emptyList())
                    .setPhysicalNames(Collections.<PName>emptyList())
                    .build();
        } catch (SQLException e) {
            // Should never happen
            return null;
        }
    }
{code}


Now, while dropping view2, MDEI cannot find view2 in its cache, so it has to reconstruct the view by scanning SYSTEM.CATALOG (SYSCAT) on the regionserver. To do that it calls the method [getTableFromCells|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1089].

Since this is a view, its header row has LINK_TYPE = 2, which links it to the physical table, so processing goes through the code [here|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1488-L1502].
{code}
} else if (linkType == LinkType.PHYSICAL_TABLE) {
    // famName contains the logical name of the parent table.
    // We need to get the actual physical name of the table.
    PTable parentTable = null;
    if (indexType != IndexType.LOCAL) {
        parentTable = getTable(null,
                SchemaUtil.getSchemaNameFromFullName(famName.getBytes()).getBytes(StandardCharsets.UTF_8),
                SchemaUtil.getTableNameFromFullName(famName.getBytes()).getBytes(StandardCharsets.UTF_8),
                clientTimeStamp, clientVersion);
        if (parentTable == null) {
            // parentTable is not in the cache. Since famName is only the logical name,
            // we need to find the physical table.
            try (PhoenixConnection connection = QueryUtil.getConnectionOnServer(env.getConfiguration())
                    .unwrap(PhoenixConnection.class)) {
                parentTable = connection.getTableNoCache(famName.getString());
            } catch (TableNotFoundException e) {
                // It is ok to swallow this exception since this could be a view index
                // and the _IDX_ table is not there.
            }
        }
    }
{code}

Since the parent table found in the cache is an empty PTable (the deleted table marker, not null), the physical table name is set to an empty string [here|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1526-L1531].
{code}
} else {
    String parentPhysicalTableName = parentTable.getPhysicalName().getString();
    physicalTables.add(PNameFactory.newName(parentPhysicalTableName));
    setPhysicalName = true;
    parentLogicalName = SchemaUtil.getTableName(parentTable.getSchemaName(), parentTable.getTableName());
}
{code}

Later on, the getTable call fails [here|https://github.com/apache/phoenix/blob/PHOENIX-6883-feature/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L675] because the empty physical table name is passed to TableName.valueOf, producing the IllegalArgumentException shown in the stack trace above.
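
A minimal sketch of the final failure, assuming the empty physical name from the deleted-table marker reaches HBase's TableName.valueOf as an empty byte array (which is what the exception message in the stack trace indicates):

{code}
// Minimal reproduction sketch of the IllegalArgumentException above, assuming the
// empty physical table name is handed to TableName.valueOf as an empty byte array.
import org.apache.hadoop.hbase.TableName;

public class EmptyTableNameSketch {
    public static void main(String[] args) {
        byte[] emptyPhysicalName = new byte[0]; // what the deleted-table marker yields
        // Throws java.lang.IllegalArgumentException: offset (0) must be < array length (0)
        TableName.valueOf(emptyPhysicalName);
    }
}
{code}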





> Test failure: ViewMetadataIT#testViewAndTableAndDropCascadeWithIndexes
> ----------------------------------------------------------------------
>
>                 Key: PHOENIX-7280
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7280
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Rushabh Shah
>            Priority: Major
>

