[
https://issues.apache.org/jira/browse/PHOENIX-5278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saksham Gangwar updated PHOENIX-5278:
-------------------------------------
Description:
We have seen scenarios like the following: a tenant-specific view is dropped,
the same tenant-specific view is recreated with new columns, and subsequent
queries against the view fail with an NPE caused by corrupt data in SYSCAT.
The view's column count changed, but the Phoenix SYSCAT table was not updated
accordingly, so every query against the view triggers a NullPointerException.
Adding this unit test will help us pin down the exact cause of the corruption
and give us confidence in this use case.
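A minimal SQL sketch of the scenario above (all table, view, and column names
are hypothetical, not taken from the original report; assumes a multi-tenant
base table and a tenant-specific connection):

```sql
-- Global connection: create a multi-tenant base table.
CREATE TABLE BASE_TABLE (
    TENANT_ID VARCHAR NOT NULL,
    ID        VARCHAR NOT NULL,
    COL1      VARCHAR
    CONSTRAINT PK PRIMARY KEY (TENANT_ID, ID)
) MULTI_TENANT = true;

-- Tenant-specific connection (TenantId set on the JDBC URL):
-- create the view, then drop it.
CREATE VIEW TENANT_VIEW AS SELECT * FROM BASE_TABLE;
DROP VIEW TENANT_VIEW;

-- Recreate the same view with an added column, then query it;
-- this is the point at which the NPE below was observed.
CREATE VIEW TENANT_VIEW (NEW_COL VARCHAR) AS SELECT * FROM BASE_TABLE;
SELECT * FROM TENANT_VIEW;
```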
Exception Stacktrace:
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: at index 50
at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
... 10 more
was:
There have been scenarios similar to: deleting a tenant-specific view,
recreating the same tenant-specific view with new columns and while querying
the query fails with NPE over syscat due to corrupt data. So the addition of
this unit test will help us further debug the exact issue of corruption and
give us confidence over this use case.
Exception Stacktrace:
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException: at index 50
at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
... 10 more
> Add unit test to make sure drop/recreate of tenant view with added columns
> doesn't corrupt syscat
> -------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-5278
> URL: https://issues.apache.org/jira/browse/PHOENIX-5278
> Project: Phoenix
> Issue Type: Bug
> Reporter: Saksham Gangwar
> Priority: Minor
>
> We have seen scenarios like the following: a tenant-specific view is dropped,
> the same tenant-specific view is recreated with new columns, and subsequent
> queries against the view fail with an NPE caused by corrupt data in SYSCAT.
> The view's column count changed, but the Phoenix SYSCAT table was not updated
> accordingly, so every query against the view triggers a NullPointerException.
> Adding this unit test will help us pin down the exact cause of the corruption
> and give us confidence in this use case.
> Exception Stacktrace:
> org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: VIEW_NAME_ABC: at index 50
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:111)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:566)
> at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6143)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3552)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3534)
> at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32496)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2213)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
> at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NullPointerException: at index 50
> at com.google.common.collect.ObjectArrays.checkElementNotNull(ObjectArrays.java:191)
> at com.google.common.collect.ImmutableList.construct(ImmutableList.java:320)
> at com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:290)
> at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:548)
> at org.apache.phoenix.schema.PTableImpl.<init>(PTableImpl.java:421)
> at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:406)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1015)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:578)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3220)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3167)
> at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:532)
> ... 10 more
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)