[ https://issues.apache.org/jira/browse/HBASE-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16013693#comment-16013693 ]

Guanghao Zhang commented on HBASE-15616:
----------------------------------------

bq. null qualifier is allowed for Put/Get/Scan/Append, users may have used null 
qualifier in these operations, so also need to allow null qualifier for 
checkAndMutate and increment
+1 for this. I ran into this problem recently. For the put operation, if you 
pass a null qualifier, it is converted to a new byte[0] in the protobuf 
message, so we should keep the same behavior for the checkAndMutate operation, 
too (see the sketch below). [~anoop.hbase] [~stack] Any more concerns? Thanks.
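A minimal sketch of that null-to-empty conversion on the client side; the 
helper name here is illustrative, not the actual HBase code path:
{code}
// Illustrative sketch only: treat a null qualifier the same way Put does,
// i.e. as an empty byte array, before it is wrapped into the protobuf
// Condition. This avoids the NPE thrown by LiteralByteString.size().
private static byte[] normalizeQualifier(byte[] qualifier) {
  return qualifier == null ? new byte[0] : qualifier;
}
{code}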

> CheckAndMutate will encounter NPE if qualifier to check is null
> ---------------------------------------------------------------
>
>                 Key: HBASE-15616
>                 URL: https://issues.apache.org/jira/browse/HBASE-15616
>             Project: HBase
>          Issue Type: Bug
>          Components: Client
>    Affects Versions: 2.0.0
>            Reporter: Jianwei Cui
>            Assignee: Jianwei Cui
>         Attachments: HBASE-15616-v1.patch, HBASE-15616-v2.patch
>
>
> If the qualifier to check is null, checkAndMutate/checkAndPut/checkAndDelete 
> will encounter an NPE.
> The test code:
> {code}
> table.checkAndPut(row, family, null, Bytes.toBytes(0),
>     new Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}
> The exception:
> {code}
> Exception in thread "main" 
> org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after 
> attempts=3, exceptions:
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:31 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
> Fri Apr 08 15:51:32 CST 2016, 
> RpcRetryingCaller{globalStartTime=1460101891615, pause=100, maxAttempts=3}, 
> java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:120)
>       at org.apache.hadoop.hbase.client.HTable.checkAndPut(HTable.java:772)
>       at ...
> Caused by: java.io.IOException: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:341)
>       at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:768)
>       at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:755)
>       at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:99)
>       ... 2 more
> Caused by: com.google.protobuf.ServiceException: 
> java.lang.NullPointerException
>       at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:239)
>       at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
>       at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.mutate(ClientProtos.java:35252)
>       at org.apache.hadoop.hbase.client.HTable$7.call(HTable.java:765)
>       ... 4 more
> Caused by: java.lang.NullPointerException
>       at com.google.protobuf.LiteralByteString.size(LiteralByteString.java:76)
>       at 
> com.google.protobuf.CodedOutputStream.computeBytesSizeNoTag(CodedOutputStream.java:767)
>       at 
> com.google.protobuf.CodedOutputStream.computeBytesSize(CodedOutputStream.java:539)
>       at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$Condition.getSerializedSize(ClientProtos.java:7483)
>       at 
> com.google.protobuf.CodedOutputStream.computeMessageSizeNoTag(CodedOutputStream.java:749)
>       at 
> com.google.protobuf.CodedOutputStream.computeMessageSize(CodedOutputStream.java:530)
>       at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MutateRequest.getSerializedSize(ClientProtos.java:12431)
>       at 
> org.apache.hadoop.hbase.ipc.IPCUtil.getTotalSizeWhenWrittenDelimited(IPCUtil.java:311)
>       at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.writeRequest(AsyncRpcChannel.java:409)
>       at 
> org.apache.hadoop.hbase.ipc.AsyncRpcChannel.callMethod(AsyncRpcChannel.java:333)
>       at 
> org.apache.hadoop.hbase.ipc.AsyncRpcClient.call(AsyncRpcClient.java:245)
>       at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
>       ... 7 more
> {code}
> The reason is that {{LiteralByteString.size()}} throws an NPE if the wrapped 
> byte array is null. It is possible to invoke {{put}} and {{checkAndMutate}} on 
> the same column, and because a null qualifier is allowed for {{Put}}, users 
> may be confused if a null qualifier is not allowed for {{checkAndMutate}}. We 
> could also convert a null qualifier to an empty byte array for 
> {{checkAndMutate}} on the client side. Discussions and suggestions are welcome.
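> As an interim caller-side workaround, the same conversion can be applied 
> before the call by passing the existing {{HConstants.EMPTY_BYTE_ARRAY}} 
> constant instead of null; a sketch:
> {code}
> // Sketch of the caller-side workaround: pass an explicit empty qualifier
> // instead of null so the protobuf Condition never wraps a null byte array.
> // This matches how Put already stores a null qualifier (as new byte[0]).
> byte[] emptyQualifier = HConstants.EMPTY_BYTE_ARRAY;
> table.checkAndPut(row, family, emptyQualifier, Bytes.toBytes(0),
>     new Put(row).addColumn(family, null, Bytes.toBytes(1)));
> {code}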



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
