[ https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dev Lakhani updated HBASE-13825:
--------------------------------
    Description: 
When performing a get operation on a column family with more than 64MB of data, 
the operation fails with:

Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
        at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
        at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
        at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
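
For context, the setSizeLimit() named in the exception is part of protobuf's CodedInputStream API. In the protobuf 2.x line that HBase 1.0.x ships, the default message size limit is 64MB, which matches the threshold seen here. Below is a minimal sketch of that API only; the class name and placeholder bytes are illustrative, and in HBase the CodedInputStream appears to be created inside RpcClient while decoding the response, so application code cannot simply make this call itself.

import com.google.protobuf.CodedInputStream;
import java.io.ByteArrayInputStream;

public class SizeLimitSketch {
    public static void main(String[] args) {
        // Placeholder standing in for a serialized response larger than 64MB.
        byte[] responseBytes = new byte[0];
        CodedInputStream in = CodedInputStream.newInstance(new ByteArrayInputStream(responseBytes));
        // Raise the limit above protobuf's 64MB default before parsing, e.g. to 256MB.
        in.setSizeLimit(256 * 1024 * 1024);
        // ... a message would then be parsed from 'in' without tripping the limit ...
    }
}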

This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but that issue concerns cluster status.

Scan and put operations on the same data work fine.

Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
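
For reference, a minimal sketch of the failing access pattern, assuming an HBase 1.0.x client; the table, row, and column family names are hypothetical. The stack trace above goes through the older HTable/HTablePool path, but a plain Table.get() exercises the same client RPC.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class LargeGetRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("big_table"))) {
            // The "data" family of this row holds more than 64MB across its cells.
            Get get = new Get(Bytes.toBytes("big_row"));
            get.addFamily(Bytes.toBytes("data"));
            // Fails with InvalidProtocolBufferException: "Protocol message was too large",
            // while a Scan over the same data returns it without error.
            Result result = table.get(get);
            System.out.println("cells returned: " + result.size());
        }
    }
}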




  was:
When performing a get operation on a column family with more than 64MB of data, 
the operation fails with:

Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
        at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
        at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
        at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
        at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
        at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
        at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
        at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
        at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
        at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
        at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)

This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
that issue is related to cluster status. 

Can and put operations on the same data work fine

Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.





> Get operations on large objects fail with protocol errors
> ---------------------------------------------------------
>
>                 Key: HBASE-13825
>                 URL: https://issues.apache.org/jira/browse/HBASE-13825
>             Project: HBase
>          Issue Type: Bug
>    Affects Versions: 1.0.0, 1.0.1
>            Reporter: Dev Lakhani
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message was too large.  May be malicious.  Use CodedInputStream.setSizeLimit() to increase the size limit.
>         at org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
>         at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
>         at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
>         at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
>         at org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
>         at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
>         at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
>         at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
>         at org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747, but that issue concerns cluster status.
> Scan and put operations on the same data work fine.
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
