Hi Noam,

Can you share the query that produces this error? Also, when you say you change the criteria to retrieve less data, do you mean that you're fetching fewer rows?
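The trace goes through ScanRegionObserver.getTopNScanner and replaceArrayIndexElement, which suggests an ORDER BY combined with an array-element reference. A sketch of the kind of details that would help (table and column names below are hypothetical, substitute your actual DDL and statement):

```sql
-- Hypothetical DDL: a table with an ARRAY column, as described.
CREATE TABLE CEM_CDR_SMS (
    ID     BIGINT NOT NULL PRIMARY KEY,
    EVENTS VARCHAR ARRAY
);

-- The exact failing SELECT, including any ORDER BY / LIMIT, e.g.:
SELECT EVENTS[1]
FROM CEM_CDR_SMS
WHERE ID > 83689
ORDER BY ID
LIMIT 1000;
```

With the real DDL and query it will be easier to tell whether this matches a known array-serialization issue.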

Bulvik, Noam wrote:
I am using Phoenix 4.5.2 and the data in my table is stored in an ARRAY column.

When I issue a query, it sometimes fails with the error below. If I
change the criteria so the query retrieves less data, I get results
without problems, so the data is not corrupted.

Setting a LIMIT on the query does not help; only narrowing the criteria
does.

Any idea which parameter I need to change in order to prevent this from
happening?

Thanks.

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: CEM_CDR_SMS,83689\x002926453\x00Undefined\x00\x80\x00\x01W=\xB9\x15\x08\x00\x00\x00\x00SMSTP_DELIVER_WRONG_TRANSIT,1479819771390.5720714e04cf18b12c7eaf09b44ed145.: null
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:327)
    at org.apache.phoenix.iterate.RegionScannerResultIterator.next(RegionScannerResultIterator.java:50)
    at org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:240)
    at org.apache.phoenix.iterate.OrderedResultIterator.next(OrderedResultIterator.java:193)
    at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:239)
    at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:220)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1308)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1663)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1702)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1303)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2124)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException
    at org.apache.phoenix.schema.KeyValueSchema.writeVarLengthField(KeyValueSchema.java:152)
    at org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:118)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.replaceArrayIndexElement(BaseScannerRegionObserver.java:386)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:310)
    ... 18 more [SQL State=08000, DB Errorcode=101]

