Hi Pariksheet,

   If possible, can you please share the error log? That helps a ton!

Regards
Ravi

On Thu, Nov 13, 2014 at 9:58 AM, Vasudevan, Ramkrishna S <
ramkrishna.s.vasude...@intel.com> wrote:

> Hi Pariksheet,
>
> 1) This seems to be an issue. Please file a JIRA along with a unit test
> that demonstrates the problem.
> The same applies to (3).
>
> 2) UPDATE STATISTICS is available from 4.2 onwards; the 4.1 parser does
> not recognize the statement, which is why you see that syntax error.
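>
> For example, once both client and server are on 4.2+, the statement you
> tried should parse as written:
>
>   UPDATE STATISTICS my_table;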
>
> Regards
> Ram
>
> -----Original Message-----
> From: Pariksheet Barapatre [mailto:pbarapa...@gmail.com]
> Sent: Thursday, November 13, 2014 4:13 PM
> To: dev@phoenix.apache.org
> Subject: Phoenix Questions
>
> Hello Guys,
>
> Hope everybody is keeping well.
>
> I have a few questions:
>
> Tested on: Phoenix 4.1, HBase 0.98.1, 7 region servers, 8 GB RS heap
>
> 1) Pig gives an error whenever I try to LOAD data from a salted Phoenix
> table:
>
> A = LOAD 'hbase://table/RAW_LOG' USING
> org.apache.phoenix.pig.PhoenixHBaseLoader('pari');
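>
> For reference, a sketch of the kind of salted DDL behind RAW_LOG (the
> schema here is illustrative and only the SALT_BUCKETS clause matters;
> 'pari' above is our ZooKeeper quorum):
>
>   CREATE TABLE RAW_LOG (
>       HOST  VARCHAR NOT NULL,
>       DT    DATE    NOT NULL,
>       BYTES BIGINT,
>       CONSTRAINT PK PRIMARY KEY (HOST, DT)
>   ) SALT_BUCKETS = 16;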
>
> 2) STATS collection is not working through SQLLine:
>
>   UPDATE STATISTICS my_table;
> Error: ERROR 601 (42P00): Syntax error. Encountered "UPDATE" at line 1,
> column 1. (state=42P00,code=601)
>
> http://phoenix.apache.org/update_statistics.html
>
> 3) We have a RAW table with ~25M rows and want to load aggregated data
> into a new table. Everything works when we use up to two aggregate
> functions, but with more it fails with the error below (a sketch of the
> query shape follows the stack trace):
> Error: org.apache.phoenix.exception.PhoenixIOException:
> org.apache.hadoop.hbase.DoNotRetryIOException:
> G_DEV_YDSP.RAW_LOG,\x0F\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1415059496937.22f088da9f0e53f6fc0ad1275e901ceb.: null
>         at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:77)
>         at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:45)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:152)
>         at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1663)
>         at org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3071)
>         at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29497)
>         at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2012)
>         at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
>         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
>         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
>         at org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
>         at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.ArrayIndexOutOfBoundsException
>         at org.apache.hadoop.hbase.util.Bytes.putBytes(Bytes.java:290)
>         at org.apache.hadoop.hbase.KeyValue.createByteArray(KeyValue.java:1031)
>         at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:639)
>         at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:567)
>         at org.apache.phoenix.util.KeyValueUtil.newKeyValue(KeyValueUtil.java:63)
>         at org.apache.phoenix.cache.aggcache.SpillManager.getAggregators(SpillManager.java:200)
>         at org.apache.phoenix.cache.aggcache.SpillManager.loadEntry(SpillManager.java:273)
>         at org.apache.phoenix.cache.aggcache.SpillableGroupByCache.cache(SpillableGroupByCache.java:231)
>         at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:432)
>         at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:161)
>         at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:134)
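>
> For reference, a sketch of the shape of the failing load (table and
> column names are illustrative; the real statement uses three or more
> aggregates over the ~25M-row RAW table):
>
>   UPSERT INTO AGG_LOG (HOST, DT, CNT, TOTAL_BYTES, MAX_BYTES)
>   SELECT HOST, DT, COUNT(*), SUM(BYTES), MAX(BYTES)
>   FROM RAW_LOG
>   GROUP BY HOST, DT;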
>
> --
> Cheers,
> Pari
>
